
Project guide

The Art of Scientific Computing: Complex Systems Project

Subject Handbook code: COMP-90072
Faculty of Science
The University of Melbourne
Semester 1, 2019
Subject co-ordinator: A/Prof. Roger Rassool

Contents

1 Project
  1.1 Software
  1.2 Computational elements
  1.3 Progression plan
2 Introduction
  2.1 Learning outcomes
3 Sandpiles and statistics
  3.1 The Bak-Tang-Wiesenfeld model
  3.2 The abelian property of sandpiles
  3.3 Practical statistics
  3.4 Characteristic functions
4 Get coding
  4.1 Setting up the sandpile
  4.2 Playing with the model
5 Extensions
  5.1 Extension: earthquake models
  5.2 Extension: lattice gas automata
6 Assessment tasks
  6.1 Tasks for your mid-semester report
  6.2 Tasks for your final report
7 Further reading

1 Project

This project will acquaint students with some of the computational elements at play in numerical simulations. To flesh out some interesting statistics (here, interesting means non-Gaussian), the project begins with a simple sandpile model, in which very large avalanches can occur much more often than our usual statistical models would predict. The project will combine simulations like these, conducted on a physical grid, summarise the results as a series of statistical averages, and then analyse those averages in kind. Students will then pursue their own extension topic, applying lessons in programming, numerical statistics and critical behaviour to answer a question that simply cannot be handled with pen-and-paper mathematics alone.

1.1 Software

The purpose of this computational project is to understand some of the coding architecture behind statistics-based simulations. To that end, we don't want to obfuscate the functional details of our sandpile programs, but we don't want to write random number generators from scratch either. We'd like to avoid some high-level languages like Matlab or Mathematica, but calling libraries within other languages is fine. We advise that you use one of:

- Python
- C or C#
- Fortran

1.2 Computational elements

To complete the project, the following numerical techniques are required:

- Random number generation and Monte Carlo methods.
- Data collection and organisation.
- Numerical integration of probability density functions.

1.3 Progression plan

This project consists of 4 steps that should be completed in order. The last of these steps should occupy about half of your semester's work.
1. Using a computer language of your choice, write a program that produces an N × N array which stores integer numbers, and a routine that can manipulate the sites in this array given an address (i, j). (A minimal sketch of this setup follows this list.)

2. Investigate ways to generate (pseudo-)random numbers, and apply what you learn to a sandpile model with grains added to random sites. Add an option to your code which drops grains with a Gaussian probability distribution (co-centred on the sandpile).

3. Write a toppling routine that searches through your sandpile appropriately and initiates avalanches when necessary. Add to this routine (or to another place in your program) something that calculates the necessary statistics of an avalanche from start to finish (avalanche lifetime, avalanche size and so on).

4. Based on your independent research, find an extension project of your own (or ask your demonstrator for ideas!). It must address a problem to which no closed-form answer exists, and you should follow your scientific curiosity down any avenues that open up. Try to stick to problems that can be modelled on an array, use random number generation, can be analysed with (non-Gaussian) statistics, and exhibit some form of critical behaviour.
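To make step 1 concrete, here is a minimal sketch in Python, assuming NumPy as the array library. The names (Sandpile, add_grain, random_site, gaussian_site) are our own illustrative choices, not part of the assessment:

    import numpy as np

    class Sandpile:
        """A minimal N x N integer grid with site-level access (step 1)."""

        def __init__(self, n, seed=None):
            self.n = n
            self.grid = np.zeros((n, n), dtype=int)  # z(i, j, 0) = 0: the table starts empty
            self.rng = np.random.default_rng(seed)   # a library PRNG; see step 2 for alternatives

        def add_grain(self, i, j):
            """Manipulate a single site given its (0-indexed) address (i, j)."""
            self.grid[i, j] += 1

        def random_site(self):
            """A uniformly random drop site."""
            return self.rng.integers(0, self.n, size=2)

        def gaussian_site(self, sigma):
            """A drop site drawn from a Gaussian co-centred on the table.
            Samples that land off the table are redrawn (one possible convention)."""
            while True:
                i, j = np.rint(self.rng.normal(self.n / 2, sigma, size=2)).astype(int)
                if 0 <= i < self.n and 0 <= j < self.n:
                    return i, j

    pile = Sandpile(50, seed=1)
    i, j = pile.random_site()
    pile.add_grain(i, j)

The off-table redraw in gaussian_site is only one possible convention; discarding off-table drops, or clipping them to the nearest edge, would subtly change the driving statistics, and whichever choice you make is worth a sentence in your report.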
2 Introduction

Intuitively, we like to think that catastrophic events have equally catastrophic causes. In physics terms, we might postulate some unusual circumstances that perturb the energy of the system in proportion to the resulting catastrophe. An easy example to think about is the extinction of the dinosaurs – it was a very large and high-energy meteorite that impacted the Earth and set off a catastrophic chain of events. Or in more recent times, we might look to the Great Depression, traced back to the Wall Street crash in October 1929. There are many examples of big events with big causes, but we should not be led to believe that this is the only way dramatic change can occur. Often we will be interested in large dissipative systems, usually with many interconnected components. In systems like this, we will frequently find that system-level change hinges on the drop of a pin or, as we will see in this prac, on a single grain of sand.

The laws of physics are well defined and absolute. A naive physicist might end the story there: just write down the Hamiltonian or Lagrangian and you're set! Unfortunately, for systems with huge numbers of degrees of freedom and many interacting components, the equations of motion become computationally intractable. Solving for the exact dynamics might take longer than the human race has left on Earth. As a result, we need heuristic theories that allow us to study the dynamics of complex systems. One such theory is that of self-organising criticality.

Self-organising criticality was first proposed in 1987 by Bak, Tang and Wiesenfeld. The theory states that a large and complex system with short-range interactions naturally evolves into a critical state. What does this mean? Well, there are different definitions, but to put it simply, a small perturbation to an element of a system in a critical state will result in a response that can affect any number of elements in the system. To put it even more simply, the same mechanism that leads to a minor response can also lead to a catastrophic response. For a system to 'naturally evolve' into this state, it must have some slow driving mechanism, and a means of dissipation. For example, the Earth's crust consists of tectonic plates, slowly moving against each other. The microscopic components can be thought of as rigid particles, each subject to a straining force. In this case the shearing movement of tectonic plates drives the system. When the strain on one particle exceeds a critical value, the particle 'slips' and transfers its strain to its neighbouring particles. Depending on the state of the surrounding particles, this dissipation of energy could result in further particles slipping. This process, as one might imagine, can become extremely large (as in an earthquake) or may just involve a single event that slightly lowers the energy of the system.

Since we cannot study exact system dynamics, we must rely on statistical approaches to get an insight into what is happening. In driven, critical systems like the ones we will be studying, catastrophic events are often not distributed, in a probabilistic sense, as we might expect. To really get a handle on what is happening we will need to enter the world of non-equilibrium statistical mechanics – goodbye nice, normal distributions, and hello power-law behaviour and scale-free networks. Non-equilibrium statistics underlie how we think about systems that display self-organising criticality. In terms of the theory itself, one important thing to understand is the sense in which it is holistic. When a system is critical, microscopic mechanisms (local increases in stress, for example) are not responsible for the macroscopic observables of the system (think earthquake size, duration, area, etc.). In particular, the proportions of large and small events do not depend on the exact microscopic details of what is happening (where exactly local stress increases occur, for example). Consequently, one cannot analyse the 'inner workings' of a system at the smallest scale, and from there try to explain the effects occurring on a larger scale. In plain language, it is vain to hope that a detailed understanding of a part of the system will translate into system-level understanding.

Self-organising criticality as a framework has been used to study an extraordinarily diverse range of systems. It has been used to characterise, analyse and predict earthquakes, financial markets, evolution and extinction events, pulsar glitches, neuronal behaviour and much more.

2.1 Learning outcomes

1. An introduction to phenomenological/heuristic modelling in the study of complex systems.

2. Build, from the bottom up, a physical simulation. Part of this involves being technically adept (being able to write a reasonably optimised computer program and collect the data that arises), and part of this involves being physically astute – recognising which physical variables are important and which ones are not. In turn, this speaks to the ability to decide what makes this kind of model 'good' or 'bad', or at least effective or ineffective.

3. Introduce (broadly) the study of non-equilibrium statistical mechanics and specifically the non-Gaussian statistics that underlie much of it.

4. Introduce the idea of self-organised criticality and 1/f noise, and through this, the appeal of universality in dynamical systems.

5. Invite students to study some specific areas of science, like granular flow or the mechanisms that underlie earthquakes.

6. Introduce some practical sampling techniques such as Monte Carlo methods and ergodic measurements.

7. Introduce students to interdisciplinary thinking. By this, we mean being able to start off with a mathematically abstracted model of sandpiles and understand that, with a change of perspective, it can be used to think about something like earthquakes too.
8. Introduce some basic data science skills, like finding relevant data in the public domain, scraping this data from the web and manipulating it into a form that can be read by a computer program, and then analysing the modelled data with reference to the empirical data.

9. Finally, although this prac can be completed in any coding language, it lends itself to a scripting language, so it is a good chance for students to use Matlab or learn Python.

3 Sandpiles and statistics

3.1 The Bak-Tang-Wiesenfeld model

When modelling a complex system, it is often impossible to create a mathematical formalism that is both sufficiently realistic and computationally or theoretically tractable. As a result, researchers must come up with simple models that capture the important elements of the physical system in question. If a model is suitable, it is sometimes possible to extrapolate findings to inform about various observables pertaining to the original physical system. For systems exhibiting self-organising criticality, there exists a deceptively simple paradigm: the sandpile.

There are a number of different styles of sandpile, but all share the fact that their dynamics are governed by a set of very simple rules. The model we will be focusing on is called the Bak-Tang-Wiesenfeld model. It is composed of discrete time indices and finite-state cellular automata.

Consider a table surface, discretised into an N × N grid. Each site (i, j) on the table is assigned a number z(i, j, t) corresponding to the number of grains of sand present at any given time. Imagine that initially the table is empty, such that z(i, j, 0) = 0. At each timestep, a grain of sand is added to a random site δ(i, j), such that the state Z(t) of the table at any given time can be described as

    Z(t) = Σ_t Σ_(i,j) [ z(i, j, 0) + δ(i, j) ].

The sandpile will continue to grow on the table until a site reaches a critical value, marking the instability of that site. In this case, if any site reaches or exceeds z(i, j, t) = 4, the pile will topple, losing four grains of sand and distributing them to its nearest neighbours. This process is called an avalanche, the properties of which will be the study of this lab. Because we are setting four grains as the instability threshold, we say that the critical parameter is 4. The toppling operation is described below:

    z(i, j, t) = z(i, j, t) − 4          (3.1)
    z(i ± 1, j, t) = z(i ± 1, j, t) + 1  (3.2)
    z(i, j ± 1, t) = z(i, j ± 1, t) + 1. (3.3)

It is possible that a toppling operation will occur on the boundaries of the table, i.e., when i and/or j equals 1 or N. In this case, the sand that would topple to a site off the table is simply deleted from the system.

Question 1: Why is it important for the table to have finite boundaries? If we were dropping the sand in a non-random way, how would this answer change?

The system is allowed to evolve until the slow driving mechanism of adding sand is balanced by the dissipation mechanism of avalanching and sand falling off the table. That is, we let the system run until we reach a statistically stationary state.

Question 2: Explain the difference between a statistically stationary state and an equilibrium state. Is the sandpile in equilibrium? In terms of system observables, how might we characterise this state?

Avalanches are then recorded and statistics are built up about the distributions of their various observables. To quantify an avalanche event, there are four main observables (a sketch of a routine that measures them appears at the end of this section):

1. The avalanche size S. This quantity is measured by how many grains of sand are displaced in a single avalanche event.

2. The avalanche lifetime T. This is the number of timesteps it takes for an avalanche to relax the system to a critical state.

3. The avalanche area A. The number of unique sites toppled in a given avalanche. (Note that A ≠ S.)

4. The avalanche radius R. This can be measured in a number of ways, but it is essentially a measure of the distance from the original toppling site that the avalanche reaches. For the sake of consistency, we will define this as the maximum number of sites away from the initial site that the avalanche reaches.

An example avalanche is shown in the figure below. The avalanche in question can be characterised by the observables S = 16, T = 4, A = 4 and R = 2.

Figure 1: An avalanche in progress.

Question 3: Is there a difference between sampling a system over time and sampling an ensemble of systems? If so, under what circumstances? You might like to think about the ergodic hypothesis here…
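The toppling rules (3.1)-(3.3) translate directly into a search-and-relax loop. Below is one illustrative sketch in Python of driving a single avalanche to completion while recording S, T, A and R. It assumes a NumPy integer grid, topples in synchronous sweeps (one sweep counted as one timestep of the lifetime T), and measures R as a Chebyshev distance; none of these conventions is prescribed by the guide, and by the abelian property (Section 3.2) the final configuration is the same either way, though T and R depend on your definitions.

    import numpy as np

    def relax(grid, i0, j0, k=4):
        """Topple until the grid is stable; return (S, T, A, R) for the
        avalanche started by a grain dropped at 0-indexed site (i0, j0)."""
        n = grid.shape[0]
        size = 0         # S: grains displaced
        lifetime = 0     # T: synchronous sweeps until stable
        toppled = set()  # unique toppled sites, for A
        radius = 0       # R: furthest toppled site from (i0, j0)
        while True:
            unstable = np.argwhere(grid >= k)
            if len(unstable) == 0:
                return size, lifetime, len(toppled), radius
            lifetime += 1
            for i, j in unstable:
                grid[i, j] -= k
                size += k                       # each toppling displaces four grains
                toppled.add((int(i), int(j)))
                radius = max(radius, max(abs(int(i) - i0), abs(int(j) - j0)))
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < n and 0 <= nj < n:
                        grid[ni, nj] += 1       # off-table grains are simply lost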
3.2 The abelian property of sandpiles

As you might expect, the patterns and states that result from the Bak-Tang-Wiesenfeld model don't replicate the dynamics of a real sandpile. There is a great deal more physics going on in the real-world analogue, and consequently the predictions of the model are quite different. This is an important point – the mathematical sandpile is inherently different to the physical one. One of the main properties that makes the mathematical sandpile model so convenient to work with when studying self-organising criticality is its abelian property. Consider a sandpile in the configurations in Figure 2. The sandpile undergoes an avalanche, and at the next timestep there are two unstable sites. The question of which order we should topple the sites in is handled by the abelian property: it doesn't matter. In this example it is easy to see, by testing the two cases manually, that the resulting configuration is the same regardless of which site is toppled first. However, the situation can become non-trivial when toppling one site might induce further instabilities in the system that wouldn't occur if the other site were toppled first. This can be particularly challenging when trying to track the avalanche temporally as well. Thus the aim of this section is to prove the abelian property of the Bak-Tang-Wiesenfeld sandpile model, and to introduce some of the mathematical formalisms underlying it. From a broader perspective, this should demonstrate an example of the way that mathematical formalism can still be helpful in the study of complex systems, even if our usual differential equations are too complicated.

Figure 2: An avalanche in progress which raises questions about abelian cascades.

To start, let's take some of the ideas presented in the previous section and represent them more formally. Our 'table' is now represented by the object V, which is a finite subset of Z^d, where d is the dimension of our model. For any site x, we introduce a configuration function η : V → N, i.e., η(x) extracts the number of grains of sand at the position x on the table. The configuration η itself is therefore an element of N^V. Now, a configuration η can be either stable or unstable, depending on the size of any given η(x) in V at a given time. As you might expect, a stable configuration corresponds to a table with no elements greater than a critical parameter k, and an unstable configuration corresponds to one with at least one value η(x) ≥ k.

To formally write this, we need to introduce the concept of the toppling matrix. The toppling matrix V_{x,y} is an operator that stabilises an unstable site x by distributing its elements to neighbouring sites. It takes two values, corresponding to two sites x, y ∈ V, and updates according to the height configuration of V.

The toppling matrix must satisfy the following conditions: […]

Question 4: For each matrix toppling condition, explain its significance and justify its necessity.

The actual definition of the toppling matrix is given by:

1. If x ∈ V then V_{x,x} = 2d.
2. If x and y are nearest neighbours then V_{x,y} = 1.
3. Otherwise, V_{x,y} = 0.

Note that by this definition, our critical parameter k is equal to 2d, where d is the dimension of the sandpile.

Question 5: Explain and justify each equation in the definition of the toppling matrix.
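To make the definition concrete, one can enumerate the sites of V and build V_{x,y} as an ordinary matrix. The sketch below is illustrative only; the row-major site indexing x = i·N + j is our own assumption, and the function name is hypothetical:

    import numpy as np

    def toppling_matrix(n, d=2):
        """Toppling matrix for an n x n (d = 2) table, sites enumerated
        row-major as x = i * n + j. Follows the definition in the text:
        V[x, x] = 2d, V[x, y] = 1 for nearest neighbours, 0 otherwise."""
        V = np.zeros((n * n, n * n), dtype=int)
        for i in range(n):
            for j in range(n):
                x = i * n + j
                V[x, x] = 2 * d
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < n and 0 <= nj < n:
                        V[x, ni * n + nj] = 1
        return V

    V = toppling_matrix(4)
    assert (np.diag(V) == 4).all()  # critical parameter k = 2d = 4
    # Rows for edge and corner sites have fewer than 2d off-diagonal ones:
    # these are the dissipative sites, where toppled grains leave the system.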
Now that we have a rigorous and general definition for the toppling matrix, we can define the toppling operator T_x, which maps a configuration η ∈ N^V to a new configuration η′. Essentially, it chooses a site x ∈ V and alters it and its surroundings based on the value η(x), and the proximity of each site in V. Formally, this can be written as:

    (T_x η)(x) = η(x) − V_{x,x},
    (T_x η)(y) = η(y) + V_{x,y}   for y ≠ x,

whenever η(x) ≥ k, and T_x η = η otherwise.

Question 6: Show that T_x commutes for unstable configurations.

The above exercise is the first step in showing that this sandpile model is in fact abelian, and if avalanches were restricted to a single branching process, we would be done! However, an avalanche begins on an unstable configuration and ends on a stable one, with the possibility of many topplings in between. For convenience, we introduce the set Ω_V to represent all stable height configurations. Therefore, the toppling transformation T is the set of operations that maps an unstable configuration to a stable one:

    T : N^V → Ω_V.   (3.9)

Naturally, this can take multiple iterations of topplings. For example, the toppling transformation in Figure 1 would be given by

    η_{t=4} = T η_{t=0} = T_{(2,3)} T_{(1,3)} T_{(1,2)} T_{(2,2)} η_{t=0}.   (3.10)

The general toppling transformation can be represented as

    T η = T_{x_N} ··· T_{x_2} T_{x_1} η,

where N is the number of instabilities throughout an avalanche. There are important points to be made here. N must not be infinite, or the system can never again reach its self-organising critical state. This indicates the importance of boundary conditions, namely that there must exist dissipative sites, such as those on the edges, that remove sand from the system.

Now that we have the underlying mathematical formalisms and basic theoretical work down pat, the proof that the sandpile is indeed abelian is left as an exercise and test of understanding!

Question 7: Prove that no matter which order we choose to perform the toppling events in, we will always topple the same sites the same number of times during an avalanche (and thus obtain the same final configuration).

The above question should be approached in the following way. Suppose that a certain configuration η has more than one unstable site. In that situation, the order of the topplings is not fixed. Clearly, if we only topple site x and site y, the order of these two topplings doesn't matter and both orders yield the same result. (In the physics literature, this is often presented as a proof that T is well defined.) But clearly, more is needed to guarantee this. The problem is that toppling x first, say, could possibly lead to a new unstable site z, which would never have become unstable if y had been toppled first.
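Before attempting the proof, it can be reassuring to check the claim numerically; the same check makes a handy regression test for your own toppling routine. A sketch (all names are ours): stabilise the same unstable configuration twice, visiting unstable sites in two different orders, and compare the final grids.

    import numpy as np

    def topple_once(grid, i, j, k=4):
        """Apply the toppling operator T_x at site (i, j) if it is unstable."""
        n = grid.shape[0]
        if grid[i, j] < k:
            return
        grid[i, j] -= k
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni, nj] += 1

    def stabilise(grid, order, k=4):
        """Repeatedly topple unstable sites, visiting them in the given order."""
        while True:
            unstable = [tuple(s) for s in np.argwhere(grid >= k)]
            if not unstable:
                return grid
            for site in sorted(unstable, key=order):
                topple_once(grid, *site)

    rng = np.random.default_rng(0)
    start = rng.integers(0, 8, size=(20, 20))            # an arbitrary, partly unstable configuration
    a = stabilise(start.copy(), order=lambda s: s)        # row-major toppling order
    b = stabilise(start.copy(), order=lambda s: s[::-1])  # column-major toppling order
    assert np.array_equal(a, b)                           # abelian: same final configuration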
3.3 Practical statistics

In this section we will give the physicist's take on the important differences between different probability distribution functions, and then we will focus on power-law distributions to talk about scale-free behaviour, rare events and 1/f noise. The probability distribution function (pdf) of a random variable quantifies the likelihood that a particular sample of the random variable will return a value within a given range. If x is the random variable, then its pdf p(x) is defined so that p(x) dx equals the probability that the returned x value lies between x and x + dx. Normalisation guarantees that the integral of the pdf over the whole domain of x is unity.

The following exercises will invite you to work with this definition, and give you an idea of the use and abuse of probability in the world of finance, where practitioners often fail to account properly for the chance of catastrophic events. The data in these exercises was obtained from Wolfram Alpha (www.wolframalpha.com) and Index Fund Advisors (www.ifa.com).

Question 8: Let's assume that you buy into an S&P 500 index fund for a certain amount of money. In some circumstances it is appropriate to assume that your return on investment in a month's time will be normally distributed. Assume that the monthly returns for the last fifty years are independently distributed and have followed a normal distribution with mean 0.87% and standard deviation 4.33%. What is the pdf of expected percentage returns? What is the variance? Assuming that I invested $10,000, what is the chance that I have more than $10,300 a month later? What's the chance that I have between $9,000 and $10,000 a month later? Assume that in January 2009 the S&P lost roughly 11%. Under these assumptions, what's the chance that we have another month that is as bad or worse than this? In February 2009 the S&P lost another 11%. Given the assumed independence, multiply through the probabilities to estimate the likelihood that we had two straight months as bad as they were. Convert this to a 'We expect this bad luck once every n month pairs' frequency statement. When we think of other stock market crashes that have also occurred, clearly something is lost when we assume that losses are uncorrelated in the mathematical models.

Question 9: Using the same numbers as above, write a short computer program to work out what happens to the monthly returns after x months, answering the following questions. You should be thinking about histograms and Monte Carlo methods. What is the pdf of the expected return on a $10,000 investment after 2 years? Just the plot is fine here. Compare this with the pdf for 1-month returns. Assuming that I invested $10,000, what is the chance that I have more than $10,300 two years later? What's the chance that I have between $9,000 and $10,000 two years later? From October 2007 through to February 2009 (so 17 months) the S&P lost 52% in aggregate. Use your program to estimate the probability of this happening. Finally, if you bought into the S&P fund in March 2009, the return on investment after 9 months would have been a positive 53%. Assuming only Gaussian influences at work, what is the likelihood of this? Convert your answers into rough (because the time periods are not exactly a year) 'once in a … years' statements. Note how the Monte Carlo approach is handy here – if we wanted to answer these kinds of questions analytically, we would need to be thinking about Markov chains and Brownian motion, and would end up having to solve Chapman-Kolmogorov-esque equations. NB: You don't need to include your code from this section – just explain what you did.
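To give a flavour of the Monte Carlo scaffolding (you still shouldn't submit code for this section), here is a minimal sketch under Question 8's assumptions of i.i.d. Gaussian monthly returns, with illustrative names; the histogramming and the remaining probability estimates are left to you:

    import numpy as np

    MU, SIGMA = 0.0087, 0.0433    # mean and std dev of monthly returns (Question 8)

    def final_balances(months, start=10_000, trials=100_000, seed=0):
        """Compound i.i.d. Gaussian monthly returns over `months` months."""
        rng = np.random.default_rng(seed)
        returns = rng.normal(MU, SIGMA, size=(trials, months))
        return start * np.prod(1 + returns, axis=1)

    balances = final_balances(24)            # two years of monthly returns
    p_gain = np.mean(balances > 10_300)      # estimate of P(balance > $10,300)
    p_loss = np.mean((balances >= 9_000) & (balances <= 10_000))
    print(p_gain, p_loss)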
We find Gaussian or normal statistics intuitive because it's pretty common for us to see stuff in the world around us that is clustered symmetrically around a mean value, and that is exponentially suppressed the further away we get from that mean value. Because of this ubiquity, and because of their nice analytic properties, Gaussian distributions are a seductive first pass when we come to model many situations.

As you should have seen, with rare events this facile assumption often doesn't cut it. Where things go pear-shaped is usually in the tails of the distribution – the extreme events are in reality more common than the naive assumptions predict. One way to correct for this is to use a distribution with fatter tails, maybe a Cauchy distribution. We could also assume that at some point in the tail the Gaussian distribution changes to a power law. Both of these corrections take into account the relatively increased likelihood of extreme events. To see this, let's imagine two different pdfs with mean 0 and standard deviation a: a Gaussian,

    p(x) = 1/(a√(2π)) · exp(−x²/(2a²)),

and a power law,

    p(x) ∝ |x|^(−μ).

Here we have to take μ > 3 to get a sensible variance.

Question 10: By writing another Monte Carlo program or by brute-forcing the maths, plot the pdfs and then answer the following questions. Let y be the number of times we have to draw from the distribution before we get a result greater than or equal to Y. For different values of Y, expressed in terms of a sensible a value, give the resulting pdf p_Y(y). In terms of the standard deviation a in the normal distribution, how far into the tail of the power law p(x) do we need to sample before we know that the probability of more extreme x values is less than 5%? How does this compare to the 95% confidence level in the normal distribution? If you assumed that power laws more accurately modelled rare events in financial markets, how would this knowledge affect your assessment of risk?

As we can see, the power law has fatter tails, in the sense that you will sample very large magnitude events much more frequently than you would if they were governed by a Gaussian distribution. The main thing to understand about power laws is that they are the distribution where you can say that smaller (larger) events are more likely than larger (smaller) events, in a very straightforward and specific sense, and that this relative likelihood is preserved as you move through the spectrum, or as you 'stretch' or scale the statistics. Let's take p(x) = a x^(−k), so that x is the number of sites toppled in our sandpile model. What happens if we take x to cx, so that we redefine an avalanche to occur over, say, c = 2 sites? Nothing really; the relative statistics are just shifted by a constant factor:

    p(cx) = a (cx)^(−k) = c^(−k) · a x^(−k) = c^(−k) p(x).

In fact, this is one way that we can get a feel for power-law behaviour in the real world – if we see something with scaling statistics (like earthquakes!) we know that there is a power law hiding somewhere. Equally, if we're looking at financial time series and we see similar behaviour at different timescales, we should be wary of jumping straight into assumptions about normal distributions. A lot of these ideas are formalised in mathematics, physics and computer science in the systematic study of scale-free networks.
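For the Monte Carlo route in Question 10, you will need to draw samples from a power law, which standard libraries do not always provide directly. One common trick is inverse-transform sampling. The sketch below is illustrative only: it assumes a one-sided density p(x) = C x^(−μ) for x ≥ x_min, inverts its cumulative distribution by hand, and compares the frequency of events at least 5a out against a Gaussian of standard deviation a.

    import numpy as np

    def sample_power_law(mu, x_min, size, rng):
        """Inverse-transform sampling from p(x) = C * x**(-mu), x >= x_min.
        The CDF inverts to x = x_min * (1 - u)**(-1/(mu - 1)) for u ~ U(0, 1)."""
        u = rng.random(size)
        return x_min * (1 - u) ** (-1 / (mu - 1))

    rng = np.random.default_rng(42)
    a = 1.0                                    # Gaussian standard deviation
    gauss = rng.normal(0, a, 1_000_000)
    power = sample_power_law(mu=3.5, x_min=a, size=1_000_000, rng=rng)

    # Compare the chance of an event at least five standard deviations out:
    print(np.mean(np.abs(gauss) >= 5 * a))     # Gaussian: of order 1e-6
    print(np.mean(power >= 5 * a))             # power law: orders of magnitude larger

The two-sided pdfs above, including the Gaussian-core-plus-power-law-tail variant, can be sampled by mixing draws like these with rng.normal draws in the appropriate proportions.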
Finally, we come to the raison d'être of self-organising criticality – a unifying explanation of so-called '1/f noise'. 1/f noise is a bit of an umbrella term for phenomena which are distributed according to a power law with exponent −1. These phenomena typically depend heavily on the past history of the system, and are therefore predictable in some sense, but are nonetheless subject to chance and random fluctuations. From the 1960s onwards, 1/f noise has been studied fairly intensively, first in the context of metals and materials science, and then in different biological and evolutionary contexts too. Because it was appearing in a lot of different places, it was and is tempting to think that there was some underlying pattern of events that just had different physical expressions – in a philosophical sense, that there was some unifying framework that could be used to explain disparate events. Sometimes you will see this temptation referred to in the study of dynamical systems as 'universality' – the idea being that there's a class of systems that have broadly similar properties, independent of the detailed dynamics. Lest you think this is a wishy-washy proposition, it's this kind of thinking that underlies and is formalised in studies of renormalisation and phase change (where you can compare water boiling to iron magnetising, for example).

The Bak-Tang-Wiesenfeld model was proposed to provide an easy-to-understand and universal template for phenomena that could give rise to 1/f noise. Subsequent work has shown that you can map, in a precise and mathematically formal sense, systems which demonstrate 1/f noise onto the sandpile model. Depending on your opinion of mathematical pre-eminence in the world, we could say that self-organising critical systems demonstrate 1/f noise, or we could say that systems demonstrate 1/f noise because they are self-organising in some sense. As you study more physics, you might start to notice that this kind of mapping-onto-what-we-know approach is taken quite a bit – like, for example, with the Ising model in statistical physics.

Question 11: Research and briefly describe the appearance of 1/f noise in a particular context – for example in nerve cells or oxide films.

3.4 Characteristic functions

- Describe the concept of this: f(k) = ∫_{−∞}^{+∞} x^k p(x) dx. Apply it to a Gaussian pdf and to a power-law pdf (which perhaps is Gaussian for small-ish x). Talk about how to generate any of these numerically from data.
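Numerically, estimating f(k) from data is straightforward: it is just the sample mean of x^k, or, equivalently, a sum over a normalised histogram. A minimal sketch (the function names are ours) that checks both routes against a known Gaussian:

    import numpy as np

    def moment_from_samples(samples, k):
        """Estimate f(k) = ∫ x^k p(x) dx directly as the sample mean of x^k."""
        return np.mean(samples ** k)

    def moment_from_histogram(samples, k, bins=200):
        """The same estimate via a normalised histogram: sum x^k p(x) dx over bins."""
        density, edges = np.histogram(samples, bins=bins, density=True)
        centres = 0.5 * (edges[:-1] + edges[1:])
        widths = np.diff(edges)
        return np.sum(centres ** k * density * widths)

    rng = np.random.default_rng(7)
    x = rng.normal(0, 2, 500_000)
    print(moment_from_samples(x, 2), moment_from_histogram(x, 2))  # both ≈ variance = 4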
4 Get coding

The project will occur in three parts (plus an extension if you're interested and time permits). The first is setting up a computational model of the original Bak-Tang-Wiesenfeld sandpile, which was initially used to explain the self-organised-criticality-exhibiting phenomenon of '1/f noise'. This initial part will essentially reproduce results from the paper P. Bak, C. Tang and K. Wiesenfeld, 'Self-organized criticality', Phys. Rev. A 38, 364 (July 1988), while familiarising you with the model. The second part will involve playing with certain components of the model in order to change the parameters it predicts, primarily the exponents of the power-law distributions. Some of the steps we undertake will increase the correspondence between our toy model and a 'real' sandpile – it will be important to analyse the key differences.

The final section is where you will cast your net more widely. You will obtain some real data from trusted sources, and then make sure y…
