Complex Systems Studies
2.32K subscribers
1.54K photos
122 videos
114 files
4.48K links
What's up in Complexity Science?!
Check out here:

@ComplexSys

#complexity #complex_systems #networks #network_science

📨 Contact us: @carimi
“What are you thinking?” “#Entropy” “Entropy?” “Yeah, entropy.
Boris explained it. It’s why you can’t get the toothpaste back in the tube.”

Whatever Works, by Woody Allen
My talk starts in an hour at the Physics and Social Systems conference.
If you are interested in attending, go to the address below and use the following username and password:
https://www.skyroom.online/ch/iut_farhangi/physics-ss

User: physics_ss
Password: pss1399
Forwarded from دِرَنـــگ (Keivan Aghababaei Samani)
🔷 Big Data, Again

▪️ The Argentine writer Jorge Luis Borges¹, in his short story "Funes the Memorious"(*), tells of a young man who is knocked unconscious in a fall from a horse and, on coming to, finds that he has acquired an extraordinary memory: everything he sees and hears stays with him in full detail. Yet he is unable to make sense of this information; he has no power of abstraction or generalization. It is hard for him, for instance, to grasp that the term² "dog" covers many instances of dogs of different shapes and sizes. It even irritates him that the dog of this moment should bear the same name "dog" as the dog of a minute ago. He cannot process or classify information. As he himself puts it, "My memory is like a garbage heap." For this very reason he is essentially incapable of thought, because "to think is to forget differences, to generalize, to abstract." In the end Funes, probably crushed under the weight of all this accumulated information, suffocates and dies at the age of twenty-one.

▪️ In the fourth chapter of his book "Mechanics of the Mind"(**), titled "A Child of the Moment", Colin Blakemore³ takes up memory as the key to the higher functions of the mind, and along the way he too refers to the story of Funes the Memorious. Later in the chapter, Blakemore turns to the dangerous part humans play in manipulating and plundering their environment, and then speaks of humanity's "collective mind": the idea of regarding all human beings together as one great brain that can record and process information. He does not, however, see interference in the natural course of evolution as the main danger; what is more dangerous, he says, is probably that after the invention of the printing press, magnetic recording devices and computer memories for storing information, this collective mind has lost its vital ability to forget. Blakemore's view at the close of the chapter is worth noting:

"The technology that now exists, and on which everyday life in the advanced countries rests, has grown so complex that no single mind can comprehend it. It may not come to pass that man wipes himself off the face of the earth in an explosion of his own making. Nor may it come to pass that, by squandering the earth's energy resources, he freezes his own race to death. But he may well drown himself in such a flood of information that society can no longer comprehend its own cultural heritage."

1. Jorge Luis Borges (1899-1986)
2. term
3. Colin Blakemore (1944- )

* This story appears in "The Library of Babel and 23 Other Stories", translated into Persian by Kaveh Seyyed-Hosseini, Niloofar Publications.
** The book has been published in Persian by Farhang Moaser Publications, in an excellent and fluent translation by Mohammad-Reza Bateni.

@k1samani_channel
4⃣ Carlo Rovelli
Theoretical Physicist; Aix-Marseille University, in the Centre de Physique Théorique, Marseille, France; Author, Reality Is Not What It Seems

☀️ Relative Information

Everybody knows what “information” is. It is the stuff that overabounds online, that you ask the airport kiosk for when you don’t know how to get downtown, and that is stored on your USB sticks. It carries meaning. Meaning, of course, is interpreted in our heads. So, is there anything out there that is just physical, independent of our heads, which is information?

Yes. It is called “relative information.” In nature, variables are not independent; for instance, in any magnet, the two ends have opposite polarities. Knowing one amounts to knowing the other. So we can say that each end “has information” about the other. There is nothing mental in this; it is just a way of saying that there is a necessary relation between the polarities of the two ends. We say that there is "relative information" between two systems anytime the state of one is constrained by the state of the other. In this precise sense, physical systems may be said to have information about one another, with no need for a mind to play any role.
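The magnet example can be made quantitative: two perfectly anti-correlated variables share exactly one bit of relative information, while two independent ones share none. A minimal sketch, with a mutual-information helper written here for illustration (not a library call):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A magnet's two ends always carry opposite polarities: observing one
# end fully determines the other, so the ends share 1 bit of information.
magnet = [("N", "S"), ("S", "N")] * 50
print(mutual_information(magnet))       # 1 bit: each end "knows" the other

# Two independent poles, by contrast, share no information.
independent = [("N", "N"), ("N", "S"), ("S", "N"), ("S", "S")] * 25
print(mutual_information(independent))  # 0 bits
```

"Knowing one amounts to knowing the other" is precisely the statement that this quantity is maximal.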

Such "relative information" is ubiquitous in nature: the color of light carries information about the object the light has bounced off; a virus has information about the cell it may attach to; and neurons have information about one another. Since the world is a knit tangle of interacting events, it teems with relative information.

When this information is exploited for survival, extensively elaborated by our brain, and maybe coded in a language understood by a community, it becomes mental, and it acquires the semantic weight that we commonly attribute to the notion of information.

But the basic ingredient is down there in the physical world: physical correlation between distinct variables. The physical world is not a set of self-absorbed entities that do their selfish things. It is a tightly knitted net of relative information, where everybody’s state reflects somebody else’s state. We understand physical, chemical, biological, social, political, astrophysical, and cosmological systems in terms of these nets of relations, not in terms of individual behavior. Physical relative information is a powerful basic concept for describing the world. Before “energy,” “matter,” or even “entity.”

This is why saying that the physical world is just a collection of elementary particles does not capture the full story. The constraints between them create the rich web of reciprocal information.

Twenty-four centuries ago Democritus suggested that everything could be just made of atoms. But he also suggested that the atoms are “like the letters of the alphabet”: There are only twenty or so letters but, as he puts it, “It is possible for them to combine in diverse modes, in order to produce comedies or tragedies, ridiculous stories or epic poems.” So is nature: Few atoms combine to generate the phantasmagoric variety of reality. But the analogy is deeper: The atoms are like an alphabet because the way in which they are arranged is always correlated with the way other atoms are arranged. Sets of atoms carry information.

The light that arrives at our eyes carries information about the objects it has played across; the color of the sea carries information about the color of the sky above it; a cell has information about the virus attacking it; a new living being carries plenty of information because it is correlated with its parents and with its species; and you, dear reader, reading these lines, receive information about what I am thinking while writing them, that is to say, about what is happening in my mind at the moment in which I write this text. What occurs in the atoms of your brain is no longer independent of what is happening in the atoms of mine: we communicate.

The world isn’t just a mass of colliding atoms; it is also a web of correlations between sets of atoms, a network of reciprocal physical information between physical systems.
Ever sit and think to yourself: hello self. i know what we should be doing right now but what if *instead of that* we obsessively tweak the params of an edge bundling function for networkx/matplotlib??

wellp https://t.co/vCHrE5BLya

in progress, uses https://t.co/umEJeJlilG lots
Last few days (until June 5th) to submit your new abstract to #netsci2020 or to update your previous submissions. It will be a great online-only meeting, don't miss it! #netscisociety More info at https://t.co/HK6SM1Sqdf
we can model a lightning strike by finding the shortest path in a random maze, from a point at the top to the ground. To find the path, we send out a frontier through the maze, and trace it back once it reaches the ground https://t.co/CsOsQ39hwS
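The frontier idea above is just breadth-first search on a site-percolation maze. A minimal sketch, assuming a grid where each cell is independently open with probability p_open (the function and parameter names are ours):

```python
import random
from collections import deque

def lightning_path(width=30, height=30, p_open=0.7, seed=1):
    """Shortest open path from any top cell to the bottom row of a random
    maze, found by expanding a BFS frontier and tracing parents back."""
    rng = random.Random(seed)
    open_cell = [[rng.random() < p_open for _ in range(width)]
                 for _ in range(height)]
    parent = {}
    frontier = deque((0, x) for x in range(width) if open_cell[0][x])
    seen = set(frontier)
    while frontier:
        y, x = frontier.popleft()
        if y == height - 1:               # the frontier reached the ground
            path = [(y, x)]
            while path[-1] in parent:     # trace the strike back up
                path.append(parent[path[-1]])
            return path[::-1]
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < height and 0 <= nx < width \
                    and open_cell[ny][nx] and (ny, nx) not in seen:
                seen.add((ny, nx))
                parent[(ny, nx)] = (y, x)
                frontier.append((ny, nx))
    return None  # no open path: this maze does not percolate

path = lightning_path()
if path:
    print(f"strike of {len(path)} cells, from {path[0]} to {path[-1]}")
```

Because BFS expands the frontier one step at a time, the first cell it reaches in the bottom row ends a shortest path, which is what gives the traced-back "strike" its jagged, economical shape.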
💡 A forward-looking chapter by Mason Porter, "Nonlinearity + Networks: A 2020 Vision": https://t.co/eLdlD45oe8

It appears in the book "Emerging Frontiers in Nonlinear Science" (https://t.co/IBGX7bMyeA)
What makes a network complex?

In https://t.co/9r3GoZih0R
we show that many properties associated with complex networks are recovered by thresholding normally distributed data.
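The thresholding construction is easy to try. A minimal sketch, not the paper's actual pipeline (sample sizes, threshold, and variable names are our assumptions): draw i.i.d. normal data for each node, threshold the pairwise correlations, and inspect the resulting adjacency matrix for network-like structure.

```python
import numpy as np

rng = np.random.default_rng(42)
n_nodes, n_samples = 100, 50
X = rng.standard_normal((n_samples, n_nodes))

C = np.corrcoef(X, rowvar=False)      # node-by-node correlation matrix
np.fill_diagonal(C, 0.0)              # no self-loops
A = (np.abs(C) > 0.35).astype(int)    # keep only "strong" correlations

degrees = A.sum(axis=1)
triangles = np.trace(A @ A @ A) / 6   # closed triads in the thresholded graph
print("mean degree     :", degrees.mean())
print("degree variance :", degrees.var())
print("triangle count  :", triangles)
```

Even though the underlying data has no network structure at all, the thresholded graph typically shows heterogeneous degrees and an excess of triangles, which is the cautionary point of the paper.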
"Beyond Networks: The Evolution of Living Systems":

A very interesting lecture on evolution, ergodicity and predictability. From Laplace's hyper-deterministic views to Kauffman's non-ergodic universe and the adjacent possible (and more)

https://youtu.be/sTXBFT4Ptkk

https://www.aparat.com/v/PFR0A
Networked Complexity: The Case of COVID-19. June 8-11, 2020

https://www.aub.edu.lb/cams/Pages/Covid19.aspx

An online conference serving as an occasion for presentations of work in progress on the gathering of epidemiological data (its technical and ethical challenges) and on its modeling (from coarse-grained compartmental models to fine-grained agent-based models), with the urgency of COVID-19 mitigation in the air.
Networks beyond pairwise interactions: structure and dynamics

Federico Battiston, Giulia Cencetti, Iacopo Iacopini, Vito Latora, Maxime Lucas, Alice Patania, Jean-Gabriel Young, Giovanni Petri


The complexity of many biological, social and technological systems stems from the richness of the interactions among their units. Over the past decades, a great variety of complex systems has been successfully described as networks whose interacting pairs of nodes are connected by links. Yet, in face-to-face human communication, chemical reactions and ecological systems, interactions can occur in groups of three or more nodes and cannot be described in terms of simple dyads alone. Until recently, little attention has been devoted to the higher-order architecture of real complex systems. However, a mounting body of evidence is showing that taking the higher-order structure of these systems into account can greatly enhance our modeling capacities and help us to understand and predict their emerging dynamical behaviors. Here, we present a complete overview of the emerging field of networks beyond pairwise interactions. We first discuss the methods to represent higher-order interactions and give a unified presentation of the different frameworks used to describe higher-order systems, highlighting the links between the existing concepts and representations. We review the measures designed to characterize the structure of these systems and the models proposed in the literature to generate synthetic structures, such as random and growing simplicial complexes, bipartite graphs and hypergraphs. We introduce and discuss the rapidly growing research on higher-order dynamical systems and on dynamical topology. We focus on novel emergent phenomena characterizing landmark dynamical processes, such as diffusion, spreading, synchronization and games, when extended beyond pairwise interactions. We elucidate the relations between higher-order topology and dynamical properties, and conclude with a summary of empirical applications, providing an outlook on current modeling and conceptual frontiers.
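The loss incurred by forcing group interactions into dyads can be shown in a few lines. A toy sketch (the hyperedge list and helper are ours, not from the review): a hypergraph keeps group interactions intact, while the usual clique expansion into pairwise links discards which links came from the same group.

```python
from itertools import combinations

# Three observed interactions: two group events and one true dyad.
hyperedges = [
    {"a", "b", "c"},       # e.g. a three-person face-to-face conversation
    {"c", "d"},            # an ordinary pairwise interaction
    {"a", "b", "c", "d"},  # a four-way group interaction
]

def pairwise_projection(hyperedges):
    """Clique-expand each hyperedge into pairwise links."""
    return {frozenset(p)
            for e in hyperedges
            for p in combinations(sorted(e), 2)}

links = pairwise_projection(hyperedges)
print(len(hyperedges), "hyperedges ->", len(links), "pairwise links")
```

Both {a,b,c} and {a,b,c,d} project onto the link {a,b}, so from the pairwise graph alone the original group structure cannot be recovered; this is the basic motivation for higher-order representations.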
#PhD Please just drop me an email with your CV and one or two short paragraphs about your experience. Email https://t.co/GbOkb6VrBs
Advice to young scholars, Aaron Clauset

Panel 1. The Academic Job Market
Panel 2. Life / Work Balance
Panel 3. Interdisciplinary Research
Panel 4. Grants and Fundraising
The why, how, and when of representations for complex systems

Leo Torres, Ann S. Blevins, Danielle S. Bassett, Tina Eliassi-Rad


Complex systems thinking is applied to a wide variety of domains, from neuroscience to computer science and economics. The wide variety of implementations has resulted in two key challenges: the proliferation of many domain-specific strategies that are seldom revisited or questioned, and the siloing of ideas within a domain due to inconsistency of complex systems language. In this work we offer basic, domain-agnostic language in order to advance towards a more cohesive vocabulary. We use this language to evaluate each step of the complex systems analysis pipeline, beginning with the system and data collected, then moving through different mathematical formalisms for encoding the observed data (i.e. graphs, simplicial complexes, and hypergraphs), and relevant computational methods for each formalism. At each step we consider different types of "dependencies"; these are properties of the system that describe how the existence of one relation among the parts of a system may influence the existence of another relation. We discuss how dependencies may arise and how they may alter interpretation of results or the entirety of the analysis pipeline. We close with two real-world examples using coauthorship data and email communications data that illustrate how the system under study, the dependencies therein, the research question, and choice of mathematical representation influence the results. We hope this work can serve as an opportunity for reflection for experienced complexity scientists, as well as an introductory resource for new researchers.
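One concrete difference between the formalisms the abstract lists is downward closure. A hedged toy illustration (the team data and helper are ours, not the paper's): a hypergraph can record a three-author collaboration as a single hyperedge with nothing implied, while a simplicial complex must also contain every subset of that simplex as a simplex in its own right.

```python
from itertools import combinations

def simplicial_closure(hyperedges):
    """All nonempty subsets of every hyperedge (downward closure)."""
    simplices = set()
    for e in hyperedges:
        for k in range(1, len(e) + 1):
            simplices.update(frozenset(s) for s in combinations(sorted(e), k))
    return simplices

# One observed coauthor team, encoded two ways.
team = frozenset({"Ann", "Leo", "Tina"})
hypergraph = {team}                        # one 3-way interaction, nothing implied
complex_ = simplicial_closure(hypergraph)  # 3 vertices + 3 edges + 1 triangle
print(len(hypergraph), "hyperedge vs", len(complex_), "simplices")
```

The choice matters for interpretation: the simplicial encoding asserts that every pair within the team also interacted, a dependency the raw coauthorship data may or may not support.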