Epidemiology in a Hyperconnected World explores the evolving field of epidemiology in the digital age. The book emphasizes systems thinking, integrating biological, technological, and social factors to understand disease spread. Key themes include the crucial role of advanced modeling techniques (like those informed by chaos theory) and precision medicine, along with the transformative potential of open science and collaborative frameworks in outbreak response. The text also addresses the significant challenge of misinformation in a hyper-connected world and advocates for building more resilient public health systems. Ultimately, the book aims to provide a comprehensive understanding of modern epidemiology and its critical role in protecting public health in a rapidly changing world.
Sunday, April 20
Epidemiology in a Hyperconnected World
Friday, April 18
AI's Pantheistic Fallacy
“The pantheistic fallacy assumes that more data equals perfect knowledge, but AI is constrained by the blind spots in its information, leading to flawed decisions in the real world.”
AI systems often operate under the assumption that the more data they process, the better their decision-making capabilities will become. This belief, which I call the "pantheistic fallacy," rests on the notion that AI has access to all the necessary information to solve any given problem. However, this assumption is deeply flawed. While AI can process massive volumes of data with incredible speed, it can only work with what it is given—meaning that unseen gaps, biases, and blind spots in the data remain hidden. These invisible limitations often lead to flawed conclusions, as the AI makes decisions based solely on the information it can measure, while ignoring what it cannot perceive.
“AI's greatest challenge isn't processing data—it’s accounting for the 'dark matter' of unseen variables that shape our world in unpredictable ways.”
AI is exceptional at recognizing patterns in data, but it faces significant challenges when dealing with the "dark matter" of unseen variables. Just as astrophysicists grapple with the mystery of dark matter in the universe—something they know exists but cannot directly observe—AI struggles to account for the unmeasured, unquantified factors that shape outcomes in the real world. This missing information can include anything from subtle emotional nuances in human behavior to environmental or situational factors that have not been captured in a dataset. Because of this blind spot, AI-generated insights, while appearing comprehensive, often lack the depth and granularity needed to navigate complex, unpredictable scenarios.
“Believing that AI has access to all the information it needs is dangerous. The future lies in building systems that recognize what they don’t know.”
The belief that AI possesses all the necessary data is not only false but also potentially dangerous. AI systems trained on incomplete datasets or lacking key variables can generate insights that are misleading or incomplete. The solution to this issue is not just providing AI with more data but also designing systems that can recognize their own limitations. AI must evolve to understand the boundaries of its knowledge and flag gaps in the data it analyzes. Systems that can acknowledge their own uncertainties will be far better suited to handle complex, real-world situations, particularly in critical fields such as healthcare, finance, and public policy.
“Data reflects the past, but AI must navigate an unpredictable future. The gap between known data and unseen variables limits AI's reliability.”
A key challenge for AI is that data typically reflects past occurrences, while AI systems are often expected to make decisions about the future. This creates a significant gap between what is known (and recorded in the dataset) and what remains unseen or unmeasurable. As a result, AI often generates predictions based on historical trends without accounting for the dynamic, evolving nature of the world it is trying to model. The reliance on past data can hinder AI’s ability to navigate new, unprecedented challenges, limiting its reliability when applied to real-world scenarios that require forward-thinking adaptability.
“True progress in AI will come not from amassing more data, but from building smarter systems that understand the boundaries of their knowledge.”
The future of AI does not lie in simply collecting more data. Instead, progress will come from building smarter systems that understand the limitations of their knowledge. AI systems must be designed to not only process large datasets but also to recognize when critical information is missing. By acknowledging the gaps in their understanding, AI can defer to human judgment or seek additional data before making a decision. This self-awareness in AI systems will lead to better, more reliable outcomes and ensure that AI becomes a true partner in solving complex, real-world problems.
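A minimal sketch of this kind of self-awareness (the stub model, the `predict_proba` name, and the threshold value are all assumptions for illustration, not a real system) is a wrapper that returns an answer only when the model's confidence clears a threshold, and otherwise flags the case for human judgment:

```python
def predict_proba(x):
    """Stub model: returns class probabilities for an input x in [0, 1].
    A real system would call a trained classifier here."""
    p = max(0.01, min(0.99, 1.0 - x))
    return {"negative": p, "positive": 1.0 - p}

def predict_or_defer(x, threshold=0.8):
    """Return a label only when confidence exceeds `threshold`;
    otherwise acknowledge uncertainty and defer to a human."""
    probs = predict_proba(x)
    label, confidence = max(probs.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return {"decision": label, "confidence": confidence}
    return {"decision": "DEFER_TO_HUMAN", "confidence": confidence}

print(predict_or_defer(0.05))  # high confidence: a label is returned
print(predict_or_defer(0.45))  # low confidence: the case is deferred
```

The design choice is the point: the system's value comes not from always answering, but from knowing when not to.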
“We have to remember that what we observe is not nature herself, but nature exposed to our method of questioning.” – Werner Heisenberg
Werner Heisenberg's insight serves as a powerful reminder that AI, like any human inquiry, is limited by the data it processes and the questions it asks. The scope of AI’s understanding is constrained by the methods and datasets we provide. Therefore, for AI to move beyond its current limitations, we must widen its lens, allowing it to incorporate not only more diverse and complex data but also to critically analyze what it may be missing. This shift will be essential if AI is to evolve from a reactive tool into a proactive partner capable of navigating the unseen complexities of our world.
Friday, July 19
AI Governance: Creative Destruction without Pervasive Disruption
The Dawn of AI Research
At the dawn of AI research, pioneers like Alan Turing and John McCarthy laid the foundational theories that paved the way for the rapid development we witness today. Turing's work on computability and machine intelligence, along with McCarthy's contributions to AI programming languages and concepts, set the stage for the AI revolution. Claude Shannon's groundbreaking work in information theory provided a mathematical framework for understanding and designing intelligent systems, while Edward Lorenz's work on chaos theory influenced complex-system modeling in AI.
Another crucial area that emerged early in AI research is the development of ontologies. Ontologies play a vital role in organizing knowledge in a way that AI systems can understand and utilize. They define the relationships between different concepts within a domain, enabling AI to process complex information more effectively. This work has been essential in areas like natural language processing and knowledge representation, contributing significantly to the evolution of intelligent systems.
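As a toy illustration of the idea (the domain, concepts, and relation names here are invented, not drawn from any particular ontology standard), an ontology can be modeled as typed relationships between concepts that a program can traverse to answer questions the raw facts never state directly:

```python
# A tiny ontology: (concept, relation) -> related concept.
ONTOLOGY = {
    ("influenza", "is_a"): "viral_disease",
    ("viral_disease", "is_a"): "disease",
    ("influenza", "transmitted_by"): "respiratory_droplets",
}

def is_a(concept, category):
    """Follow 'is_a' links upward to test category membership."""
    while concept is not None:
        if concept == category:
            return True
        concept = ONTOLOGY.get((concept, "is_a"))
    return False

# 'influenza is a disease' was never stated, but is inferred
# by chaining influenza -> viral_disease -> disease.
print(is_a("influenza", "disease"))
```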
AI has evolved from the domain of a small set of visionary individuals into a field spanning a multitude of researchers and technologists across many disciplines. This collective effort mirrors how past innovations evolved from limited participation to an open, collaborative model that ultimately drove significant advancements.
The Development and Impact of AI
AI has evolved significantly from its early days of basic algorithms and rule-based processing. In recent years we have seen a convergence of technological and mathematical innovations leading to today's AI systems, enriched by deep learning and neural networks, and capable of complex decision-making and pattern recognition. This represents an inflection point that is analogous to how past innovators have harnessed disruptive technologies to unlock their full potential, leading to applications that were once the realm of science fiction.
AI's impact is intense and varied across different fields. From automating mundane tasks to making significant breakthroughs in medical research, AI is proving to be a continuous source of innovation and efficiency. For instance, use cases of AI systems include diagnosing diseases, predicting weather patterns, enhancing security systems, and operating autonomous vehicles. These applications are only the beginning of the far-reaching impact that AI has in our daily lives.
Breakthroughs and Applications
One of the most significant breakthroughs in AI is the development of models like OpenAI's GPT series. These models have demonstrated unprecedented capabilities in understanding and generating human-like text, continuously producing intelligent responses and creative content. They have become invaluable tools in education, communication, and entertainment.
The rapid advancement of AI is impacting all branches of science and society. In physics, AI analyzes complex data sets, potentially leading to new discoveries. In chemistry, AI aids in the discovery of new compounds and materials. In biology, AI-driven analysis of genetic data is pushing the boundaries of personalized medicine.
Ethical and Societal Concerns
Historically, disruptive innovations have faced similar ethical and practical challenges. In medicine, the introduction of antibiotics revolutionized healthcare, saving countless lives. However, the early misuse and overprescription of antibiotics led to the development of resistant bacteria, posing significant health risks. Over time, better regulation and more informed use mitigated these early risks and challenges, enabling the benefits of antibiotics to become pervasive.
In manufacturing, the industrial revolution brought about significant advancements in production and efficiency. However, poor working conditions and environmental degradation were among the unintended consequences of its early years. Gradually, industry developed and implemented solutions to these issues, including labor reforms, regulatory measures, and technological advancements. Influential figures like W. Edwards Deming played crucial roles in establishing quality control and safety standards through Total Quality Management (TQM). Deming's principles helped create more efficient manufacturing processes while ensuring product quality and worker safety, thus transforming the industry.
In government, the adoption of digital technologies has transformed public administration and service delivery. Initially, concerns over data privacy and cybersecurity were significant obstacles. While these challenges still represent a risk, there has been progress, and these obstacles are less daunting than before. The implementation of robust security protocols and privacy regulations has enabled more efficient and transparent administration, enhancing citizen engagement and service delivery.
Like the debate over the potential dangers of past disruptive technologies, the rise of AI brings significant ethical and societal concerns. The potential misuse of AI, from privacy invasion to autonomous weaponry, echoes the fears once associated with other powerful innovations. We must ask ourselves whether humanity is ready to govern the profound capabilities of AI responsibly.
AI could become dangerous in the wrong hands, raising the question of whether humanity will truly benefit from such powerful knowledge. Are we ready to profit from it, or will it do harm? The record of past disruptive technologies is instructive: both the constructive and destructive potentials are immense.
The successful navigation of challenges with past innovations was not solely due to technological advancements. It required new ways of thinking about governance, best practices, and ethical considerations. By developing comprehensive frameworks, establishing regulatory bodies, and fostering public-private partnerships, society was able to harness the benefits of disruptive innovations while mitigating their risks.
Similarly, with AI, we must adopt a multifaceted approach that includes technological innovation, robust governance, and ethical stewardship to ensure that its development and application align with the broader goals of human well-being and societal progress.
Final Thoughts
As we navigate this AI revolution, we must learn from the history of disruptive innovations. Ensuring that our pursuit of knowledge remains aligned with ethical principles and the betterment of humanity is paramount. Just as past innovations have brought about tremendous advancements, the responsible development and application of AI holds the promise of a future where technology enhances and enriches human life. AI stands as a testament to human ingenuity and the collaborative spirit of innovation. By embracing its potential while remaining vigilant about its risks, we can harness AI to create a brighter, more equitable future for all.
Saturday, May 20
Unleashing Reliable Insights from Generative AI by Disentangling Language Fluency and Knowledge Acquisition
Generative AI lacks a mechanism for validating the knowledge it conveys. This deficiency can lead to mistaking correlation for causation, reliance on incomplete or inaccurate data, and a lack of awareness of sensitive dependencies between information sets. With society's increasing fascination with and dependence on Generative AI, there is a concern that, as an unintended consequence, it will have an unhealthy influence on shaping societal views on politics, culture, and science.
Humans acquire language and communication skills from a diverse range of sources, including raw, unfiltered, and unstructured content. However, when it comes to knowledge acquisition, humans typically rely on transparent, trusted, and structured sources. In contrast, large language models (LLMs) such as ChatGPT draw on an array of opaque, unvetted sources of raw, unfiltered, and unstructured content for both language training and the facts they present, treating this information as an absolute source of truth in their responses.
While this approach has proven effective at generating natural language, it also introduces inconsistencies and deficiencies in response integrity. Generative AI can provide information, but it does not inherently yield knowledge.
To unlock the true value of generative AI, it is crucial to disaggregate the process of language fluency training from the acquisition of knowledge used in responses. This disaggregation enables LLMs to not only generate coherent and fluent language but also deliver accurate and reliable information.
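One hedged sketch of this disaggregation (the knowledge store, function names, and facts below are invented for illustration, not any production architecture) grounds every factual claim in a small, vetted knowledge base and uses the language layer only for phrasing, admitting ignorance when the store has no answer:

```python
# A vetted, transparent knowledge store, separate from language training.
TRUSTED_KNOWLEDGE = {
    "boiling point of water": "100 degrees Celsius at sea level",
    "speed of light": "299,792,458 metres per second",
}

def retrieve(question):
    """Look up a vetted fact; return None when the store has no answer."""
    for topic, fact in TRUSTED_KNOWLEDGE.items():
        if topic in question.lower():
            return fact
    return None

def fluent_phrasing(fact):
    """Stand-in for the language-fluency layer: wraps a fact in prose."""
    return f"According to the trusted source, it is {fact}."

def answer(question):
    fact = retrieve(question)
    if fact is None:
        # Knowledge gap: admit ignorance rather than generate fluent guesswork.
        return "I don't have a verified answer to that."
    return fluent_phrasing(fact)

print(answer("What is the boiling point of water?"))
print(answer("Who won the 1937 chess championship?"))
```

The separation is the point: fluency decides how something is said, while a transparent, auditable store decides what may be said at all.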
However, in a culture that obsesses over information from self-proclaimed influencers and prioritizes virality over transparency and accuracy, distinguishing reliable information from misinformation and knowledge from ignorance has become increasingly challenging. This presents a significant obstacle for AI algorithms striving to provide accurate and trustworthy responses.
Generative AI shows great promise, but ensuring information integrity is crucial to producing accurate and reliable responses. By disaggregating language fluency training from knowledge acquisition, large language models can offer valuable insights.
However, overcoming the prevailing challenges of identifying reliable information and distinguishing knowledge from ignorance remains a critical endeavour for advancing AI algorithms. Resolving this is an immediate challenge that requires open dialogue across a broad set of disciplines, not just among technologists.
Technology alone cannot provide a complete solution.
Monday, February 13
Solar Geoengineering – The Risks of Hacking our Climate
As humans, we think of ourselves as single organisms even though we are systems comprising trillions of microbes. Similarly, we live within a self-contained living organism of far greater complexity that we call Earth.
In the branch of mathematics known as chaos theory, as applied to the natural sciences, we study the sensitive dependencies between structural units that co-exist in an organism. These units form dynamical systems whose apparently random states of disorder and irregularity are governed by underlying patterns and deterministic laws that are highly sensitive to conditions at any point in time.
Essentially, it is the idea that small changes in a system can have significant and unpredictable consequences.
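This sensitivity is easy to demonstrate with the logistic map, a classic example from chaos theory. In its chaotic regime (r = 4), two trajectories that start one part in a million apart diverge until they are effectively unrelated:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the sequence."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # initial condition off by one part in a million

gap_early = abs(a[1] - b[1])                             # still tiny
gap_late = max(abs(a[i] - b[i]) for i in range(30, 51))  # grows to order 1

print(f"gap after 1 step:         {gap_early:.8f}")
print(f"largest gap, steps 30-50: {gap_late:.4f}")
```

No noise is involved: the rule is fully deterministic, yet a microscopic difference in starting conditions makes long-range prediction of the individual trajectory impossible.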
But in living organisms, change is inevitable and often amplified over time. A living organism will self-optimize in response to these changes to achieve the balance required to survive and evolve. There is an underlying predictability in this optimization; however, the massive scale of inter-relationships and sensitive dependencies between units at both the quantum and macro scales makes it impossible to comprehend given our current scientific knowledge. Even with today's technologies, such as AI and supercomputers, it is beyond our ability to predict.
We have only begun to understand Earth's atmosphere and its sensitivity to external forces. What we do know is that the atmosphere is highly dynamic and complex. Because of this, a solar geoengineering experiment could not yield useful results unless conducted at sufficient scale in both geographic extent and duration; in a small-scale experiment, the atmosphere's natural variation would render measurements over time inconclusive.
On the other hand, introducing new components into the atmosphere at scale changes its dynamics and introduces new sensitive dependencies which we do not have the knowledge to model or predict.
The Earth's climate is a complex system, and it is difficult to predict how it will respond to changes. Some areas where solar engineering could have a negative impact would be the ozone layer, weather patterns and Earth’s fragile micro-ecosystems.
Solar geoengineering involves reflecting some of the sun's incoming energy back into space by injecting reflective particles into the upper atmosphere. Some of these particles could react with ozone molecules in the stratosphere, depleting the ozone layer. This could lead to increased exposure to harmful ultraviolet radiation, raising the risk of skin cancer, cataracts, and other health problems.
Injecting particles into the upper atmosphere will be a highly inexact process, and the uneven distribution of particles could alter how solar radiation is distributed across the planet, disrupting regional weather patterns. For example, a reduction in solar radiation over the Arctic could affect the jet stream, changing weather patterns over Europe and North America. We could see more extreme weather events, with extreme temperatures bringing drought to some areas and flooding to others.
It could also disrupt the delicate balance of Earth's micro-ecosystems, with unintended consequences for biodiversity. A reduction in solar radiation could alter the amount and timing of rainfall in some regions, negatively affecting the growth and reproduction of plant species. Temperature and precipitation changes could also affect the migratory patterns and behaviours of animals, which could lead to declines in biodiversity. And the injection of reflective particles into the atmosphere could affect the growth and survival of phytoplankton in the oceans, which form the base of many marine food webs.
Solar geoengineering is not intended to be the solution to climate change; it is only an effort to mitigate the impact temporarily until we have a long-term solution. Once a long-term solution becomes pervasive, we will face a "clean-up" effect: reversing solar geoengineering too quickly risks shocking the ecosystem, since many species and ecosystems could not adapt in time.
The risks of solar geoengineering include disrupted weather patterns, declines in biodiversity, and shocks to our fragile ecosystems. Does this sound familiar? Isn't this exactly what we are trying to avoid by solving climate change?
Undertaking solar geoengineering at this stage in the development of human knowledge is inappropriate and irresponsible. The unintended consequences of these actions could have a greater negative impact than the problem we all agree needs to be solved.
Though, if I were ever to write a sci-fi/horror novel, the topic of solar geoengineering would open many possibilities.