STEMReader is a tool that reads mathematical equations aloud using text-to-speech.
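By way of illustration only (this is not STEMReader's actual implementation), the sketch below shows the basic idea of equation-to-speech: a handful of hypothetical rewrite rules turn a small subset of LaTeX into spoken English, which could then be handed to any off-the-shelf text-to-speech engine.

```python
# Minimal sketch (not STEMReader's actual pipeline): converting a simple LaTeX
# fragment into spoken English. The rewrite rules here are illustrative only.
import re

def latex_to_speech(expr: str) -> str:
    """Translate a small subset of LaTeX into a spoken-English string."""
    # \frac{a}{b} -> "a over b"
    expr = re.sub(r"\\frac\{([^{}]*)\}\{([^{}]*)\}", r"\1 over \2", expr)
    # \sqrt{x} -> "the square root of x"
    expr = re.sub(r"\\sqrt\{([^{}]*)\}", r"the square root of \1", expr)
    # x^{2} or x^2 -> "x to the power of 2"
    expr = re.sub(r"\^\{?([^{}\s]+)\}?", r" to the power of \1", expr)
    return " ".join(expr.replace("=", " equals ").split())

if __name__ == "__main__":
    spoken = latex_to_speech(r"E = \frac{1}{2} m v^{2}")
    print(spoken)  # "E equals 1 over 2 m v to the power of 2"
    # A real tool would now pass `spoken` to a text-to-speech engine.
```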
According to Moore's law, the number of transistors on a microchip doubles roughly every two years. The quest for miniaturization and more processing power is therefore pushing transistor sizes towards the atomic scale, where behaviour is governed by the laws of quantum physics, which differ fundamentally from those of classical physics. The inherent parallelism associated with quantum entities allows a quantum computer to carry out operations in parallel, unlike conventional computers, and hence to solve challenging optimization problems in a fraction of the time required by a conventional computer. The major impediment to the practical realization of quantum computers, however, is the sensitivity of quantum states, which collapse when they interact with their environment. Powerful Quantum Error Correction (QEC) codes are therefore needed to protect the fragile quantum states from undesired influences and to facilitate the robust implementation of quantum computers. The inherent parallel processing capability of quantum computers will also be exploited to dramatically reduce the detection complexity of future-generation communications systems.
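As a purely illustrative example of the error-correction principle (not one of the QEC codes the project will design), the following classical simulation of the 3-qubit bit-flip repetition code shows how redundancy plus majority voting suppresses errors: a raw error probability of p = 0.1 per bit becomes a logical error probability of roughly 3p^2 - 2p^3, about 0.028.

```python
# Classical simulation of the 3-qubit bit-flip repetition code. Real QEC codes
# (e.g. stabilizer codes) are far more elaborate; this only illustrates how
# redundancy plus majority voting recovers a corrupted state.
import random

def encode(bit: int) -> list[int]:
    """Encode a logical bit into three physical bits: 0 -> 000, 1 -> 111."""
    return [bit, bit, bit]

def bit_flip_channel(codeword: list[int], p: float) -> list[int]:
    """Flip each physical bit independently with probability p (noise model)."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword: list[int]) -> int:
    """Majority vote: corrects any single bit-flip error."""
    return int(sum(codeword) >= 2)

if __name__ == "__main__":
    random.seed(0)
    p, trials = 0.1, 100_000
    errors = sum(decode(bit_flip_channel(encode(1), p)) != 1 for _ in range(trials))
    # The code fails only when two or more bits flip: roughly 3p^2 - 2p^3 ~ 0.028.
    print(f"logical error rate: {errors / trials:.4f}")
```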
In this work, we aim to jointly design and refine classical and quantum algorithms so that they support each other in creating powerful communications systems. More explicitly, the inherent parallelism of quantum computing will be exploited to mitigate the high complexity of classical detectors. Near-capacity QEC codes will then be designed by appropriately adapting algorithms and design techniques from classical Forward Error Correction (FEC) codes. Finally, cooperative communications spanning both the classical and quantum domains will be conceived. The implementation of a quantum computer based purely on quantum-domain hardware and software remains an open challenge; however, a classical computer employing quantum chips for efficient parallel detection and processing can be expected in the nearer term. This project is expected to produce a 'quantum leap' towards the next-generation Internet, involving both classical and quantum information processing, providing reliable and secure communications networks at affordable detection complexity.
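To make the detection-complexity argument concrete, the hedged sketch below implements brute-force maximum-likelihood detection over all M^K candidate symbol vectors, the kind of exhaustive search whose cost a Grover-type quantum search could in principle reduce to the order of sqrt(M^K) cost-function evaluations. The MIMO-style setup and all parameter names are illustrative assumptions, not the project's actual detector.

```python
# Brute-force maximum-likelihood (ML) detection: scan all M^K candidate symbol
# vectors and pick the one minimizing ||y - Hx||^2. A quantum search over the
# same cost function would need on the order of sqrt(M^K) evaluations.
from itertools import product

import numpy as np

def ml_detect(y, H, constellation):
    """Return the candidate symbol vector x minimizing ||y - Hx||^2."""
    K = H.shape[1]
    best_x, best_cost = None, np.inf
    for cand in product(constellation, repeat=K):   # M^K candidates
        x = np.array(cand)
        cost = np.linalg.norm(y - H @ x) ** 2
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, constellation = 4, (-1.0, 1.0)               # BPSK, 2^4 = 16 candidates
    H = rng.standard_normal((4, K))
    x_true = rng.choice(constellation, size=K)
    y = H @ x_true + 0.1 * rng.standard_normal(4)
    print(ml_detect(y, H, constellation), x_true)
```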
Semantic data management refers to a range of techniques for manipulating and using data based on its meaning. Semantically enabled linked and open data have been published at an increasing pace in recent years, and the technology has been adopted by major industrial players, including Google, Yahoo, Oracle, Talis and IBM. However, for semantic data to reach its full potential as a transformative technology enabling a data-driven economy, important research challenges must be addressed, particularly regarding maturity, dynamicity and the ability to efficiently process huge amounts of interconnected semantic data. SemData brings together some of the internationally leading research centres in the area of managing semantic data to address these challenges.
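As a minimal illustration of what 'manipulation of data based on its meaning' looks like in practice (using the open-source rdflib package purely as an example, not as a tool mandated by SemData), the sketch below stores a few facts as RDF triples and retrieves them with a SPARQL query over their semantics rather than over any fixed table layout.

```python
# Represent a few facts as RDF triples and query them by meaning.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Berlin, RDF.type, EX.City))
g.add((EX.Berlin, RDFS.label, Literal("Berlin")))
g.add((EX.Berlin, EX.locatedIn, EX.Germany))
g.add((EX.Germany, RDF.type, EX.Country))

# SPARQL query: find labels of everything that is a City located in Germany.
results = g.query("""
    PREFIX ex: <http://example.org/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        ?city a ex:City ;
              ex:locatedIn ex:Germany ;
              rdfs:label ?label .
    }
""")
for row in results:
    print(row.label)   # -> Berlin
```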
PlanetData aims to establish a sustainable European community of researchers that supports organizations in exposing their data in new and useful ways. The ability to effectively and efficiently make sense of the enormous amounts of data continuously published online, including data streams, (micro)blog posts, digital archives, eScience resources, public sector data sets, and the Linked Open Data Cloud, is a crucial ingredient for Europe's transition to a knowledge society. It allows businesses, governments, communities and individuals to take informed decisions, ensuring competitive advantage and general welfare. Research will concentrate on key challenges that must be addressed for effective data exposure in a usable form at global scale. We will provide representations for stream-like data, together with scalable techniques to publish, access and integrate such data sources on the Web. We will establish mechanisms to assess, record and, where possible, improve the quality of data through repair. To further enhance the usefulness of data, in particular the effectiveness of data processing and retrieval, we will define means to capture the context in which data is produced and understood, including its spatial, temporal and social aspects. Finally, we will develop access control mechanisms: to attract the exposure of certain types of valuable data sets, proper account must be taken of their owners' concerns to maintain control and to respect privacy and provenance, without hampering non-contentious use. We will test all of the above on a highly scalable data infrastructure supporting relational, RDF and stream processing, and on novel data sets exposed through the network, and derive best practices for data owners. By providing these key precursors, complemented by a comprehensive training, dissemination, standardization and networking program, we will enable and promote effective exposure of data at planetary scale.
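As a toy illustration of the stream-like data challenge (none of the names below come from PlanetData's actual infrastructure), the sketch keeps context, here the source and timestamp of each observation, attached to the data while computing a sliding-window aggregate over a stream.

```python
# Sliding time window over a stream of (timestamp, source, value) records,
# with provenance (source) and time kept attached to every observation.
from collections import deque
from dataclasses import dataclass

@dataclass
class Observation:
    timestamp: float   # seconds since epoch
    source: str        # provenance: who published the value
    value: float

class SlidingWindowAverage:
    """Average of all observations seen in the last `window_s` seconds."""
    def __init__(self, window_s: float):
        self.window_s = window_s
        self.buffer: deque[Observation] = deque()

    def push(self, obs: Observation) -> float:
        self.buffer.append(obs)
        # Evict observations that have fallen out of the time window.
        while self.buffer and obs.timestamp - self.buffer[0].timestamp > self.window_s:
            self.buffer.popleft()
        return sum(o.value for o in self.buffer) / len(self.buffer)

if __name__ == "__main__":
    win = SlidingWindowAverage(window_s=10.0)
    for t, v in [(0, 1.0), (4, 3.0), (12, 5.0)]:
        print(win.push(Observation(timestamp=t, source="sensor-A", value=v)))
```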
Within a few years, the idea of open data has spread throughout Europe and produced far-reaching changes, especially in thinking about governmental data and how it can be used for public and private purposes. This has also led to a fragmented landscape of open data resources, making it challenging to gain an overview. Even at the national level there is unconstrained growth of regional open data repositories. These circumstances make it difficult for policy makers, institutions and NGOs to build efficient and sustainable open data strategies. Furthermore, identifying gaps in and opportunities for open data publishing is almost impossible. Finally, at the pan-European level there is a lack of analysis on this topic, which hinders the large-scale reuse of open data.
OpenDataMonitor makes it possible to gain an overview of available open data resources and to analyse and visualise existing data catalogues using innovative technologies. A highly extensible and customizable harvesting framework will collect metadata from diverse open data sources. Harmonization of the harvested metadata allows the gathered information to be structured and processed. Scalable analytical and visualisation methods will let end users learn more about the composition of regional, national or pan-European open data repositories. For example, the catalogues of a region or country can be aggregated and easily visualised to draw an accurate picture of the open data situation and to allow comparison with other areas. Analysing and visualising the metadata will reveal hidden potential and essential insights in existing resources and identify gaps where additional open data are needed.
To guarantee the availability, use and reuse of the plugins and components created during the OpenDataMonitor project, established open-source software such as CKAN will be adopted and extended. The research outcomes and technical developments will be combined in a demonstration platform, integrated into third-party sites and distributed to the open data community to maximise impact.
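A hedged sketch of the harvesting step is shown below: it pulls dataset metadata from a CKAN catalogue through CKAN's standard action API and reduces it to a few harmonised fields. The catalogue URL is a placeholder, and a real harvester would add paging, error handling and per-catalogue schema mappings.

```python
# Harvest dataset metadata from a CKAN catalogue via the CKAN action API and
# keep a small, harmonised subset of fields per dataset.
import requests

CATALOGUE = "https://demo.ckan.org"          # placeholder CKAN instance

def harvest_metadata(base_url: str, rows: int = 10) -> list[dict]:
    """Fetch up to `rows` datasets and return harmonised metadata records."""
    resp = requests.get(f"{base_url}/api/3/action/package_search",
                        params={"rows": rows}, timeout=30)
    resp.raise_for_status()
    datasets = resp.json()["result"]["results"]
    return [{
        "name": d.get("name"),
        "title": d.get("title"),
        "license": d.get("license_id"),
        "last_modified": d.get("metadata_modified"),
        "formats": sorted({r.get("format", "") for r in d.get("resources", [])}),
    } for d in datasets]

if __name__ == "__main__":
    for record in harvest_metadata(CATALOGUE):
        print(record["title"], record["formats"])
```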
Over the last decade, giant unilamellar vesicles (GUVs) have emerged as a valuable tool for the study of membrane domains, receptor and ion channel function, and membrane morphology, and also as models for 'proto-cells'. GUVs are liposomes (lipid bilayers enclosing an aqueous volume) with diameters of tens of micrometers. Their large size enables optical imaging of membrane domains and vesicle morphology, as well as clamping of the vesicle with micropipettes, while the diffusion of lipids and membrane proteins is not restricted because there is no solid support layer. GUVs became a popular research topic with the introduction of the electroformation protocol by Angelova and Dimitrov, who showed that some lipids, deposited as a dried film on electrodes, form GUVs when rehydrated with a salt-free or low-salt solution while an AC field is applied. This method, which is postulated to involve localized electroswelling of lipid bilayers, has been gradually optimized to allow GUV formation with a wider variety of lipid species and at more physiological (~100 mM) salt concentrations. The key change is that the lipids are deposited on the electrodes as small unilamellar liposomes (SUVs), which are partially but not completely dried prior to rehydration in the AC field. More recently, it has been demonstrated that the use of protein-containing liposomes (proteoSUVs) enables the formation of protein-containing GUVs. In this project, we will systematically explore this exciting development to derive standard procedures for microsystem-enabled, semi-automated proteoGUV formation, and we will employ ion-channel-containing GUVs for drug screening with patch-clamp methods.
This project will create a shared, facilitated learning environment in which social scientists, engineers, industrialists, policy makers and other stakeholders can research and learn together to understand how better to exploit the technical and market opportunities that emerge from the increased interdependence of infrastructure systems. The Centre will focus on the development and implementation of innovative business models and aims to support UK firms wishing to exploit them in international markets. The Centre will undertake a wide range of research activities on infrastructure interdependencies together with users, allowing problems to be discovered and addressed earlier and at lower cost. Because infrastructure innovations alter the social distribution of risks and rewards, the public needs to be involved in decision making to ensure that business models and forms of regulation are socially robust. As a consequence, the Centre has a major focus on using its research to catalyse a broader national debate about the future of the UK's infrastructure and how it might contribute towards a more sustainable, economically vibrant and fair society.