In this project we aim to build an assessment delivery engine conforming to the IMS Question and Test Interoperability (QTI) version 2.1 specification that can be deployed as a stand-alone web application or as part of a SOA-enabled VLE. The engine will provide: delivery of an assessment assembled from QTI items; scheduling of assessments against users and groups; delivery of items through a web interface, including marking and feedback; and a Web service API for retrieving assessment results. In the second phase, the project will integrate with the item banking and item authoring projects in this call to provide a demonstrator, and will contribute to the evaluation of that demonstrator and of the project's integration with the other projects under the Assessment call.
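As a rough illustration of the kind of scheduling and results retrieval the engine will support, the Python sketch below models assessments scheduled against groups and a results lookup of the shape a web service call might return; the class and field names are assumptions for illustration, not the QTI vocabulary or the project's actual API.

    from dataclasses import dataclass, field, asdict

    @dataclass
    class Schedule:
        assessment_id: str
        group: str       # assessments are scheduled against groups of users
        opens: str       # ISO dates kept as plain strings for the sketch
        closes: str

    @dataclass
    class Result:
        assessment_id: str
        user_id: str
        score: float
        feedback: str = ""

    @dataclass
    class Engine:
        schedules: list = field(default_factory=list)
        results: list = field(default_factory=list)

        def due_for(self, groups):
            # A user sees the assessments scheduled for any group they belong to
            return [s for s in self.schedules if s.group in groups]

        def results_for(self, assessment_id):
            # The payload a results web service call might return
            return [asdict(r) for r in self.results if r.assessment_id == assessment_id]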
VERTIGO addresses a new generation of technologies and tools for modelling and testing embedded platforms, which will form the foundation for a viable and cost-efficient mapping of HW/SW systems embedded in intelligent devices. The contributions of the VERTIGO project include: (1) Definition of an HW/SW co-simulation strategy that allows the correctness of the interaction between HW and SW to be checked through simulation and assertion checking. (2) A verification framework that combines simulation with model checking to reduce verification time and increase coverage. (3) The development of techniques to separate the verification of IP interaction from the verification of the IP cores, to facilitate efficient design reuse. (4) Seamless integration of formal and dynamic techniques, federated around an assertion-based methodology and new coverage metrics. (5) Integration of several state-of-the-art static verification techniques (SAT, High-Level Decision Diagrams, Hierarchical Petri Nets, EFSMs) into the same platform. (6) Bounded and unbounded model checking based on SAT, by selecting and extending the most promising existing solutions. (7) The use of hybrid solvers in RTL unbounded model checking, by extending previous work on SAT technology and on SAT extensions.
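To make the assertion-based checking in (1) concrete, here is a minimal Python sketch that evaluates a bounded "every request is acknowledged within N cycles" property over a co-simulation trace; the trace format and the property itself are illustrative assumptions, not VERTIGO's actual methodology or tooling.

    def ack_within(trace, n):
        """Every cycle asserting req must be followed by ack within n cycles."""
        for i, cyc in enumerate(trace):
            if cyc.get("req"):
                window = trace[i:i + n + 1]
                if not any(c.get("ack") for c in window):
                    return False, i     # assertion fails at cycle i
        return True, None

    # A toy trace as a HW/SW co-simulator might emit it (one dict per clock cycle)
    trace = [{"req": 1}, {}, {"ack": 1}, {"req": 1}, {}, {}, {}]
    print(ack_within(trace, 2))         # (False, 3): the second request is never acknowledged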
The NOTOS project will conduct innovative research on a number of topics in SAT-based model checking, including novel uses of several key concepts: resolution proofs and a supporting resolution engine, incremental SAT and incremental model checking, and interpolants. In addition to the research contributions, the project also entails the development of NOTOS, a fully SAT-based model checker. NOTOS will integrate the most effective techniques for SAT-based model checking, and will seek to compete with the most widely used model checkers, NuSMV and SPIN. Finally, the project will assess the use of the NOTOS model checker in a number of different contexts, including hardware and software systems and security protocols.
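For readers unfamiliar with the technique, the sketch below shows bounded model checking built on an incremental SAT solver: a toy two-bit transition system is unrolled step by step and the bad state is checked under assumptions so the solver can be reused at each bound. The toy system and its CNF encoding are assumptions for illustration only, and the sketch uses the python-sat (pysat) package rather than anything from NOTOS itself.

    from pysat.solvers import Glucose3

    K = 10                       # unrolling bound

    def a(i):                    # state bit a at step i (assumed variable encoding)
        return 2 * i + 1

    def b(i):                    # state bit b at step i
        return 2 * i + 2

    solver = Glucose3()
    solver.add_clause([a(0)])    # initial state: a = 1
    solver.add_clause([-b(0)])   #                b = 0

    for k in range(K):
        # Transition relation: a' <-> b and b' <-> a (a two-bit "swap" system)
        solver.add_clause([-a(k + 1), b(k)])
        solver.add_clause([a(k + 1), -b(k)])
        solver.add_clause([-b(k + 1), a(k)])
        solver.add_clause([b(k + 1), -a(k)])
        # Bad state (a and b both true) is checked under assumptions, so the
        # solver state stays reusable for the next unrolling (incremental SAT)
        if solver.solve(assumptions=[a(k + 1), b(k + 1)]):
            print("counterexample of length", k + 1, ":", solver.get_model())
            break
    else:
        print("property holds up to bound", K)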
Quality of Service (QoS) is a technology that counters the effects of congestion and queuing in packet-switched networks while retaining the advantages of those networks, such as high throughput and dynamic re-allocation of bandwidth between users. QoS divides network traffic into different classes and, during periods of congestion, processes those classes in different ways depending upon their characteristics.
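The sketch below illustrates the class-based treatment just described: packets are tagged with a traffic class and, when the link is congested, higher-priority classes are served first. The class names and priority ordering are illustrative assumptions, not taken from any particular QoS standard.

    import heapq
    from collections import namedtuple

    Packet = namedtuple("Packet", "cls payload")

    # Lower number = higher priority during congestion (assumed ordering)
    PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}

    class ClassBasedQueue:
        def __init__(self):
            self._heap = []
            self._seq = 0    # preserves FIFO order within a class

        def enqueue(self, pkt):
            heapq.heappush(self._heap, (PRIORITY[pkt.cls], self._seq, pkt))
            self._seq += 1

        def dequeue(self):
            return heapq.heappop(self._heap)[2] if self._heap else None

    q = ClassBasedQueue()
    q.enqueue(Packet("best_effort", "bulk transfer"))
    q.enqueue(Packet("voice", "RTP frame"))
    print(q.dequeue().cls)   # "voice" is forwarded first under congestion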
The e-Framework Reference Model for Assessment (FREMA) project has developed two Service Usage Models (SUMs). Summative On-line Assessment has many existing tools in this space, while Peer Review is a less well supported area. A distinction may be made between Peer Review, the marking of a student's work by their peers, and Peer Assessment, the marking of collaborative group work by a tutor (which may be modified by peer reflection). Peer Review is an important tool for giving feedback to large student cohorts when tutor time is limited, and is also an important learning activity in its own right. This project will undertake the development of an initial set of services from the Peer Review FREMA use case, providing lightweight REST services, which may be reused within other group-oriented SUMs, to support the resource submission and distribution phases of Peer Review.
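As a rough indication of what such lightweight REST services might look like, the Flask sketch below offers one endpoint for the submission phase and one for distribution, handing each new submission to the next reviewer in a pool. The endpoint paths and the round-robin distribution rule are illustrative assumptions, not the project's actual service definitions.

    from itertools import cycle
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    submissions = []                        # in-memory store for the sketch
    reviewers = cycle(["rev1", "rev2"])     # assumed reviewer pool
    assignments = {}

    @app.route("/submissions", methods=["POST"])
    def submit_resource():
        # Submission phase: a student posts a piece of work for review
        doc = request.get_json()
        doc["id"] = len(submissions)
        submissions.append(doc)
        # Distribution phase: assign the new submission to the next reviewer
        assignments.setdefault(next(reviewers), []).append(doc["id"])
        return jsonify({"id": doc["id"]}), 201

    @app.route("/reviewers/<reviewer>/assignments", methods=["GET"])
    def list_assignments(reviewer):
        return jsonify(assignments.get(reviewer, []))

    if __name__ == "__main__":
        app.run()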
The project will produce an e-Framework toolkit that enables users to create course evaluation applications. Within UK Higher and Further Education, course evaluation is a well-defined component of teaching and learning and, after the domains of 'Learning Design' and 'Assessment', is probably the next most important. The toolkit will focus upon the authoring of evaluation questionnaires, their use by students, and the authoring of course reports by tutors based upon questionnaire answers.
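A small sketch of the three activities the toolkit targets (authoring a questionnaire, collecting student answers, and summarising them into a course report for a tutor) is given below; the data structures and field names are assumptions for illustration, not the toolkit's actual design.

    from collections import Counter

    questionnaire = {
        "course": "COMP101",
        "questions": [
            {"id": "q1", "text": "The lectures were well organised.",
             "scale": ["agree", "neutral", "disagree"]},
        ],
    }

    answers = [{"q1": "agree"}, {"q1": "agree"}, {"q1": "neutral"}]

    def course_report(questionnaire, answers):
        # Tally each question's responses as the raw input to a tutor's report
        return {q["id"]: Counter(a[q["id"]] for a in answers if q["id"] in a)
                for q in questionnaire["questions"]}

    print(course_report(questionnaire, answers))   # {'q1': Counter({'agree': 2, 'neutral': 1})}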
The 3 CLIX project is a three-year project whose partners are the Microsoft Technology Centre (MTC), Microsoft Research Cambridge (MSR), and the University of Southampton. The project description from the DTI is:
"The ability for the knowledge worker to rapidly search and access information within the system, such as design performance data, forms one of the key user requirements of a virtual design system. A measure of the usability of this type of system is the number of mouse clicks an engineer needs to perform in order to access the information he requires. We propose to investigate the functional requirements of an information/knowledge and process management system to enable the user to access information within 3 mouse clicks from search or request initiation to information retrieval, by technology development aimed at enhancing existing tools and techniques. These techniques will be demonstrated by deployment on the Application Case Studies scenarios, providing realistic feedback to guide further development."