Search results for: smart camera networks
73 A Qualitative Anthropological Analysis of Competing Health Perceptions in Chagas-Related Consultations in Non-Endemic Geneva
Authors: Marina Gold, Yves Jackson, David Parrat
Abstract:
The large number of Latin American migrants in Geneva from countries where Chagas disease is endemic (Bolivia, Brazil, Argentina, Colombia) is increasing the incidence of chronic Chagas-related problems, especially cardiovascular complications. The precarious migratory status of what are mostly undocumented migrants complicates access to health care and affects patients' and doctors' health perceptions regarding screening, treatment and monitoring of Chagas-related health concerns. This project results from a three-year collaboration between the Geneva University Hospital and the NGO Mundo Sano to understand the following questions: 1) How do Latin American migrants perceive their health? 2) What do they understand about Chagas disease? 3) Are patients' and doctors' health perceptions similar, or do they have competing agendas? This paper aims to present the results of a long-term study that interrogates health perceptions among Latin American migrants in Geneva. The first phase consisted of surveys completed at three community screening events (2016, 2017, 2018), and the results of these surveys reveal the subordination of the importance of health to that of having met economic family obligations. That is, health is important only when it becomes an impediment to economic gain. A contradictory result emerged: people are aware of the importance of health prevention in order to ensure long-term health, but they do not always have agency over their lifestyle habits (healthy food, regular exercise, emotional stability). The second phase of the research collected open-ended interviews with selected participants, in order to explore in more detail how Latin American migrants deal with Chagas in a socio-political and economic context different from that of endemic countries. These interviews (5 in total) reveal mixed methods of managing health: social networks, access to health care transnationally (in Geneva, Spain and back in their home country), and different valuations of health problems in each situation. The third phase consisted of observations of doctor-patient consultations and further extended interviews with patients to determine doctor/patient health perceptions around Chagas disease. This phase is ongoing, but it has yielded preliminary observations regarding the expectations that patients have of doctors, and doctors' understanding of patients' complex situations. Positive and complementary health perceptions include patients' feeling that doctors in Geneva are more understanding, more knowledgeable and less racist than those in their home country, who do not provide detailed information about Chagas or its treatment and discriminate against them for being indigenous or from poor rural areas; this perception enables better communication between doctors and patients. Possible conflicting health perceptions include patients addressing their health concerns more holistically and encountering the specialist's limitation of treating only one health concern, given time constraints and the lack of competition with their colleagues (the general practitioner who referred the patient, for example). The implications of this study extend beyond the case of Chagas disease in Geneva and are relevant for all chronic conditions and migratory contexts of precarity.
Keywords: Chagas disease, health perceptions, Latin American migrants, non-endemic countries
Procedia PDF Downloads 119
72 Media, Myth and Hero: Sacred Political Narrative in Semiotic and Anthropological Analysis
Authors: Guilherme Oliveira
Abstract:
The assimilation of images and their potential symbolism into lived experience is inherent. It is through this exercise of recognition via imagistic records that questions arise about the origins of a constant narrative stimulated by the media. The construction of the "Man" archetype and the reflections of active masculine imagery in the 21st century, when conveyed through media channels, could potentially have detrimental effects. Addressing this systematic behavioral chronology of the virile cisgender male, permeated imagistically through these means, involves exploring potential resolutions. Thus, an investigation process is initiated into the potential representation of the 'hero' in this media emulation through idols contextualized in the political sphere, with the purpose of elucidating the processes of simulation and emulation of narratives based on mythical, historical, and sacred accounts. In this process of sharing, the narratives contained in the imagistic structuring offered by information dissemination channels seek validation through a process of public acceptance. To achieve this consensus, a visual set adorned with mythological and sacred symbolisms adapted to the intended environment is promoted, thus utilizing sociocultural characteristics in favor of political marketing. Visual recognition, therefore, becomes a direct reflection of a cultural heritage acquired through lived human experience, stimulated by continuous representations throughout history. Echoes of imagery and narratives undergo a constant process of resignification of their concepts, sharpened by their premises and adapted to the environment in which they seek to establish themselves. The political figures analyzed in this article employ the practice of taking possession of symbolisms, mythological stories, and heroisms, and adapt their visual construction through a continuous praxis of emulation. Thus, they utilize iconic mythological narratives to gain credibility through belief. In this use of belief, the idol becomes the very act of release of trauma, offering believers liberation from preconceived concepts and allowing for the attribution of new meanings. To address this issue and highlight the subjectivities within the intention of the image, a linguistic, semiotic, and anthropological methodology is created. Linguistics uses expressions like 'Blaming the Image' to create a mechanism of expressive action, questioning why a construction or visual composition is to be blamed, and thus seeking answers in the first act. Semiotics and anthropology develop an imagistic atlas of graphic analysis, seeking to make connections, comparisons, and relations between modern and sacred/mystical narratives, emphasizing the different subjective layers of embedded symbolism. Thus, it constitutes a performative act of disarming the image. It creates a disenchantment of the superficial gaze under the constant reproduction of visual content stimulated by virtual networks, enabling a discussion about the acceptance of caricatures characterized by past fables.
Keywords: image, heroic narrative, media heroism, virile politics, political myth, sacred performance, visual mythmaking, characterization dynamics
Procedia PDF Downloads 50
71 Universal Health Coverage 2019 in Indonesia: The Integration of Family Planning Services in Current Functioning Health System
Authors: Fathonah Siti, Ardiana Irma
Abstract:
Indonesia is currently on track to achieve Universal Health Coverage (UHC) by 2019. The program aims to address issues of disintegration in the implementation and coverage of various health insurance schemes and fragmented fund pooling. Family planning services are covered as one of the benefit packages under preventive care. However, little has been done to examine how the family planning program is appropriately managed across levels of government and how family planning services are delivered to the end user. The study was performed through focus group discussions with related policy makers and selected programmers at central and district levels. The study also benefits from relevant studies on family planning in the UHC scheme and other supporting data. The study carefully investigates some programmatic implications when family planning is integrated in the UHC program, encompassing the need to recalculate contraceptive logistics for beneficiaries (eligible couples); policy reformulation for contraceptive service provision, including supply chain management; establishment of a family planning standard of procedure; and a call to update the Management Information System. The study confirms that there is a significant increase in the number of contraceptive commodities that need to be procured by the government. Assuming that the contraceptive prevalence rate and commodity costs increase at 0.5% annually as expected, the government needs to allocate almost IDR 5 billion by 2019, excluding fees for service. The government shifts its focus to maintaining eligible health facilities under National Population and Family Planning Board networks. By 2019, the government has set strategies to anticipate the provision of family planning services to 45,340 health facilities distributed across 514 districts and 7,000 sub-districts. A clear division of authorities has been established among levels of government. Three models of contraceptive supply planning have been developed and are currently in the process of being institutionalized. Pre-service training for family planning services has been piloted in 10 prominent universities. The position of private midwives has been appreciated as part of the system. To ensure quality of implementation and health expenditure control, a family planning standard has been established as a reference to determine the set of services required to be delivered properly to clients and the types of health facilities suited to conduct particular family planning services. Recognition of individual status of program participation has been acknowledged in the Family Enumeration since 2015. The data are precisely recorded by name and address for each family and its members. This supplies valuable information to 15,131 Family Planning Field Workers (FPFWs), who provide information and education related to family planning in an attempt to generate demand and maintain the participation of family planning acceptors who are program beneficiaries. Despite the overwhelming efforts described above, some obstacles remain. The program suffers from poor socialization and has yet to remove geographical barriers for those living in remote areas. Family planning services for this sub-population are provided outside the scheme as a complementary strategy. Nevertheless, the UHC program has brought remarkable improvement in access to and quality of family planning services.
Keywords: beneficiary, family planning services, national population and family planning board, universal health coverage
Procedia PDF Downloads 189
70 Parents’ Perceptions of the Consent Arrangements for Dental Public Health Programmes in North London: A Qualitative Exploration
Authors: Charlotte Jeavons, Charitini Stavropoulous, Nicolas Drey
Abstract:
Background: Over one-third of five-year-olds and almost half of all eight-year-olds in the UK have obvious caries experience that can be detected by visual screening techniques. School-based caries prevention programmes that apply fluoride varnish to young children's teeth operate in many areas of the UK. Their aim is to reduce dental caries in children. The Department of Health guidance (2009) on consent states that information must be provided to parents to enable informed, autonomous decision-making prior to any treatment involving their young children. Fluoride varnish schemes delivered in primary schools use letters for this purpose. Parents are expected to return these indicating their consent or refusal. A large proportion of parents do not respond. In the absence of positive consent, these children are excluded from the programme. Non-response is more common in deprived areas, creating inequality. The reason for this is unknown. The consent process used is underpinned by the ethical theory of deontology that is prevalent in clinical dentistry and widely accepted in bio-ethics. Objective: To investigate parents' views, understanding and experience of the fluoride varnish programme taking place in their child's school, including their views about the practical consent arrangements. Method: Schools participating in the fluoride varnish scheme operating in Enfield, North London, were asked to take part. Parents with children in nursery, reception, or year one were invited to participate via semi-structured interviews and focus groups. Thematic analysis was conducted. Findings: 40 parents were recruited from eight schools. The global theme of 'trust' was identified as the strongest influence on parental responses. Six themes were identified: protecting children from harm is viewed by parents as their role; parents have the capability to decide but lack confidence; sharing responsibility for their child's oral health with the State is welcomed by parents; existing relationships within parents' social networks strongly influence consent decisions; official dental information is not communicated effectively; sending a letter to parents and excluding them from meeting dental practitioners is ineffective. The information delivered via a letter was not strongly identified by parents as influencing their response. Conclusions: Personal contact with the person(s) providing information and requesting consent has a greater impact on parental consent responses than written information provided alone. This demonstrates that traditional bio-ethical ideas about rational decision-making, in which emotions are transcended and interference is not justified unless it prevents harm to an unaware person, are outdated. Parental decision-making is relational, and the consent process should be adapted to reflect this. The current system, which has a deontological view of decision-making at its core, impoverishes parental autonomy and may, ultimately, increase dental inequalities as a result.
Keywords: consent, decision, ethics, fluoride, parents
Procedia PDF Downloads 171
69 Role of Civil Society Institutions in Promoting Peace and Pluralism in the Rural, Mountainous Region of Pakistan
Authors: Mir Afzal
Abstract:
Introduction: Pakistan is a country with an ever-increasing population of great ethnic, cultural, religious and sectarian diversity. Whereas diversity is seen as a strength in many societies, in Pakistan it has become a source of conflict and more a weakness than a strength, due to lack of understanding and divisions based on ethnic, cultural, political, religious, and sectarian branding. However, amid conflicts and militancy across the country, the rural, mountainous communities in the Northern Areas of Pakistan enjoy not only peace and harmony but also a continuous process of social and economic transformation supported by strong civil society institutions. These community-based institutions have organized the rural, mountainous people of diverse ethnic and religious backgrounds into village organizations, women's organizations, and Local Support Organizations engaged in self-help development and peace building in the region. The Study and its Methodology: A qualitative study was conducted in one district of Northern Pakistan to explore the contributions of civil society institutions (CSIs) and community-based organizations to uplifting the educational and socio-economic conditions of the people, with the ultimate aim of developing a thriving, peaceful and pluralistic society in this mountainous region. The study employed an eclectic set of tools, including interviews, focused group discussions, observations of CSIs' interventions, and analysis of documents, to generate rich data on the overall role and contributions of CSIs in promoting peace and pluralism in the region. Significance of the Study: Common experience and empirical studies reveal that such interventions by CSIs have not only contributed to the socio-economic, educational, health and cultural development of these regions but have also transformed the rural, mountainous people into organized and forward-looking communities. However, how such interventions have contributed to promoting pluralism and appreciation for diversity in these regions had been an unexplored but significant area. Therefore, this qualitative research study, funded by the Higher Education Commission of Pakistan, was carried out by the Aga Khan University Institute for Educational Development to explore the role and contributions of CSIs in promoting peace, pluralism and appreciation for diversity in one district of Northern Pakistan, which is home to people of different ethnic, religious, cultural and social backgrounds. Findings and Conclusions: The study has a comprehensive list of findings and conclusions covering various aspects of CSIs and their contributions to the transformation and peaceful co-existence of rural communities in the region. However, this paper discusses only four major contributions of CSIs, namely enhancing economic capacity, community mobilization and organization, increasing access to and quality of education, and building partnerships. It also discusses the factors influencing the role of CSIs, the issues, implications, and recommendations for CSIs, policy makers, donors and development agencies, and researchers. The paper concludes that by strengthening networks of CSIs and community-based organizations, Pakistan will not only uplift its socio-economic attainments but will also be able to address the critical challenges of terrorism, sectarianism, and other divisions and conflicts in its various regions.
Keywords: civil society, Pakistan, peace, rural
Procedia PDF Downloads 521
68 Recognizing Human Actions by Multi-Layer Growing Grid Architecture
Authors: Z. Gharaee
Abstract:
Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in the performance. We use our experience and concepts to make a correct recognition of the actions. Although building action concepts is a life-long process, repeated throughout life, we are very efficient in applying our learned concepts when analyzing motions and recognizing actions. Experiments on subjects observing actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture is proposed using growing grid layers. The first-layer growing grid receives the pre-processed data of consecutive 3D postures of joint positions and applies some heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization. This grid creates the clusters representing the actions. Finally, a one-layer neural network trained with a delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset. The dataset contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw. The growing grid architecture was trained on several random selections of generalization test data fed to the system, for on average 100 epochs per training of the first-layer growing grid and around 75 epochs per training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the performance of the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior to the SOM architecture in the action recognition task. The SOM architecture learns the same dataset of actions in around 150 epochs for each training of the first-layer SOM, while it takes 1200 epochs for each training of the second-layer SOM, and it achieves an average recognition accuracy of 90% on generalization test data. In summary, using the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, the ability to learn without supervision, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance
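To make the growth mechanism concrete, below is a minimal sketch in Python of a growing-grid-style layer: a SOM-like 2D map with a best-matching-unit update and a growth step that inserts new neurons where quantization error accumulates. The grid size, learning rate, neighbourhood width and growth heuristic are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class GrowingGrid:
    """Minimal growing-grid sketch: a SOM-like 2D map that can insert new
    columns where quantization error accumulates (illustrative only;
    hyperparameters and the growth heuristic are assumptions)."""

    def __init__(self, dim, rows=2, cols=2, lr=0.1, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(rows, cols, dim))   # weight vectors on the grid
        self.err = np.zeros((rows, cols))             # accumulated quantization error
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        d = np.linalg.norm(self.W - x, axis=2)        # distance of x to every neuron
        return np.unravel_index(np.argmin(d), d.shape), d

    def train_step(self, x):
        (r, c), d = self.bmu(x)
        self.err[r, c] += d[r, c]                     # track error at the winner
        rr, cc = np.indices(self.err.shape)
        grid_dist2 = (rr - r) ** 2 + (cc - c) ** 2
        h = np.exp(-grid_dist2 / (2 * self.sigma ** 2))  # neighborhood function
        self.W += self.lr * h[..., None] * (x - self.W)

    def grow(self):
        # Insert a new column next to the neuron with the largest accumulated
        # error, initialising its weights by interpolation (growth-phase heuristic).
        r, c = np.unravel_index(np.argmax(self.err), self.err.shape)
        c2 = min(c + 1, self.W.shape[1] - 1)
        new_col = 0.5 * (self.W[:, c] + self.W[:, c2])
        self.W = np.insert(self.W, c + 1, new_col, axis=1)
        self.err = np.insert(self.err, c + 1, 0.0, axis=1)

# Toy usage: 60-dimensional "posture" vectors (e.g. 20 joints x 3 coordinates).
rng = np.random.default_rng(1)
grid = GrowingGrid(dim=60)
for epoch in range(5):
    for x in rng.normal(size=(200, 60)):
        grid.train_step(x)
    grid.grow()                                       # expand where demand is high
print("final grid shape:", grid.W.shape[:2])
```

In the architecture described above, a first grid of this kind would be trained on posture vectors, its activation patterns assembled into action pattern vectors, and a second grid would then categorize the time-invariant versions of those vectors.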
Procedia PDF Downloads 157
67 Expanding Access and Deepening Engagement: Building an Open Source Digital Platform for Restoration-Based Stem Education in the Largest Public-School System in the United States
Authors: Lauren B. Birney
Abstract:
This project focuses upon the expansion of the existing "Curriculum and Community Enterprise for the Restoration of New York Harbor in New York City Public Schools" (NSF EHR DRL 1440869, NSF EHR DRL 1839656 and NSF EHR DRL 1759006). This project is recognized locally as "Curriculum and Community Enterprise for Restoration Science," or CCERS. CCERS is a comprehensive model of ecological restoration-based STEM education for urban public-school students. Following an accelerated rollout, CCERS is now being implemented in 120+ Title 1 funded NYC Department of Education middle schools, led by two cohorts of 250 teachers and serving more than 11,000 students in total. Initial results and baseline data suggest that the CCERS model, with the Billion Oyster Project (BOP) as its local restoration ecology-based STEM curriculum, is having profound impacts on students, teachers, school leaders, and the broader community of CCERS participants and stakeholders. Students and teachers report being receptive to the CCERS model and deeply engaged in the initial phase of curriculum development, citizen science data collection, and student-centered, problem-based STEM learning. The BOP CCERS Digital Platform will serve as the central technology hub for all research, data, data analysis, resources, materials and student data, and will promote global interactions between communities. Research conducted included qualitative and quantitative data analysis. We continue to work internally on making edits and changes to accommodate a dynamic society. The STEM Collaboratory NYC® at Pace University New York City has acted as the prime institution for the BOP CCERS project since the project's inception in 2014. The project continues to strive to provide opportunities in STEM for underrepresented and underserved populations in New York City. The replicable model serves as an opportunity for other entities to create this type of collaboration within their own communities and to bring a community together to address a notable local issue. Providing opportunities for young students to engage in community initiatives allows for a more cohesive set of stakeholders, gives young people the ability to network, and provides additional resources for those students in need of additional support, resources and structure. The project has planted more than 47 million oysters across 12 acres and 15 reef sites, with the help of more than 8,000 students and 10,000 volunteers. Additional enhancements and features of the BOP CCERS Digital Platform will continue over the next three years through funding provided by the National Science Foundation (NSF DRL EHR 1759006/1839656; Principal Investigator Dr. Lauren Birney, Professor, Pace University). Early results from the data indicate that the new version of the Platform is creating traction both nationally and internationally among community stakeholders and constituents. The project continues to focus on new collaborative partners that will support underrepresented students in STEM education. The advanced Digital Platform will allow us to connect with other countries and networks on a larger global scale.
Keywords: STEM education, environmental restoration science, technology, citizen science
Procedia PDF Downloads 86
66 The Relevance of Community Involvement in Flood Risk Governance Towards Resilience to Groundwater Flooding. A Case Study of Project Groundwater Buckinghamshire, UK
Authors: Claude Nsobya, Alice Moncaster, Karen Potter, Jed Ramsay
Abstract:
The shift in Flood Risk Governance (FRG) has moved away from traditional approaches that relied solely on centralized decision-making and structural flood defenses. Instead, there is now the adoption of integrated flood risk management measures that involve various actors and stakeholders. This new approach emphasizes people-centered approaches, including adaptation and learning. This shift to a diversity of FRG approaches has been identified as a significant factor in enhancing resilience. Resilience here refers to a community's ability to withstand, absorb, recover, adapt, and potentially transform in the face of flood events. It is argued that if FRG merely focused on the conventional 'fighting the water' approach (flood defense), communities would not be resilient. The move to these people-centered approaches also implies that communities will be more involved in FRG. It is suggested that effective flood risk governance influences resilience through meaningful community involvement, and effective community engagement is vital in shaping community resilience to floods. Successful community participation not only uses context-specific indigenous knowledge but also develops a sense of ownership and responsibility. Through capacity development initiatives, it can also raise awareness, and all of this helps in building resilience. Recent Flood Risk Management (FRM) projects have thus had increasing community involvement, with varied conceptualizations of such community engagement in the academic literature on FRM. In the context of overland floods, there has been a substantial body of literature on Flood Risk Governance and Management. Yet groundwater flooding has received little attention despite its unique qualities, such as its persistence for weeks or months, slow onset, and near-invisibility. There has been little study in this area on how successful community involvement in Flood Risk Governance may improve community resilience to groundwater flooding in particular. This paper focuses on a case study of a flood risk management project in the United Kingdom. Buckinghamshire Council is leading Project Groundwater, which is one of 25 significant initiatives sponsored by England's Department for Environment, Food and Rural Affairs (DEFRA) Flood and Coastal Resilience Innovation Programme. DEFRA awarded Buckinghamshire Council and other councils £150 million to collaborate with communities and implement innovative methods to increase resilience to groundwater flooding. Based on a literature review, this paper proposes a new paradigm for effective community engagement in FRG. This study contends that effective community participation can have an impact on various resilience capacities identified in the literature, including social capital, institutional capital, physical capital, natural capital, human capital, and economic capital. In the case of social capital, for example, successful community engagement can influence social capital through the process of social learning as well as through developing social networks and trust values, which are vital in influencing communities' capacity to resist, absorb, recover, and adapt. The study examines community engagement in Project Groundwater using surveys with local communities and documentary analysis to test this notion. The outcomes of the study will inform community involvement activities in Project Groundwater and may shape DEFRA policies and guidelines for community engagement in FRM.
Keywords: flood risk governance, community, resilience, groundwater flooding
Procedia PDF Downloads 70
65 Rheological Properties of Thermoresponsive Poly(N-Vinylcaprolactam)-g-Collagen Hydrogel
Authors: Serap Durkut, A. Eser Elcin, Y. Murat Elcin
Abstract:
Stimuli-sensitive polymeric hydrogels have received extensive attention in the biomedical field due to their sensitivity to physical and chemical stimuli (temperature, pH, ionic strength, light, etc.). This study describes the rheological properties of a novel thermoresponsive poly(N-vinylcaprolactam)-g-collagen hydrogel. In the study, we first synthesized a novel carboxyl-group-terminated thermoresponsive poly(N-vinylcaprolactam) (PNVCL-COOH) via a facile free radical polymerization. This compound was then effectively grafted onto native collagen by forming covalent bonds between the carboxylic acid groups at the end of the chains and the amine groups of the collagen using the cross-linking agents EDC/NHS, forming PNVCL-g-Col. The newly formed hybrid hydrogel displayed novel properties, such as increased mechanical strength and thermoresponsive characteristics. PNVCL-g-Col showed a lower critical solution temperature (LCST) at 38 °C, which is very close to body temperature. Rheological studies determine the structural-mechanical properties of materials and serve as a valuable tool for characterizing them. The rheological properties of hydrogels are described in terms of two dynamic mechanical properties: the elastic modulus G′ (also known as dynamic rigidity), representing the reversible stored energy of the system, and the viscous modulus G″, representing the irreversible energy loss. In order to characterize PNVCL-g-Col, the rheological properties were measured as a function of temperature and time during the phase transition. Below the LCST, favorable interactions allowed the dissolution of the polymer in water via hydrogen bonding. At temperatures above the LCST, PNVCL molecules within PNVCL-g-Col aggregated due to dehydration, causing the hydrogel structure to become dense. When the temperature reached ~36 °C, the G′ and G″ values crossed over. This indicates that PNVCL-g-Col underwent a sol-gel transition, forming an elastic network. Following a temperature plateau at 38 °C, near human body temperature, the sample displayed stable elastic network characteristics. The G′ and G″ values of the PNVCL-g-Col solutions increased sharply in the 6-9 minute interval, due to the rapid transformation into a gel-like state and the formation of elastic networks. Copolymerization with collagen leads to an increase in G′, as the collagen structure contains flexible polymer chains, which bestow elastic properties. The elasticity of the proposed structure correlates with the number of intermolecular cross-links in the hydrogel network, increasing viscosity. However, at 8 minutes, the G′ and G″ values decreased sharply for pure collagen solutions due to the decomposition of the elastic and viscous network. Complex viscosity is related to the mechanical performance and the resistance opposing deformation of the hydrogel. The complex viscosity of the PNVCL-g-Col hydrogel changed drastically with temperature, and the mechanical performance of the PNVCL-g-Col hydrogel network increased, exhibiting less deformation. Rheological assessment of the novel thermoresponsive PNVCL-g-Col hydrogel showed that the network has stronger mechanical properties due to both permanent stable covalent bonds and temperature-dependent physical interactions, such as hydrogen and hydrophobic bonds.
Keywords: poly(N-vinylcaprolactam)-g-collagen, thermoresponsive polymer, rheology, elastic modulus, stimuli-sensitive
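For reference, the standard small-amplitude oscillatory shear relations behind these quantities are summarized below; these are textbook definitions, not values measured in this study.

```latex
% Standard small-amplitude oscillatory shear relations (textbook definitions,
% not data from the study). For a strain \gamma(t) = \gamma_0 \sin(\omega t),
% the stress response defines the storage and loss moduli:
\sigma(t) = \gamma_0\left[\,G'(\omega)\sin(\omega t) + G''(\omega)\cos(\omega t)\,\right]
% Complex modulus and complex viscosity:
G^{*}(\omega) = G'(\omega) + i\,G''(\omega), \qquad
|\eta^{*}(\omega)| = \frac{\sqrt{G'^{\,2} + G''^{\,2}}}{\omega}
% Loss tangent; the sol-gel transition is commonly located where G' and G'' cross:
\tan\delta = \frac{G''}{G'}, \qquad G' = G'' \;\Leftrightarrow\; \tan\delta = 1
```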
Procedia PDF Downloads 243
64 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks
Authors: Nicholas Aerne, John P. Parmigiani
Abstract:
There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring. It has also been concluded that it is highly likely that human activity, specifically the combustion of fossil fuels, is a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource and will eventually become scarce, and alternatives, such as clean or renewable energy, will be needed. Third, operations to obtain fossil fuels such as fracking, off-shore oil drilling, and strip mining are expensive and harmful to the environment. Given these environmental impacts, there is a need to replace fossil fuels with renewable energy sources as a primary energy source. Various sources of renewable energy exist. Many familiar sources obtain renewable energy from the sun and the natural environments of the earth. Common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Obtaining significant energy from these sources often requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels, etc.). Other sources of renewable energy are found in the man-made environment. An example is municipal water distribution systems. The movement of water through the pipelines of these systems typically requires the reduction of hydraulic pressure through the use of pressure reducing valves. These valves are needed to reduce upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy source could be safely accessed. This paper provides a technical description and analysis of one such means, proposed by the technology company InPipe Energy, to generate hydroelectricity by harvesting energy from municipal water distribution pressure reducing valve stations. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves in municipal water distribution systems. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing. The system replicates the functionality of the pressure reducing valve and ensures accurate control of downstream pressure. At a typical water-distribution-system pressure drop of 60 psi, the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, this proposed method appears to offer a viable means of producing significant amounts of clean renewable energy from existing pressure reducing valves.
Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply
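As a rough consistency check, the hydraulic power recoverable across a pressure drop follows P = ηQΔp. The flow rate is not reported in the abstract, so the value below is inferred from the quoted figures and is purely illustrative.

```latex
% Hydraulic power recovered across a pressure drop (illustrative back-of-envelope;
% the flow rate is not reported in the abstract and is inferred here):
P = \eta \, Q \, \Delta p
% With \Delta p = 60\ \mathrm{psi} \approx 4.1 \times 10^{5}\ \mathrm{Pa},
% \eta = 0.64 and P \approx 5\ \mathrm{kW}:
Q = \frac{P}{\eta\,\Delta p}
  \approx \frac{5000\ \mathrm{W}}{0.64 \times 4.1 \times 10^{5}\ \mathrm{Pa}}
  \approx 1.9 \times 10^{-2}\ \mathrm{m^{3}/s} \approx 19\ \mathrm{L/s}
```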
Procedia PDF Downloads 204
63 Zinc Oxide Varistor Performance: A 3D Network Model
Authors: Benjamin Kaufmann, Michael Hofstätter, Nadine Raidl, Peter Supancic
Abstract:
ZnO varistors are the leading overvoltage protection elements in today's electronic industry. Their highly non-linear current-voltage characteristics, very fast response times, good reliability and attractive cost of production are unique in this field. Nevertheless, challenges and open questions remain. In particular, the push to create ever smaller, more versatile and reliable parts that fit industry's demands brings manufacturers to the limits of their abilities. Although the varistor effect of sintered ZnO has been known since the 1960s, and much work has been done in this field to explain the sudden exponential increase of conductivity, the strict dependency on sintering parameters, as well as the influence of the complex microstructure, is not sufficiently understood. For further enhancement and down-scaling of varistors, a better understanding of the microscopic processes is needed. This work attempts a microscopic approach to investigate ZnO varistor performance. In order to cope with the polycrystalline varistor ceramic and to account for all possible current paths through the material, a realistic model of the microstructure was set up in the form of three-dimensional networks in which every grain has a constant electric potential and voltage drops occur only at the grain boundaries. The electro-thermal workload, depending on different grain size distributions, was investigated, as was the influence of the metal-semiconductor contact between the electrodes and the ZnO grains. A number of experimental methods are used, firstly, to feed the simulations with realistic parameters and, secondly, to verify the obtained results. These methods are: a micro 4-point probes method system (M4PPS) to investigate the current-voltage characteristics between single ZnO grains and between ZnO grains and the metal electrode inside the varistor, micro lock-in infrared thermography (MLIRT) to detect current paths, electron back-scattering diffraction and piezoresponse force microscopy to determine grain orientations, atom probe to determine atomic substituents, and Kelvin probe force microscopy to investigate grain surface potentials. The simulations showed that, within a critical voltage range, the current flow is localized along paths which represent only a tiny part of the available volume. This effect could be observed via MLIRT. Furthermore, the simulations show that the electric power density, which is inversely proportional to the number of active current paths (since this number determines the electrically active volume), depends on the grain size distribution. M4PPS measurements showed that the electrode-grain contacts behave like Schottky diodes and are crucial for asymmetric current path development. Furthermore, evaluation of the data suggests that current flow is influenced by grain orientations. The present results deepen the knowledge of the microscopic factors influencing ZnO varistor performance and can provide recommendations on fabrication for obtaining more reliable ZnO varistors.
Keywords: metal-semiconductor contact, Schottky diode, varistor, zinc oxide
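The grain-boundary network idea can be illustrated with a small, hedged sketch: grains become nodes of a 3D lattice held at single potentials, each grain boundary carries a non-linear power-law current, and Kirchhoff's current law is solved for the free node potentials. The lattice size, non-linearity coefficient, current scale and breakdown-voltage statistics below are placeholders, not the parameters used in the study.

```python
import numpy as np
from scipy.optimize import fsolve

NX, NY, NZ = 4, 4, 4                      # grains on a small cubic lattice (assumed)
ALPHA, I0 = 5.0, 1e-3                     # modest non-linearity coefficient and current scale
rng = np.random.default_rng(0)

def idx(i, j, k):
    return (i * NY + j) * NZ + k

# Enumerate grain-boundary "edges" between neighbouring grains, each with its
# own breakdown voltage Vb to mimic microstructural disorder.
edges, vb = [], []
for i in range(NX):
    for j in range(NY):
        for k in range(NZ):
            for di, dj, dk in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                ii, jj, kk = i + di, j + dj, k + dk
                if ii < NX and jj < NY and kk < NZ:
                    edges.append((idx(i, j, k), idx(ii, jj, kk)))
                    vb.append(rng.normal(3.2, 0.3))  # ~3.2 V per boundary (assumed)
edges, vb = np.array(edges), np.array(vb)

def boundary_current(dv, vb):
    # Power-law grain-boundary I-V: I = I0 * (|dV| / Vb)^alpha, sign-preserving.
    return np.sign(dv) * I0 * (np.abs(dv) / vb) ** ALPHA

top = [idx(i, j, NZ - 1) for i in range(NX) for j in range(NY)]   # electrode at V_app
bottom = [idx(i, j, 0) for i in range(NX) for j in range(NY)]     # grounded electrode
free = [n for n in range(NX * NY * NZ) if n not in top + bottom]

def solve(v_app):
    def residual(v_free):
        v = np.zeros(NX * NY * NZ)
        v[top], v[free] = v_app, v_free
        i_edge = boundary_current(v[edges[:, 0]] - v[edges[:, 1]], vb)
        kcl = np.zeros_like(v)                     # Kirchhoff current law per grain
        np.add.at(kcl, edges[:, 0], -i_edge)
        np.add.at(kcl, edges[:, 1], i_edge)
        return kcl[free]
    v_free = fsolve(residual, np.full(len(free), v_app / 2))
    v = np.zeros(NX * NY * NZ)
    v[top], v[free] = v_app, v_free
    i_edge = boundary_current(v[edges[:, 0]] - v[edges[:, 1]], vb)
    return -np.sum(i_edge[np.isin(edges[:, 0], bottom)])          # current into ground

for v_app in (6.0, 9.0, 12.0):
    print(f"V = {v_app:5.1f} V  ->  I = {solve(v_app):.3e} A")
```

Real varistors exhibit far higher non-linearity coefficients (often 30 and above); a modest value is used here only so that the toy solver converges easily.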
Procedia PDF Downloads 281
62 Automatic Content Curation of Visual Heritage
Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz
Abstract:
Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, of which 52'000 posters were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. The challenge has similarities with the domain of song playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate their generation. Promising results were found thanks to Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm, but applied them to a challenging medium: posters. First, a convolutional autoencoder was trained to extract features of the posters. The 52'000 digital posters were used as a training set. Poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the best proximity to the previous poster is selected. The mean squared distance between poster features was used to compute the proximity. To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. Manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25% and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45% and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to designer sequences, and as reported by participants, model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing compared to random sampling. Occasionally, our algorithm is even able to outperform a professional designer. As a next step, the proposed algorithm should include the possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content.
Keywords: artificial intelligence, digital humanities, serendipity, design research
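A compact, hedged sketch of such a pipeline is given below: autoencoder features, k-means clusters, an LSTM that predicts the next cluster, and nearest-neighbour selection within that cluster. Layer sizes, the number of clusters and the omitted training loops are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                 # 3x64x64 poster thumbnail -> latent
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent_dim))
        self.decoder = nn.Sequential(                 # latent -> reconstructed thumbnail
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class NextClusterRNN(nn.Module):
    def __init__(self, n_clusters=20, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_clusters, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_clusters)

    def forward(self, cluster_ids):                   # (batch, seq_len) of cluster ids
        h, _ = self.rnn(self.emb(cluster_ids))
        return self.out(h[:, -1])                     # logits for the next cluster

# Toy end-to-end run on random tensors standing in for poster images and sequences.
posters = torch.rand(200, 3, 64, 64)                  # placeholder for digitised posters
ae = ConvAutoencoder()
with torch.no_grad():                                  # (training loops omitted for brevity)
    _, feats = ae(posters)

n_clusters = 20
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats.numpy())
labels = torch.tensor(km.labels_, dtype=torch.long)

rnn = NextClusterRNN(n_clusters)
displayed = list(range(5))                             # indices of already-shown posters
history = labels[displayed].unsqueeze(0)               # their cluster ids, shape (1, 5)
next_cluster = rnn(history).argmax(dim=1).item()       # predicted cluster for the next one

# Within the predicted cluster, pick the poster closest to the last displayed one.
members = (labels == next_cluster).nonzero(as_tuple=True)[0]
dists = ((feats[members] - feats[displayed[-1]]) ** 2).mean(dim=1)
print("next poster index:", members[dists.argmin()].item())
```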
Procedia PDF Downloads 184
61 Red Dawn in the Desert: A World-Systems Analysis of the Maritime Silk Road Initiative
Authors: Toufic Sarieddine
Abstract:
The current debate on the hegemonic impact of China's Belt and Road Initiative (BRI) comprises two opposing strands: resilient and absolute US hegemony on the one hand, and various models of multipolar hegemony, such as bifurcation, on the other. Bifurcation theories posit an unprecedented division of hegemonic functions between China and the US, whereby Beijing becomes the world's economic hegemon, leaving Washington the world's military hegemon and security guarantor. While consensus points to China being the main driver of unipolarity's rupturing, the debate among bifurcationists is over the location of the first rupture. In this regard, the Middle East and North Africa (MENA) region has seen increasing Chinese foreign direct investment in recent years while that to other regions has declined, ranking it second in 2018 as part of the financing for the Maritime Silk Road Initiative (MSRI). China has also become the top trade partner of 11 states in the MENA region, as well as its top source of machine imports, surpassing the US and achieving an overall trade surplus almost double that of Washington's. These are among other features outlined in the world-systems analysis (WSA) literature which correspond with the emergence of a new hegemon. WSA is further utilized to gauge other facets of China's increasing involvement in MENA and assess whether bifurcation is unfolding therein. These features of hegemony include the adoption of China's modus operandi, economic dominance in production, trade, and finance, military capacity, cultural hegemony in ideology, education, and language, and the promotion of a general interest around which to rally potential peripheries (MENA states in this case). China's modus operandi has seen some adoption with regard to support against the United Nations Convention on the Law of the Sea, oil bonds denominated in the yuan, and financial institutions such as the Shanghai Gold Exchange enjoying increasing Arab patronage. However, recent elections in Qatar, as well as liberal reforms in Saudi Arabia, demonstrate Washington's stronger normative influence. Meanwhile, Washington's economic dominance is challenged by China's sizable machine exports, increasing overall imports, and widening trade surplus, but it retains some clout via dominant arms and transport exports, as well as free-trade deals across the region. Militarily, Washington bests Beijing's arms exports, has a dominant and well-established presence in the region, and successfully blocked Beijing's attempt to penetrate through the UAE. Culturally, Beijing enjoys higher favorability in Arab public opinion, and its broadcast networks have found some resonance with Arab audiences. In education, the West remains MENA students' preferred destination. Further, while Mandarin has become increasingly available in schools across MENA, its usage and availability still lag far behind English. Finally, Beijing's general interest in infrastructure provision and its prioritization of economic development over social justice and democracy provide an avenue for increased incorporation between Beijing and the MENA region. The overall analysis shows solid progress towards bifurcation in MENA.
Keywords: belt and road initiative, hegemony, Middle East and North Africa, world-systems analysis
Procedia PDF Downloads 108
60 Drones, Rebels and Bombs: Explaining the Role of Private Security and Expertise in a Post-piratical Indian Ocean
Authors: Jessica Kate Simonds
Abstract:
The last successful hijacking perpetrated by Somali pirates in 2012 represented a critical turning point for the identity and brand of Indian Ocean (IO) insecurity, coined in this paper as the era of the post-piratical. This paper explores the broadening of the private maritime security company (PMSC) business model to account for and contribute to the design of a new IO security environment that prioritises foreign and insurgency drone activity and Houthi rebel operations as the main threats to merchant shipping in the post-2012 era. This study is situated within a longer history of analysing maritime insecurity and also contributes a bespoke conceptual framework that understands the sea as a space that is produced and reproduced relative to existing and emerging threats to merchant shipping, based on bespoke models of information sharing and intelligence acquisition. This paper also makes a prominent empirical contribution by drawing on a post-positivist methodology, with data drawn from original semi-structured interviews with senior maritime insurers and active merchant seafarers, triangulated with industry-produced guidance such as the BMP series as primary data sources. Each set is analysed through qualitative discourse and content analysis and supported by the quantitative data sets provided by the IMB Piracy Reporting Centre and intelligence networks. This analysis reveals that mechanisms such as the IGP&I Maritime Security Committee and the intelligence divisions of PMSCs have driven the exchange of knowledge between land and sea, and thus the reproduction of the maritime security environment, through new regulations and guidance that account for drones, rebels and bombs as the key challenges in the IO beyond piracy. A contribution of this paper is the argument that experts who may not be in the highest-profile jobs are the architects of maritime insecurity, based on their detailed knowledge of and connections to vessels in transit. This paper shares the original insights of those who have served in critical decision-making spaces to demonstrate that the development and refinement of industry-produced deterrence guidance, credited with the mitigation of piracy, has shaped new editions such as BMP 5 that now serve to frame a new security environment prioritising the mitigation of risks from drones and WBIEDs from both state and insurgency risk groups. By highlighting the experiences and perspectives of key players on land and at sea, the key finding of this paper is that, as pirates experienced a financial boom by profiteering from their bespoke business model during the peak of successful hijackings, the private security market encountered a similar level of financial success and a guaranteed risk environment in which to prospect for business. Thus, the reproduction of the Indian Ocean as a maritime security environment reflects a new-found purpose for PMSCs as part of the broader conglomerate of maritime insurers, regulators, shipowners and managers who continue to redirect the security consciousness and the IO brand of insecurity.
Keywords: maritime security, private security, risk intelligence, political geography, international relations, political economy, maritime law, security studies
Procedia PDF Downloads 184
59 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning
Authors: Pinzhe Zhao
Abstract:
This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way to cope with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation also plays a role, as students driven primarily by external rewards, such as grades, are more likely to cheat than those motivated by intrinsic learning goals. In this study, data on students' psychological traits are collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students' cheating tendencies. These models are trained and evaluated using metrics like accuracy, precision, recall, and F1 scores to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps in promoting academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers an approach that integrates psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment.
Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity
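As an illustration of the kind of predictive pipeline described, the following sketch trains a random forest on synthetic trait scores and reports the same metrics; the feature names, simulated effect sizes and classifier settings are assumptions, not the study's data or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical standardised trait scores per student.
test_anxiety    = rng.normal(size=n)
moral_reasoning = rng.normal(size=n)
self_efficacy   = rng.normal(size=n)
extrinsic_motiv = rng.normal(size=n)
attendance_rate = rng.uniform(0.5, 1.0, size=n)

# Synthetic "cheating tendency" label: higher anxiety and extrinsic motivation,
# lower moral reasoning, self-efficacy and attendance raise the simulated risk.
logit = (0.9 * test_anxiety - 0.8 * moral_reasoning - 0.6 * self_efficacy
         + 0.7 * extrinsic_motiv - 1.5 * attendance_rate
         + rng.normal(scale=0.5, size=n))
y = (logit > np.quantile(logit, 0.8)).astype(int)      # top ~20% flagged as high risk

X = np.column_stack([test_anxiety, moral_reasoning, self_efficacy,
                     extrinsic_motiv, attendance_rate])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                           stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", round(accuracy_score(y_te, pred), 3))
print("precision:", round(precision_score(y_te, pred), 3))
print("recall   :", round(recall_score(y_te, pred), 3))
print("f1       :", round(f1_score(y_te, pred), 3))
```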
Procedia PDF Downloads 20
58 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques
Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu
Abstract:
Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect and prevent CHD symptoms early in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
Keywords: coronary heart disease, cloud-based AI, machine learning, novel simulation techniques, early detection, preventive healthcare
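The following sketch illustrates one way such a streaming step could look: simple heart-rate-variability features are computed from a simulated window of RR intervals and scored with an ensemble classifier. The window length, feature set and risk labels are assumptions made for the example, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def hrv_features(rr_ms):
    """Basic time-domain HRV features from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                              # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))         # short-term variability
    mean_hr = 60000.0 / rr.mean()                      # mean heart rate (bpm)
    return np.array([sdnn, rmssd, mean_hr])

rng = np.random.default_rng(7)

def simulate_window(at_risk):
    """Simulate a 60-beat window; 'at-risk' windows get lower variability and a higher rate."""
    base = 700 if at_risk else 850                     # mean RR interval (ms)
    spread = 15 if at_risk else 45
    return rng.normal(base, spread, size=60)

# Build a toy training set of windows, then fit an ensemble classifier.
labels = rng.integers(0, 2, size=500)
X = np.array([hrv_features(simulate_window(bool(l))) for l in labels])
clf = GradientBoostingClassifier(random_state=0).fit(X, labels)

# Score one incoming "live" window as it would arrive from the streaming pipeline.
incoming = hrv_features(simulate_window(at_risk=True)).reshape(1, -1)
print("estimated risk probability:", round(clf.predict_proba(incoming)[0, 1], 3))
```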
Procedia PDF Downloads 64
57 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure
Authors: Sara Saboonian, Pierre Filion
Abstract:
The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies incompatibilities and the possibility of amalgamating the two alternatives in an attempt to develop a comprehensive alternative to the suburban model that advocates density, multi-functionality and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. These three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents' preferences for alternative models, and interviewing officials who deal with local planning for the centres. Moreover, the research draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles. First, the dominant status of intensification and the obstacles confronting intensification have monopolized planners' concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted socio-economic benefits of green infrastructure reduces residents' participation. The results from the research also provide insight into the predominant urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such blending, dexterous, context-specific planning is required. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. First, producing ecosystem-services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for implementation programs. This knowledge base should be translated so as to effectively engage different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks alongside green infrastructure measures, are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.
Keywords: ecosystem services, green infrastructure, intensification, planning
Procedia PDF Downloads 355
56 The Istrian Istrovenetian-Croatian Bilingual Corpus
Authors: Nada Poropat Jeletic, Gordana Hrzica
Abstract:
Bilingual conversational corpora represent a meaningful and highly comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingual research, since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, the amount of each language used, the number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Designing a bilingual spoken corpus is methodologically demanding; this paper therefore describes the methodological challenges that apply to the design of the conversational Istrian Istrovenetian-Croatian Bilingual Corpus. Croatian is the first official language of the Croatian-Italian officially bilingual Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca on the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, its public availability allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within corpus linguistics enable easy retrieval and analysis of information. The method of language sampling is kept at the level of spontaneous communication, in order to maximise the naturalness of the collected conversational data. All speakers have provided written informed consent in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and about language exposure and usage in their social networks. Recordings are transcribed, phonologically adapted into a standardized orthographic form, coded, segmented (speech streams are segmented into communication units based on syntactic criteria), and marked up following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus currently consists of transcribed sound recordings of 36 bilingual speakers; the target is to publish the whole corpus by the end of 2020, sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria to ensure representativeness (participants are being recruited across three generations of native bilingual speakers throughout the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community.
Research on communities with societal bilingualism will contribute to the growing body of work on bilingualism and multilingualism, especially regarding language dominance, language attrition and loss, interference, code-switching, etc.Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology
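To illustrate the kind of analysis such a transcribed corpus enables, the following minimal Python sketch tallies turn-level language switches in a toy, CHAT-style excerpt; the speaker codes, utterances and language markers are invented for illustration and do not reflect the corpus' actual annotation scheme.

```python
# Hypothetical sketch: count code-switch points across turns in a toy
# CHAT-style excerpt. The "[- xxx]" language markers are illustrative only.

import re

transcript = """\
*SPK1: [- ita] xe rivada ieri .
*SPK2: [- hrv] stvarno ?
*SPK1: [- ita] si , e dopo semo andai fora .
*SPK2: [- hrv] super , bas mi je drago .
"""

switches = 0
previous_lang = None
for line in transcript.splitlines():
    match = re.search(r"\[- (\w+)\]", line)   # language marker in the utterance
    if not match:
        continue
    lang = match.group(1)
    if previous_lang is not None and lang != previous_lang:
        switches += 1                          # the language changes between turns
    previous_lang = lang

print(f"code-switch points across turns: {switches}")
```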
Procedia PDF Downloads 145
55 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field that has produced many implementations for both research and practical purposes. Despite the popularity of implementations that use non-invasive neuroimaging methods, a radical improvement in channel bandwidth, and thus decoding accuracy, is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, the effective analysis of which requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing and for decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was done both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous, non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input. To assess decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows movement to be decoded with high accuracy. Such a setup provides a means for controlling devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
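For orientation only, here is a minimal 1-D convolutional regressor of the general kind described (not the authors' architecture): it maps a one-second multichannel ECoG segment to a single accelerometer value and scores the prediction with a Pearson correlation; the channel count, sampling rate and layer sizes are assumptions.

```python
# Minimal sketch of a 1-D CNN regressor for ECoG-to-kinematics decoding.
# Channel count, segment length and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

N_CHANNELS = 32      # assumed number of ECoG electrodes
SEGMENT_LEN = 1000   # assumed 1 s at 1 kHz

class ECoGDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=11, stride=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)   # predicted finger acceleration

    def forward(self, x):              # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = ECoGDecoder()
segments = torch.randn(8, N_CHANNELS, SEGMENT_LEN)   # dummy batch of segments
targets = torch.randn(8, 1)                          # dummy accelerometer values
preds = model(segments)

# Decoding accuracy as Pearson correlation between prediction and target.
vx, vy = preds.detach().squeeze(), targets.squeeze()
r = ((vx - vx.mean()) * (vy - vy.mean())).sum() / (
    vx.std(unbiased=False) * vy.std(unbiased=False) * len(vx))
print(float(r))
```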
Procedia PDF Downloads 177
54 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach
Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert
Abstract:
Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition through a proposed switch from a centralized district heating system towards a distributed, heat pump-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is that existing networks are limited with regard to their installed capacities; additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, is that indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by a risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted, in which each energy technology is modeled in its own customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used to model the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology. With the models set in place, different scenarios based on forecasted electricity market prices were developed for both present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat; this indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios based on current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase to 40%. In the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps is limited to 15%. In terms of the levelized cost of heat, residential heat pump technology becomes competitive only within a scenario of decreasing electricity prices; in this case, the district heating system is characterized by an average cost of heat generation 7% higher than the distributed heat pump option.Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems
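As a rough illustration of the levelized cost of heat indicator used for the comparison, one common definition is discounted lifetime costs divided by discounted heat delivered; the minimal Python sketch below uses hypothetical investment, cost and heat figures rather than the study's data.

```python
# Minimal sketch of a levelized cost of heat (LCOH) calculation:
# discounted lifetime costs / discounted heat delivered.
# All numbers below are hypothetical placeholders, not the study's inputs.

def levelized_cost_of_heat(investment, annual_costs, annual_heat, discount_rate):
    """annual_costs, annual_heat: per-year lists over the system lifetime."""
    discounted_costs = investment + sum(
        c / (1 + discount_rate) ** (t + 1) for t, c in enumerate(annual_costs))
    discounted_heat = sum(
        q / (1 + discount_rate) ** (t + 1) for t, q in enumerate(annual_heat))
    return discounted_costs / discounted_heat   # e.g. cost per MWh of heat

# 20-year example with constant operating cost and heat output (illustrative).
print(levelized_cost_of_heat(investment=200_000,
                             annual_costs=[12_000] * 20,
                             annual_heat=[400] * 20,      # MWh of heat per year
                             discount_rate=0.05))
```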
Procedia PDF Downloads 150
53 A Single Cell Omics Experiments as Tool for Benchmarking Bioinformatics Oncology Data Analysis Tools
Authors: Maddalena Arigoni, Maria Luisa Ratto, Raffaele A. Calogero, Luca Alessandri
Abstract:
The presence of tumor heterogeneity, where distinct cancer cells exhibit diverse morphological and phenotypic profiles, including gene expression, metabolism, and proliferation, poses challenges for molecular prognostic markers and patient classification for targeted therapies. Understanding the causes and progression of cancer requires research efforts aimed at characterizing heterogeneity, which can be facilitated by evolving single-cell sequencing technologies. However, analyzing single-cell data requires computational methods that often lack objective validation. Therefore, the establishment of benchmarking datasets is necessary to provide a controlled environment for validating bioinformatics tools in the field of single-cell oncology. Benchmarking bioinformatics tools for single-cell experiments can be expensive; datasets used for benchmarking are therefore typically sourced from publicly available experiments, which often lack comprehensive cell annotation. This limitation can affect the accuracy and effectiveness of such experiments as benchmarking tools. To address this issue, we introduce omics benchmark experiments designed to evaluate bioinformatics tools for depicting the heterogeneity in single-cell tumor experiments. We conducted single-cell RNA sequencing on six lung cancer cell lines that display resistant clones upon treatment of EGFR-mutated tumors and are characterized by the driver genes ROS1, ALK, HER2, MET, KRAS, and BRAF. These driver genes are associated with downstream networks controlled by EGFR mutations, such as JAK-STAT, PI3K-AKT-mTOR, and MEK-ERK. The experiment also featured an EGFR-mutated cell line. Using the 10XGenomics platform with cellplex technology, we analyzed the seven cell lines together with a pseudo-immunological microenvironment consisting of PBMC cells labeled with the Biolegend TotalSeq™-B Human Universal Cocktail (CITEseq). This technology allowed for independent labeling of each cell line and single-cell analysis of the pooled seven cell lines and the pseudo-microenvironment. The data generated from these experiments are available as part of an online tool, which allows users to define cell heterogeneity and generates count tables as output. The tool provides the cell line derivation for each cell, together with cell annotations for the pseudo-microenvironment based on the CITEseq data, produced by an experienced immunologist. Additionally, we created a range of pseudo-tumor tissues using different ratios of the aforementioned cells embedded in Matrigel. These tissues were analyzed using the 10XGenomics (FFPE samples) and Curio Bioscience (fresh frozen samples) platforms for spatial transcriptomics, further expanding the scope of our benchmark experiments. The benchmark experiments we conducted provide a unique opportunity to evaluate the performance of bioinformatics tools for detecting and characterizing tumor heterogeneity at the single-cell level. Overall, our experiments provide a controlled and standardized environment for assessing the accuracy and robustness of bioinformatics tools for studying tumor heterogeneity at the single-cell level, which can ultimately lead to more precise and effective cancer diagnosis and treatment.Keywords: single cell omics, benchmark, spatial transcriptomics, CITEseq
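As one hypothetical way a tool could be scored against such a benchmark, the sketch below clusters the pooled cells with a standard Scanpy workflow and compares the clusters to the known cell-line labels via the adjusted Rand index; the file name and the "cell_line" annotation column are assumptions, not part of the published resource.

```python
# Assumed workflow, not the authors' pipeline: cluster the pooled cells,
# then compare the clusters to the ground-truth cell-line annotation.

import scanpy as sc
from sklearn.metrics import adjusted_rand_score

# Hypothetical AnnData file with an obs column "cell_line" giving the label per cell.
adata = sc.read_h5ad("benchmark_counts.h5ad")

sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata)
sc.tl.leiden(adata, key_added="predicted_cluster")

# Agreement between the tool's clusters and the known cell-line identity.
ari = adjusted_rand_score(adata.obs["cell_line"], adata.obs["predicted_cluster"])
print(f"adjusted Rand index: {ari:.3f}")
```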
Procedia PDF Downloads 117
52 The 5-HT1A Receptor Biased Agonists, NLX-101 and NLX-204, Elicit Rapid-Acting Antidepressant Activity in Rat Similar to Ketamine and via GABAergic Mechanisms
Authors: A. Newman-Tancredi, R. Depoortère, P. Gruca, E. Litwa, M. Lason, M. Papp
Abstract:
The N-methyl-D-aspartic acid (NMDA) receptor antagonist ketamine can elicit rapid-acting antidepressant (RAAD) effects in treatment-resistant patients, but it requires parenteral co-administration with a classical antidepressant under medical supervision. In addition, ketamine can produce serious side effects that limit its long-term use, and there is much interest in identifying RAADs based on ketamine's mechanism of action but with safer profiles. Ketamine elicits GABAergic interneuron inhibition, glutamatergic neuron stimulation, and, notably, activation of serotonin 5-HT1A receptors in the prefrontal cortex (PFC). Direct activation of the latter receptor subpopulation with selective 'biased agonists' may therefore be a promising strategy for identifying novel RAADs and, consistent with this hypothesis, the prototypical cortical biased agonist NLX-101 exhibited robust RAAD-like activity in the chronic mild stress (CMS) model of depression. The present study compared the effects of a novel, selective 5-HT1A receptor biased agonist, NLX-204, with those of ketamine and NLX-101. Materials and methods: The CMS procedure was conducted on Wistar rats; drugs were administered either intraperitoneally (i.p.) or by bilateral intracortical microinjection. Ketamine: 10 mg/kg i.p. or 10 µg/side in PFC; NLX-204 and NLX-101: 0.08 and 0.16 mg/kg i.p. or 16 µg/side in PFC. In addition, interaction studies were carried out with systemic NLX-204 or NLX-101 (each at 0.16 mg/kg i.p.) in combination with intracortical WAY-100635 (selective 5-HT1A receptor antagonist; 2 µg/side) or muscimol (GABA-A receptor agonist; 12.5 ng/side). Anhedonia was assessed by the CMS-induced decrease in sucrose solution consumption, anxiety-like behavior was assessed using the Elevated Plus Maze (EPM), and cognitive impairment was assessed by the Novel Object Recognition (NOR) test. Results: A single administration of NLX-204 was sufficient to reverse the CMS-induced deficit in sucrose consumption, similarly to ketamine and NLX-101. NLX-204 also reduced CMS-induced anxiety in the EPM and abolished CMS-induced NOR deficits. These effects were maintained (EPM and NOR) or enhanced (sucrose consumption) over a subsequent 2-week period of treatment. The anti-anhedonic response to the drugs was also maintained for several weeks following treatment discontinuation, suggesting that they had sustained effects on neuronal networks. A single PFC administration of NLX-204 reversed deficient sucrose consumption, similarly to ketamine and NLX-101. Moreover, the anti-anhedonic activities of systemic NLX-204 and NLX-101 were abolished by co-administration with intracortical WAY-100635 or muscimol. Conclusions: (i) The antidepressant-like activity of NLX-204 in the rat CMS model was as rapid as that of ketamine or NLX-101, supporting the targeting of cortical 5-HT1A receptors with selective biased agonists to achieve RAAD effects. (ii) The anti-anhedonic activity of systemic NLX-204 was mimicked by local administration of the compound in the PFC, confirming the involvement of cortical circuits in its RAAD-like effects. (iii) Notably, the effects of systemic NLX-204 and NLX-101 were abolished by PFC administration of muscimol, indicating that they act by (indirectly) eliciting a reduction in cortical GABAergic neurotransmission. This is consistent with ketamine's mechanism of action and suggests that converging NMDA and 5-HT1A receptor signaling cascades in the PFC underlie the RAAD-like activities of ketamine and NLX-204. 
Acknowledgements: The study was financially supported by NCN grant no. 2019/35/B/NZ7/00787.Keywords: depression, ketamine, serotonin, 5-HT1A receptor, chronic mild stress
Procedia PDF Downloads 112
51 The Return of the Rejected Kings: A Comparative Study of Governance and Procedures of Standards Development Organizations under the Theory of Private Ordering
Authors: Olia Kanevskaia
Abstract:
Standardization has been in the limelight of numerous academic studies. Typically described as 'any set of technical specifications that either provides or is intended to provide a common design for a product or process', standards not only set quality benchmarks for products and services but also spur competition and innovation, resulting in advantages for manufacturers and consumers. Their contribution to globalization and technology advancement is especially crucial in the Information and Communication Technology (ICT) and telecommunications sector, which is also characterized by weaker state regulation and expert-based rule-making. Most of the standards developed in this area are interoperability standards, which allow technological devices to establish 'invisible communications' and ensure their compatibility and proper functioning. This type of standard supports a large share of our daily activities, ranging from traffic coordination by traffic lights to connection to Wi-Fi networks, transmission of data via Bluetooth or USB, and building the network architecture for the Internet of Things (IoT). A large share of ICT standards is developed in specialized voluntary platforms, commonly referred to as Standards Development Organizations (SDOs), which gather experts from various industry sectors, private enterprises, governmental agencies, and academia. The institutional architecture of these bodies can vary from semi-public bodies, such as the European Telecommunications Standards Institute (ETSI), to industry-driven consortia, such as the Internet Engineering Task Force (IETF). The past decades have witnessed a significant shift of standard setting to those institutions: while operating independently of state regulation, they offer a rather informal setting, which enables fast-paced standardization and places technical supremacy and flexibility of standards above other considerations. Although technical norms and specifications developed by such non-governmental platforms are not binding, they appear to have significant regulatory impact. In the United States (US), private voluntary standards can be used by regulators to achieve their policy objectives; in the European Union (EU), compliance with harmonized standards developed by voluntary European Standards Organizations (ESOs) can grant a product a free-movement pass. Moreover, standards can de facto manage the functioning of the market when other regulatory alternatives are not available. Hence, by establishing (potentially) mandatory norms, SDOs assume regulatory functions commonly exercised by states and shape their own legal order. The purpose of this paper is threefold. First, it attempts to shed some light on SDOs' institutional architecture, focusing on private, industry-driven platforms and comparing their regulatory frameworks with those of formal organizations. Drawing upon the relevant scholarship, the paper then discusses the extent to which the formulation of technological standards within SDOs constitutes a private legal order operating in the shadow of governmental regulation. Ultimately, this contribution seeks to advise whether state intervention in industry-driven standard setting is desirable, and whether the increasing regulatory importance of SDOs should be addressed in legislation on standardization.Keywords: private order, standardization, standard-setting organizations, transnational law
Procedia PDF Downloads 163
50 A Longitudinal Exploration into Computer-Mediated Communication Use (CMC) and Relationship Change between 2005-2018
Authors: Laurie Dempsey
Abstract:
Relationships are considered beneficial for emotional wellbeing, happiness, and physical health. However, they are also complicated: individuals engage in a multitude of complex and volatile relationships during their lifetime, and changes to, or the ending of, these dynamics can be deeply disruptive. As the internet is further integrated into everyday life and relationships are increasingly mediated, the research interests of Media Studies and Sociology intersect and converge. This study longitudinally explores how relationship change over time corresponds with the developing UK technological landscape between 2005 and 2018. Since the early 2000s, the use of computer-mediated communication (CMC) in the UK has dramatically reshaped interaction. Its use has compelled individuals to renegotiate how they consider their relationships: some argue it has allowed vast networks to be accumulated and strengthened; others contend that it has eradicated the core values and norms associated with communication, damaging relationships. This research collaborated with the UK media regulator Ofcom, utilising the longitudinal dataset from its Adult Media Lives study to explore how relationships and CMC use developed over time. This is a unique qualitative dataset covering 2005-2018, in which the same 18 participants took part in annual filmed in-home depth interviews. The raw video footage of the interviews was examined year on year to consider how the same people changed their reported behaviour and outlooks towards their relationships, and how this coincided with CMC featuring more prominently in their everyday lives. Each interview was transcribed, thematically analysed and coded using NVivo 11 software. This study allowed for a comprehensive exploration of these individuals' changing relationships over time, as participants grew older, experienced marriages or divorces, conceived and raised children, or lost loved ones. It found that as technology developed between 2005-2018, everyday CMC use was increasingly normalised and incorporated into relationship maintenance. It played a crucial role in altering relationship dynamics, even featuring in the breakdown of several ties. Three key relationships were identified as being shaped by CMC use: parent-child; extended family; and friendships. Over the years there were substantial instances of relationship conflict: for parents renegotiating their dynamic with their child as they tried to both restrict and encourage their child's technology use; for estranged family members 'forced' together in the online sphere; and for friendships compelled to publicly display their relationship on social media for fear of social exclusion. However, it was also evident that CMC acted as a crucial lifeline for these participants, providing opportunities to strengthen and maintain their bonds, across both time and distance, via previously unachievable means. A longitudinal study of this length and nature following the same participants does not currently exist, so this work provides crucial insight into how and why relationship dynamics alter over time. This unique and topical piece of research draws together Sociology and Media Studies, illustrating how the UK's changing technological landscape can reshape one of the most basic human compulsions. 
This collaboration with Ofcom allows for insight that can be utilised in academia and policymaking alike, making this research relevant and impactful across a range of academic fields and industries.Keywords: computer mediated communication, longitudinal research, personal relationships, qualitative data
Procedia PDF Downloads 121
49 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
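To make the PSO-ANN idea concrete, the following minimal Python sketch tunes two hyperparameters of a small scikit-learn neural network with a basic particle swarm, scoring each particle by validation MSE; the synthetic data, parameter ranges and swarm settings are illustrative assumptions rather than the study's setup.

```python
# Minimal sketch: PSO over (hidden units, log10 learning rate) of an ANN,
# with validation MSE as the fitness. Synthetic data stands in for the
# project's cost dataset; all settings are illustrative.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                              # stand-in features
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=400)    # stand-in actual cost
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(pos):
    hidden = int(round(pos[0]))
    lr = 10 ** pos[1]
    model = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                         max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va))

# Swarm over hidden units in [5, 50] and log10 learning rate in [-4, -1].
n_particles, n_iters = 6, 5
lo, hi = np.array([5.0, -4.0]), np.array([50.0, -1.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("best (hidden units, log10 lr):", gbest, "validation MSE:", pbest_val.min())
```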
Procedia PDF Downloads 58
48 Barriers and Enablers to Climate and Health Adaptation Planning in Small Urban Areas in the Great Lakes Region
Authors: Elena Cangelosi, Wayne Beyea
Abstract:
This research expands the resilience planning literature by exploring the barriers and enablers to climate and health adaptation planning for small urban, coastal Great Lakes communities. With funding from the United States Centers for Disease Control and Prevention (CDC) Climate Ready City and States Initiative, this research took place during a 3-year pilot intervention project that integrates urban planning and public health. The project used the CDC's Building Resilience Against Climate Effects (BRACE) framework to prevent or reduce the human health impacts of climate change in Marquette County, Michigan. Using a deliberation-with-analysis planning process, interviews, focus groups, and community meetings with over 25 stakeholder groups and more than 100 participants identified the area's climate-related health concerns and adaptation interventions to address those concerns. Marquette County, on the shores of Lake Superior, the largest of the Great Lakes, was selected for the project based on its existing adaptive capacity and proactive approach to climate adaptation planning. With Marquette County as the context, this study fills a gap in the adaptation literature, which currently emphasizes large urban or agriculturally based rural areas and largely neglects small urban areas. This research builds on the qualitative case-study, survey, and interview approach established by previous researchers on contextual barriers and enablers for adaptation planning. It uses a case study approach, including surveys and interviews of public officials, to identify the barriers and enablers to climate and health adaptation planning for small urban areas within a large, non-agricultural Great Lakes county. The researchers hypothesize that the barriers and enablers will, in some cases, overlap with those found in other contexts but will, in many cases, be unique to a rural setting. The study reveals that funding, staff capacity, and communication across a large, rural geography act as the main barriers, while strong networks and collaboration, interested leaders, and community interest through a strong human-land connection act as the primary enablers. Challenges unique to rural areas are revealed, including weak opportunities for grant funding, large geographical distances, communication challenges with an aging and remote population, and the out-migration of educated residents. Enablers that may be unique to rural contexts include strong collaborative relationships across jurisdictions for regional work and strong connections between residents and the land. As the factors that enable and prevent climate change planning are highly contextual, understanding and appropriately addressing the unique factors at play for small urban communities is key for effective planning in those areas. By identifying and addressing the barriers and enablers to climate and health adaptation planning for small urban, coastal areas, this study can help Great Lakes communities appropriately build resilience to the adverse impacts of climate change. In addition, this research expands the breadth of research on, and understanding of, the challenges and opportunities planners confront in the face of climate change.Keywords: climate adaptation and resilience, climate change adaptation, climate change and urban resilience, governance and urban resilience
Procedia PDF Downloads 120
47 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia
Authors: Andrew D. Henshaw
Abstract:
The research examines the influence of familial and kinship-based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts, though they are called 'terrorist' or 'insurgent' depending on the political foci of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. The research employs paired case studies to examine resilience in organizations under significant external pressure, measuring three variables. 1. Organizational robustness in terms of leadership and governance. 2. Bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock. 3. Perpetuity of operational and attack capability, and political legitimacy. The research makes three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer in terms of heightened security transaction costs and the social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports, such as state sponsorship or powerful patrons, rather than organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: the Haqqani Network (HQN), paired with Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI), paired with the Abu Sayyaf Group (ASG). Case studies were selected based on three requirements: contrasting governance types, exposure to significant external pressures, and geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activity. This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is clear. Nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not provide further understanding of how terrorism and insurgency originate; however, the central focus of the research does address how they persist. The sparse attention to this in the existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, given at birth or through kinship. It reduces security vetting costs for recruits, fighters and supporters, which lowers liabilities and entry costs while raising organizational efficiency and exit costs. Better understanding of these processes is needed to turn strengths into weaknesses. The outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity of internal trust dynamics, social capital and power flows is essential to fracturing and manipulating kinship nexuses. This is highly valuable to external pressure mechanisms, such as counter-terrorism, counterinsurgency, and strategic intelligence methods, seeking to penetrate, manipulate, degrade or destroy the resilience of politically violent organizations.Keywords: Counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism
Procedia PDF Downloads 313
46 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus
Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya
Abstract:
Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike other self-driving vehicles, which are usually developed to operate with other vehicles and reside only on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering and computer science are working together to attack the problem from different perspectives (hardware, software and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent landmarks and the edges represent paths that the car should follow with some designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm. D* Lite will be explored to efficiently recompute the path when there are changes to the map. CATE shall avoid static obstacles and walking pedestrians within a safe distance. Unlike travel along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work in GPS-denied situations. CATE relies on its GPS to give its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that allows the fusion of data from multiple sensors (such as GPS, IMU, and odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily become degraded or blocked on campus due to high-rise buildings or trees; the UKF can also help here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.Keywords: driverless vehicle, path planning, sensor fusion, state estimate
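For illustration, here is a minimal A* search over a toy landmark graph of the kind described above; the waypoint names, coordinates and edges are invented, and straight-line distance serves as the admissible heuristic.

```python
# Minimal A* sketch over a hypothetical campus landmark graph.
# Node positions and edges are invented; Euclidean distance is the heuristic.

import heapq, math

nodes = {                       # landmark -> (x, y) position in metres
    "library": (0, 0), "quad": (60, 20), "lab": (120, 10),
    "gym": (70, 90), "union": (150, 80),
}
edges = {                       # undirected walk-path segments
    ("library", "quad"), ("quad", "lab"), ("quad", "gym"),
    ("lab", "union"), ("gym", "union"),
}
graph = {n: [] for n in nodes}
for a, b in edges:
    d = math.dist(nodes[a], nodes[b])
    graph[a].append((b, d))
    graph[b].append((a, d))

def a_star(start, goal):
    h = lambda n: math.dist(nodes[n], nodes[goal])   # admissible heuristic
    frontier = [(h(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nxt, d in graph[node]:
            new_cost = cost + d
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost + h(nxt), new_cost, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("library", "union"))
```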
Procedia PDF Downloads 144
45 Top-Down, Middle-Out, Bottom-Up: A Design Approach to Transforming Prison
Authors: Roland F. Karthaus, Rachel S. O'Brien
Abstract:
Over the past decade, the authors have undertaken applied research aimed at enabling transformation within the prison service to improve conditions and outcomes for those living, working and visiting in prisons in the UK and the communities they serve. The research has taken place against a context of reducing resources and public discontent at increasing levels of violence, deteriorating conditions and persistently high levels of re-offending. Top-down governmental policies have mainly been ineffectual and in some cases counter-productive. The prison service is characterised by hierarchical organisation, and the research has applied design thinking at multiple levels to challenge and precipitate change: top-down, middle-out and bottom-up. The research employs three distinct but related approaches. System design (top-down): working at the national policy level to analyse the changing policy context, identifying opportunities and challenges, and engaging with Ministry of Justice commissioners and sector organisations to facilitate debate, introduce new evidence and provoke creative thinking. Place-based design (middle-out): working with individual prison establishments as pilots to illustrate and test the potential for local empowerment, creative change, and improved architecture within place-specific contexts and organisational hierarchies. Everyday design (bottom-up): working with individuals in the system to explore the potential for localised, significant, demonstrator changes, including collaborative design, capacity building and empowerment in skills, employment, communication, training, and other activities. The research spans a series of projects, through which the methodological approach has developed responsively. The projects include a place-based model for the re-purposing of Ministry of Justice land assets for the purposes of rehabilitation; an evidence-based guide to improving prison design for health and well-being; and a capacity-based employment, skills and self-build project as a template for future open prisons. The overarching research has enabled knowledge to be developed and disseminated through policy and academic networks. Whilst the research remains live and continuing, key findings are emerging as a basis for a new methodological approach to effecting change in the UK prison service. An interdisciplinary approach is necessary to overcome the barriers between distinct areas of the prison service. Sometimes referred to as total environments, prisons encompass entire social and physical environments which are themselves orchestrated by institutional arms of government, resulting in complex systems that cannot be meaningfully engaged through narrow disciplinary lenses. A scalar approach is necessary to connect strategic policies with individual experiences and potential, through the medium of individual prison establishments operating as discrete entities within the system. A reflexive process is necessary to connect research with action in a responsive mode, learning to adapt as the system itself changes. The role of individuals in the system, their latent knowledge and experience, and their ability to engage and become agents of change are essential. Whilst the specific characteristics of the UK prison system are unique, the approach is internationally applicable.Keywords: architecture, design, policy, prison, system, transformation
Procedia PDF Downloads 133
44 Application of the Pattern Method to Form the Stable Neural Structures in the Learning Process as a Way of Solving Modern Problems in Education
Authors: Liudmyla Vesper
Abstract:
The problems of modern education are large-scale and diverse. The aspirations of parents, teachers, and experts converge: everyone is interested in raising a generation of well-rounded, well-educated people. Both the family and society expect the future generation to be self-sufficient, desirable in the labor market, and capable of lifelong learning. Today's children have a powerful potential that is difficult to realize within traditional school approaches. In practice, a focus on STEM education often ends with the simple use of computers and gadgets during class. 'Science', 'technology', 'engineering' and 'mathematics' are difficult to combine within school and university curricula, which have not changed much during the last 10 years. Solving the problems of modern education largely depends on teacher-innovators and teacher-practitioners who develop and implement effective educational methods and programs, and who propose innovative pedagogical practices that allow students to master broad knowledge and apply it in practice. Effective education supports the creation of stable neural structures during the learning process, which allow knowledge to be preserved and expanded throughout life. The author proposes a method of integrated lesson-cases based on mathematical patterns for forming a holistic perception of the world. This method and program are scientifically substantiated and have more than 15 years of practical application in school and university classrooms. The first results of the practical application of the author's methodology and curriculum were announced at the International Conference 'Teaching and Learning Strategies to Promote Elementary School Success', April 22-23, 2006, Yerevan, Armenia, within the IREX-administered 2004-2006 Multiple Component Education Project. The program is based on the concept of interdisciplinary connections and its implementation in the process of continuous learning. This allows students to retain and expand knowledge throughout life according to a single pattern. The pattern principle stores information on different subjects according to one scheme (pattern), using long-term memory; this is how stable neural structures are created. The author also suggests that a similar method could be applied to the training of artificial neural networks, although this assumption requires further research and verification. The educational method and program proposed by the author meet modern requirements for education, which involve mastering various areas of knowledge starting from an early age. This approach makes it possible to engage the child's cognitive potential as fully as possible and direct it to the preservation and development of individual talents. According to the methodology, at the early stages of learning students come to understand the connections between school subjects (the so-called 'sciences' and 'humanities') and real life, and apply the knowledge gained in practice. This approach allows students to realize their natural creative abilities and talents, which makes it easier to navigate professional choices and find their place in life.Keywords: science education, maths education, AI, neuroplasticity, innovative education problem, creativity development, modern education problem
Procedia PDF Downloads 61