Search results for: site scenery architecture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4119


39 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)

Authors: Salvatore Luongo, Carlo Luongo

Abstract:

This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system, TCAS, which is based on classic transponder technology; this system dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, and achieving it is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection focuses on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases flight crew traffic situation awareness by providing alerts on traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort has been spent in the design process and the code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the “Threats database” and the “Conflict detector”. The first receives traffic data from the ADS-B device and stores each target’s data history. The conflict detector module estimates ownship and target trajectories in order to detect possible future loss of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic, in order to prevent undesirable behavior of the alert flag. To reduce the computational load, a pre-check evaluation module is used. This pre-check is purely a computational optimization, so the performance of the conflict detector is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both the target and ownship. This provides greater accuracy and avoids step-by-step propagation, which requires a higher computational load. Furthermore, the pre-check makes it possible to exclude targets that are certainly not threats, using an analytical and efficient geometrical approach, in order to decrease the computational load of the following modules. This software improvement is not suggested by FAA documents, and it is therefore the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the installation of the TSAA application also on devices with multiple applications and/or low capacity in terms of available memory and computational capabilities.
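The geometrical pre-check described above can be illustrated with a short sketch. This is a minimal closest-point-of-approach filter written for illustration only; the function name, thresholds and look-ahead time are assumptions, not the certified TSAA algorithm.

```python
# Illustrative sketch: a simple analytical closest-point-of-approach (CPA) pre-check
# of the kind described above. Thresholds and look-ahead time are assumptions.
import numpy as np

def cpa_precheck(own_pos, own_vel, tgt_pos, tgt_vel,
                 horiz_threshold_m=3704.0,   # ~2 NM, assumed protection radius
                 vert_threshold_m=300.0,     # assumed vertical buffer
                 lookahead_s=120.0):
    """Return False if the target can be safely excluded from further checks."""
    rel_pos = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)
    rel_vel = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)
    speed2 = np.dot(rel_vel, rel_vel)
    # Time of closest approach for straight-line (analytical) trajectories.
    t_cpa = 0.0 if speed2 < 1e-9 else -np.dot(rel_pos, rel_vel) / speed2
    t_cpa = min(max(t_cpa, 0.0), lookahead_s)
    miss = rel_pos + rel_vel * t_cpa
    horiz_miss = np.hypot(miss[0], miss[1])
    vert_miss = abs(miss[2])
    # Keep the target only if both the horizontal and vertical misses are small.
    return horiz_miss < horiz_threshold_m and vert_miss < vert_threshold_m

# Example: a target 10 km east, converging head-on at the same altitude.
print(cpa_precheck([0, 0, 1000], [60, 0, 0], [10000, 0, 1000], [-60, 0, 0]))  # True
```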

Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification

Procedia PDF Downloads 282
38 Carbon Nanotube-Based Catalyst Modification to Improve Proton Exchange Membrane Fuel Cell Interlayer Interactions

Authors: Ling Ai, Ziyu Zhao, Zeyu Zhou, Xiaochen Yang, Heng Zhai, Stuart Holmes

Abstract:

Optimizing the catalyst layer structure is crucial for enhancing the performance of proton exchange membrane fuel cells (PEMFCs) with low platinum (Pt) loading. Current work has focused on the utilization, durability, and site activity of Pt particles on the support, and performance enhancement has been achieved by loading Pt onto porous supports with different morphologies, such as graphene, carbon fiber, and carbon black. Some schemes have also incorporated cost considerations to achieve lower Pt loading. However, the design of the catalyst layer (CL) structure in the membrane electrode assembly (MEA) must consider the interactions between the layers. Addressing the crucial aspects of water management, low contact resistance, and the establishment of an effective three-phase boundary for the MEA, multi-walled carbon nanotubes (MWCNTs) are a promising CL support due to their intrinsically high hydrophobicity, high axial electrical conductivity, and potential for ordered alignment. However, the drawbacks of MWCNTs, such as strong agglomeration, chemical inertness of the wall surface, and unopened ends, are unfavorable for Pt nanoparticle loading, which is detrimental to MEA processing and leads to inhomogeneous CL surfaces. This further deteriorates the utilization of Pt and increases the contact resistance. Robust chemical oxidation or nitrogen doping can introduce polar functional groups onto the surface of MWCNTs, facilitating the creation of open tube ends and inducing defects in tube walls. This improves dispersibility and loading capacity but reduces length and conductivity. Consequently, a trade-off exists between maintaining the intrinsic properties and the degree of functionalization of MWCNTs. In this work, MWCNTs were modified based on the operational requirements of the MEA from the viewpoint of interlayer interactions, including the search for the optimal degree of oxidation, N-doping, and micro-arrangement. MWCNTs were functionalized by oxidation, N-doping, and micro-alignment to achieve lower contact resistance between the CL and the proton exchange membrane (PEM), better hydrophobicity, and enhanced performance. Furthermore, this work expects to construct a more continuously distributed three-phase boundary by aligning MWCNTs to form a locally ordered structure, which is essential for the efficient utilization of Pt active sites. Unlike other chemical oxidation schemes that used HNO3:H2SO4 (1:3) mixed acid to strongly oxidize MWCNTs, this scheme adopted pure HNO3 to partially oxidize MWCNTs at a lower reflux temperature (80 °C) and a shorter treatment time (0 to 10 h) to preserve the morphology and intrinsic conductivity of the MWCNTs. A maximum power density of 979.81 mW cm⁻² was achieved by loading Pt on MWCNTs oxidized for 6 h (Pt-MWCNT6h). This represents a 59.53% improvement over the commercial Pt/C catalyst (614.17 mW cm⁻²). In addition, due to the stronger electrical conductivity, the charge transfer resistance of Pt-MWCNT6h in the electrochemical impedance spectroscopy (EIS) test was 0.09 Ω cm², which was 48.86% lower than that of Pt/C. This study will discuss the developed catalysts and their efficacy in a working fuel cell system. This research validates the impact of low-functionalization modification of MWCNTs on the performance of PEMFCs, which simplifies the preparation challenges of the CL and contributes to the widespread commercial application of PEMFCs at a larger scale.
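As a quick check of the reported gain (arithmetic only, using the two power densities quoted above, no additional data):

\[ \frac{979.81 - 614.17}{614.17} \times 100\% \approx \frac{365.64}{614.17} \times 100\% \approx 59.5\% \]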

Keywords: carbon nanotubes, electrocatalyst, membrane electrode assembly, proton exchange membrane fuel cell

Procedia PDF Downloads 68
37 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping the magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends highly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640×640×640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was then used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested both on synthetic data not used in training and on real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
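A minimal sketch of the kind of 3D convolutional U-Net described above is given below; channel counts, depth and the regression loss are illustrative assumptions and not the network trained in the study.

```python
# Minimal sketch of a 3D U-Net-style network mapping multi-channel MRI measurements
# to a voxel-wise iron-concentration map. Sizes and loss are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_channels=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, kernel_size=1)  # iron concentration per voxel

    def forward(self, x):
        e1 = self.enc1(x)                                      # full-resolution features
        e2 = self.enc2(self.pool(e1))                          # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))    # skip connection
        return self.head(d1)

# Train on synthetic (measurement, iron map) pairs with a voxel-wise regression loss.
model = TinyUNet3D()
x = torch.randn(1, 2, 32, 32, 32)   # toy patch of synthetic MRI data
loss = nn.functional.mse_loss(model(x), torch.randn(1, 1, 32, 32, 32))
loss.backward()
```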

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 135
36 Genotoxic Effect of Tricyclic Antidepressant Drug “Clomipramine Hydrochloride” on Somatic and Germ Cells of Male Mice

Authors: Samia A. El-Fiky, Fouad A. Abou-Zaid, Ibrahim M. Farag, Naira M. El-Fiky

Abstract:

Clomipramine hydrochloride is one of the most widely used tricyclic antidepressant drugs in Egypt. This drug contains two benzene rings in its chemical structure, and benzene is considered a toxic and clastogenic agent. The present study was therefore designed to assess the genotoxic effect of clomipramine hydrochloride on somatic and germ cells in mice. Three dose levels, 0.195 (low), 0.26 (medium), and 0.65 (high) mg/kg body weight, were used. Seven groups of male mice were utilized in this work. The first group served as a control. Each of the above doses was orally administered to two of the remaining six groups: one group was treated for 5 days and the other was given the same dose for 30 days. At the end of the experiments, the animals were sacrificed for cytogenetic and sperm examination as well as histopathological investigation using hematoxylin and eosin (H and E) staining and electron microscopy. The sperm studies were confined to the 5-day treatment with the different dose levels, and the ultrastructural investigation by electron microscopy was restricted to the 30-day treatment. The results of the dose-dependent effect of clomipramine showed that treatment with the three different doses induced increases in the frequencies of chromosome aberrations in bone marrow and spermatocyte cells compared to the control. In addition, the mitotic and meiotic activities of somatic and germ cells declined. Treatment with the medium or high dose was more effective in inducing significant increases in chromosome aberrations and significant decreases in cell division than treatment with the low dose, and the effect of the high dose was more pronounced than that of the medium dose. Moreover, the results of the time-dependent effect of clomipramine showed that treatment with the different dose levels for 30 days led to greater increases in genetic aberrations than treatment for 5 days. Sperm examination revealed that treatment with clomipramine at the different dose levels caused a significant increase in sperm shape abnormalities and a significant decrease in sperm count compared to the control. The adverse effects on sperm shape and count were more evident with the medium and high doses than with the low dose; the group of mice treated with the high dose had the highest rate of sperm shape abnormalities and the lowest sperm count compared to mice receiving the medium dose. In the histopathological investigation, hematoxylin and eosin staining showed that use of the low dose of clomipramine for 5 or 30 days caused few pathological changes in liver tissue, whereas the medium and high doses for 5 or 30 days induced more severe damage than that observed in mice treated with the low dose. Treatment with the high dose for 30 days gave the worst pathological changes in hepatic cells. Moreover, ultrastructural examination revealed that mice treated with the low dose of clomipramine showed few differences in liver histological architecture compared to the control group; these differences were confined to cytoplasmic inclusions. In contrast, prominent pathological changes in nuclei as well as dilated rough endoplasmic reticulum (rER) were observed in mice treated with the medium or high dose of the drug.
In conclusion, the present study adds evidence that treatment with medium or high doses of clomipramine has genotoxic effects on the somatic and germ cells of mice as an unwanted side effect. However, the low dose (especially for a short time, 5 days) can be used as a therapeutic dose, since it caused proportions of genetic, sperm, and histopathological changes similar to those found in the normal control.

Keywords: chromosome aberrations, clomipramine, mice, histopathology, sperm abnormalities

Procedia PDF Downloads 519
35 Non-Mammalian Pattern Recognition Receptor from Rock Bream (Oplegnathus fasciatus): Genomic Characterization and Transcriptional Profile upon Bacterial and Viral Inductions

Authors: Thanthrige Thiunuwan Priyathilaka, Don Anushka Sandaruwan Elvitigala, Bong-Soo Lim, Hyung-Bok Jeong, Jehee Lee

Abstract:

Toll-like receptors (TLRs) are a phylogenetically conserved family of pattern recognition receptors that participate in host immune responses against various pathogens and pathogen-derived mitogens. TLR21, a non-mammalian type, is almost restricted to fish species, even though it can rarely be identified in birds and amphibians. Herein, this study was carried out to identify and characterize TLR21 from rock bream (Oplegnathus fasciatus), designated RbTLR21, at the transcriptional and genomic levels. The full-length cDNA and genomic sequence of RbTLR21 were identified using a previously constructed cDNA sequence database and BAC library, respectively. The identified RbTLR21 sequence was characterized using several bioinformatics tools. A quantitative real-time PCR (qPCR) experiment was conducted to determine the tissue-specific expressional distribution of RbTLR21. Further, the transcriptional modulation of RbTLR21 upon stimulation with Streptococcus iniae (S. iniae), rock bream iridovirus (RBIV), and Edwardsiella tarda (E. tarda) was analyzed in spleen tissue. The complete coding sequence of RbTLR21 was 2919 bp in length and encodes a protein of 973 amino acid residues with a molecular mass of 112 kDa and a theoretical isoelectric point of 8.6. The predicted protein sequence resembled a typical TLR domain architecture, including a C-terminal ectodomain with 16 leucine-rich repeats, a transmembrane domain, a cytoplasmic TIR domain, and a signal peptide of 23 amino acid residues. Moreover, protein folding pattern prediction of RbTLR21 exhibited a well-structured and folded ectodomain, transmembrane domain, and cytoplasmic TIR domain. According to the pairwise sequence analysis data, RbTLR21 showed the closest homology with orange-spotted grouper (Epinephelus coioides) TLR21, with 76.9% amino acid identity. Furthermore, our phylogenetic analysis revealed that RbTLR21 shows a close evolutionary relationship with its ortholog from Danio rerio. The genomic structure of RbTLR21 consists of a single exon, similar to its ortholog in zebrafish. Several putative transcription factor binding sites were also identified in the 5ʹ flanking region of RbTLR21. RbTLR21 was ubiquitously expressed in all the tissues tested, with relatively high expression levels in spleen, liver, and blood. Upon induction with rock bream iridovirus, RbTLR21 expression was upregulated in the early phase of the post-induction period, even though the expression level fluctuated in the later phase. After Edwardsiella tarda injection, RbTLR21 transcripts were upregulated throughout the experiment. Similarly, Streptococcus iniae induction elicited significant upregulation of RbTLR21 mRNA expression in spleen tissue. Collectively, our findings suggest that RbTLR21 is indeed a homolog of TLR21 family members and may be involved in host immune responses against bacterial and DNA viral infections.
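For readers unfamiliar with how qPCR expression changes such as those above are typically quantified, the sketch below shows the standard Livak 2^(−ΔΔCt) fold-change calculation; the Ct values and reference gene are hypothetical, and the study's exact normalization scheme is not stated in the abstract.

```python
# Illustrative sketch of the standard Livak 2^(-ddCt) calculation often used to report
# qPCR fold changes. Gene names and Ct values are hypothetical, not data from the study.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of the target gene (treated vs control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: RbTLR21 in spleen after a hypothetical challenge, beta-actin as reference.
print(fold_change(22.0, 18.0, 25.0, 18.5))   # ~5.7-fold up-regulation
```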

Keywords: rock bream, toll like receptor 21 (TLR21), pattern recognition receptor, genomic characterization

Procedia PDF Downloads 536
34 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance

Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi

Abstract:

This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Increased energy generation and regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was experimented with to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure. Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to a 3D printer's limitations. Sil 15 casting silicone rubber was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
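The LLM-generated surface script mentioned above might resemble the following sketch, which builds a random doubly curved point grid from user-defined parameters; the parameter names and wave-based formulation are assumptions, and inside Grasshopper the points would typically feed a surface-from-points component.

```python
# Illustrative sketch of a script that generates a random doubly curved surface as a
# grid of points from user-defined parameters. Parameter names are assumptions.
import numpy as np

def random_surface(width=10.0, depth=10.0, u_count=20, v_count=20,
                   amplitude=1.0, waves=3, seed=None):
    rng = np.random.default_rng(seed)
    u = np.linspace(0.0, width, u_count)
    v = np.linspace(0.0, depth, v_count)
    uu, vv = np.meshgrid(u, v)
    zz = np.zeros_like(uu)
    # Superpose a few random sinusoidal waves to get a smooth random relief.
    for _ in range(waves):
        fx, fy = rng.uniform(0.2, 1.0, size=2)          # random spatial frequencies
        px, py = rng.uniform(0.0, 2 * np.pi, size=2)    # random phases
        zz += (amplitude / waves) * np.sin(fx * uu + px) * np.cos(fy * vv + py)
    return np.stack([uu, vv, zz], axis=-1)              # (v_count, u_count, 3) points

pts = random_surface(width=12.0, depth=8.0, amplitude=0.8, seed=42)
print(pts.shape)   # (20, 20, 3)
```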

Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building

Procedia PDF Downloads 28
33 Preparedness and Control of Mosquito-Borne Diseases: Experiences from Northwestern Italy

Authors: Federica Verna, Alessandra Pautasso, Maria Caramelli, Cristiana Maurella, Walter Mignone, Cristina Casalone

Abstract:

Mosquito-Borne Diseases (MBDs) are dangerously increasing in prevalence, geographical distribution, and severity, representing an emerging threat to both humans and animals. Interaction between multiple disciplines is needed for effective early warning, surveillance, and control of MBDs, according to the One Health concept. This work reports the integrated surveillance system implemented by IZSPLV in the Piedmont, Liguria, and Valle d’Aosta regions (Northwestern Italy) in order to control the spread of MBDs. Veterinary services and the local human health authorities are involved in an information network that connects the surveillance of human clinical cases with entomological surveillance and veterinary monitoring, in order to implement control measures in case of an outbreak. Systematic entomological surveillance is carried out during the vector season using mosquito traps located in sites selected according to risk factors. Collected mosquitoes are counted, identified to species level with standard morphological classification keys, and pooled by collection site, date, and species, with a maximum of 100 individuals per pool. After RNA extraction, pools are analyzed by Real-Time RT-PCR specific for West Nile Virus (WNV) Lineage 1 and Lineage 2, Real-Time RT-PCR for Usutu virus (USUV), and a traditional end-point flavivirus RT-PCR. Positive pools are sequenced and the related sequences used to perform a basic local alignment search tool (BLAST) search in the GenBank library. Positive samples are sent to the National Reference Centre for Animal Exotic Diseases (CESME, Teramo) for confirmation. With particular reference to WNV, after confirmation, as provided by national legislation, control measures involving both local veterinary and human health services are activated: equine sera are randomly sampled within a 4 km radius of the positive collection sites and tested with an ELISA kit, and WNV NAT screening of blood donors is introduced. This surveillance network has allowed the detection of USUV circulation in this area of Italy since 2011. WNV was detected in mosquitoes in Piedmont and Liguria for the first time in 2014. During the 2015 vector season, we observed the expansion of its activity in Piedmont. The virus was detected in almost all provinces, both in mosquitoes (6 pools) and in animals (19 equine sera, 4 birds). No tested blood bag was found infected. The first neuroinvasive human case also occurred. Competent authorities should be aware of a potentially increased risk of MBD activity during the 2016 vector season. This work shows that the surveillance network allowed early detection of the presence of MBDs in humans and animals and provided useful information to public authorities in order to apply control measures. Finally, an additional value of our diagnostic protocol is the ability to detect all viruses belonging to the Flaviviridae family, considering the emergence caused by other flaviviruses in humans, such as the recent Zika virus infection in South America. Italy has climatic and environmental features conducive to Zika virus transmission, the presence of the competent vector, and many travellers returning from Brazil every year.

Keywords: integrated surveillance, mosquito borne disease, West Nile virus, Zika virus

Procedia PDF Downloads 361
32 Northern Nigeria Vaccine Direct Delivery System

Authors: Evelyn Castle, Adam Thompson

Abstract:

Background: In 2013, the Kano State Primary Health Care Management Board redesigned its routine immunization supply chain from a diffused pull model to direct delivery push. It addressed issues around stockouts and reduced the time spent by health facility staff collecting vaccines and reporting on vaccine usage. The health care board sought the help of a third-party logistics provider (3PL) for twice-monthly deliveries from its cold store to 484 facilities across 44 local governments. eHA’s Health Delivery Systems group formed a 3PL to serve 326 of these new facilities in partnership with the State, focusing on designing and implementing a technology system throughout. Basic methodologies: GIS Mapping - Planning the delivery of vaccines to hundreds of health facilities requires detailed route planning for delivery vehicles. Mapping the road networks across Kano and Bauchi with a custom routing tool provided information for the optimization of deliveries, reducing the number of kilometers driven each round by 20% and reducing cost and delivery time. Direct Delivery Information System - Vaccine direct deliveries are facilitated through pre-round planning (driven by the health facility database, extensive GIS, and inventory workflow rules), a manager and driver control panel for customizing delivery routines and reporting, a progress dashboard, schedules/routes, packing lists, delivery reports, and driver data collection applications. Move: Last Mile Logistics Management System - MOVE has made vaccine supply information management timely, accurate, and actionable. It provides stock management workflow support, alerts management for cold chain exceptions/stockouts, and on-device analytics for health and supply chain staff. The software was built to be offline-first with a user-validated interface and experience. Deployed to hundreds of vaccine storage sites, the improved information tools help facilitate the process of system redesign and change management. Findings: Stockouts reduced from 90% to 33%; current health systems redesigned and vaccine supply managed for 68% of Kano’s wards; near real-time reporting and data availability to track stock; paperwork burdens of health staff dramatically reduced; medicine available when the community needs it; consistent vaccination dates for children under one to prevent polio, yellow fever, and tetanus; higher immunization rates and lower infection rates; hundreds of millions of Naira worth of vaccines successfully transported; fortnightly service to 326 facilities in 326 wards across 30 Local Government Areas; 6,031 cumulative deliveries; over 3.44 million doses transported; minimum travel distance covered in a round of delivery of 2,000 km and maximum of 6,297 km; 153,409 km travelled by 6 drivers; 500 facilities in 326 wards; data captured and synchronized for the first time; data-driven decision making now possible. Conclusion: eHA’s Vaccine Direct Delivery has met the challenges in Kano and Bauchi States and provided a reliable delivery service of vaccinations that ensures health facilities can run vaccination clinics for children under one. eHA uses innovative technology that delivers vaccines from Northern Nigerian zonal stores straight to healthcare facilities. It has helped healthcare workers spend less time managing supplies and more time delivering care, and it will be rolled out nationally across Nigeria.
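The route-optimization idea in the methodology can be illustrated with a simple nearest-neighbour ordering of facility visits; the coordinates are hypothetical, and the actual eHA routing tool and road-network data are not described in this abstract.

```python
# Illustrative sketch only: greedily order facility visits by proximity to show the kind
# of route optimization a delivery-planning tool performs. Coordinates are hypothetical.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_route(depot, facilities):
    """Order facility visits greedily by proximity, starting from the depot."""
    route, current = [], depot
    remaining = list(facilities)
    while remaining:
        nxt = min(remaining, key=lambda f: dist(current, f))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

depot = (0.0, 0.0)
facilities = [(2.0, 3.0), (5.0, 1.0), (1.0, 7.0), (6.0, 6.0)]
route = nearest_neighbour_route(depot, facilities)
total_km = sum(dist(a, b) for a, b in zip([depot] + route, route))
print(route, round(total_km, 1))
```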

Keywords: direct delivery information system, health delivery system, GIS mapping, Northern Nigeria, vaccines

Procedia PDF Downloads 372
31 Point-of-Decision Design (PODD) to Support Healthy Behaviors in the College Campuses

Authors: Michelle Eichinger, Upali Nanda

Abstract:

Behavior choices during the college years can establish the pattern of lifelong healthy living. Nearly one-third of American college students are either overweight (25 < BMI < 30) or obese (BMI > 30). In addition, overweight/obesity contributes to depression, which is a rising epidemic among college students, affecting academic performance and college drop-out rates. Overweight and obesity result from an imbalance between energy consumption (diet) and energy expenditure (physical activity). Overweight/obesity is a significant contributor to heart disease, diabetes, stroke, physical disabilities, and some cancers, which are the leading causes of death and disease in the US. There has been a significant increase in obesity and obesity-related disorders such as type 2 diabetes, hypertension, and dyslipidemia among people in their teens and 20s. Historically, evidence-based interventions for obesity prevention focused on changing health behavior at the individual level and aimed at increasing awareness and educating people about nutrition and physical activity. However, it became evident that the environmental context of where people live, work, and learn is interdependent with healthy behavior change. As a result, a comprehensive approach was required that includes altering the social and built environment to support healthy living. The college campus provides opportunities to support lifestyle behavior and form a health-promoting culture based on key points of decision, such as stairs/elevator, walk/bike/car, and high-caloric fast foods/balanced and nutrient-rich foods. At each point of decision, design can help or hinder the healthier choice. For example, stairwell design and motivational signage support physical activity; grocery store/market proximity influences healthy eating; and so on. There is a need to collate the vast information in the planning and public health domains on a range of successful point-of-decision prompts and translate it into architectural guidelines that help define the edge condition for critical point-of-decision prompts. This research study aims to address healthy behaviors through the built environment with the question: how can we make the healthy choice an easy choice through the design of critical point-of-decision prompts? Our hypothesis is that well-designed point-of-decision prompts in the built environment of college campuses can promote healthier choices by students, which can directly impact mental and physical health related to obesity. This presentation will introduce a combined health and architectural framework aimed at influencing healthy behaviors through design, applied to college campuses. The premise behind our concept, point-of-decision design (PODD), is that healthy decision-making can be built into, or afforded by, our physical environments. Using effective design intervention strategies at these 'points of decision' on college campuses to make the healthy decision the default decision can be instrumental in positively impacting health at the population level. With our model, we aim to advance health research by utilizing point-of-decision design to impact student health via core sectors of influence within college settings, such as campus facilities and transportation, and we will demonstrate how these domains influence patterns and trends in healthy eating and active living behaviors among students.

Keywords: architecture and health promotion, college campus, design strategies, health in built environment

Procedia PDF Downloads 221
30 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
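A minimal sketch of the graph representation and probabilistic sampling described above follows; the transaction fields and the uniform sampling scheme are simplifying assumptions.

```python
# Illustrative sketch: each transaction becomes a node, consecutive transactions in a
# block are linked by temporal edges, and a random subset is kept for analysis.
import random
import networkx as nx

def build_transaction_graph(blocks):
    g = nx.DiGraph()
    for block in blocks:
        txs = block["transactions"]
        for tx in txs:
            g.add_node(tx["hash"], value=tx["value"], block=block["number"])
        # Temporal edge between consecutive transactions within the block.
        for a, b in zip(txs, txs[1:]):
            g.add_edge(a["hash"], b["hash"])
    return g

def sample_nodes(g, fraction=0.1, seed=0):
    rng = random.Random(seed)
    keep = [n for n in g.nodes if rng.random() < fraction]
    return g.subgraph(keep).copy()

blocks = [{"number": 1, "transactions": [{"hash": f"0x{i:02x}", "value": i} for i in range(5)]},
          {"number": 2, "transactions": [{"hash": f"0x{i:02x}", "value": i} for i in range(5, 9)]}]
g = build_transaction_graph(blocks)
print(g.number_of_nodes(), g.number_of_edges(), sample_nodes(g, 0.5).number_of_nodes())
```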

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 75
29 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques

Authors: Chandu Rathnayake, Isuri Anuradha

Abstract:

Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, our application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the Decision Tree algorithm is utilized for efficient symptom-based classification. It analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the pre-trained ResNet50 model. CNNs are well suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, the model learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
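The symptom-based part of the pipeline can be sketched as follows; the toy data, feature names, and the equal-weight fusion with the image-model probability are assumptions, not the authors' dataset or exact design.

```python
# Illustrative sketch: a decision tree and a random forest on symptom vectors, with a
# simple late fusion of the image-model probability. Data and fusion rule are assumed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Toy symptom matrix: [cough, fever, chest_pain, shortness_of_breath], label 1 = disease.
X = np.array([[1, 1, 0, 1], [0, 0, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0],
              [1, 1, 1, 1], [0, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

new_patient = np.array([[1, 1, 0, 1]])
p_tree = tree.predict(new_patient)[0]                   # hard label from the tree
p_symptoms = forest.predict_proba(new_patient)[0, 1]    # probability from symptoms
p_image = 0.7    # placeholder for the CNN (e.g. ResNet50) probability from the X-ray
p_combined = 0.5 * p_symptoms + 0.5 * p_image           # assumed equal-weight fusion
print(p_tree, round(p_symptoms, 2), round(p_combined, 2))
```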

Keywords: CNN, random forest, decision tree, machine learning, deep learning

Procedia PDF Downloads 72
28 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops

Authors: Simon Komesker, Achim Wagner, Martin Ruskowski

Abstract:

In times of volatile markets with fluctuating demand and uncertainty in global supply chains, flexible production systems are the key to the efficient implementation of a desired production program. In this publication, the authors present a holistic information concept that takes into account various influencing factors for operating towards the global optimum. A strategy for the implementation of multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry is developed. The main contribution of this work is a system structure mixing central and decentral planning and control, evaluated in a simulation framework. The information system structure in current production systems in the automotive industry is rigidly and hierarchically organized in monolithic systems. The production program is created in a rule-based manner with the premise of achieving a uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution level. In today's era of mixed-(car-)model factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feedback loops that return results from the process execution level (resources) and the process-supporting (quality and logistics) systems for reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure, which results, for example, in material-intensive processes (buffers and safety stocks, with the two-container principle applied also for different variants). The limited degrees of freedom of line production have produced the principle of progress-figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems, such as modular production according to known approaches with more degrees of freedom, are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is used, and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of the quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent system. The result is an alternative system structure that executes centralized process planning and decentralized processing. Agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems that lead to a near-global optimum for hybrid planning. This allows robust, near-to-plan execution with integrated quality control and intralogistics.
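A toy sketch of the hybrid idea, central release of operations combined with decentralized bidding by resource agents, is shown below; the agent roles and cost model are illustrative assumptions, not the authors' system.

```python
# Illustrative sketch: a central planner releases operations in sequence, and
# decentralized resource agents bid on each one, so execution decisions are made locally.
class ResourceAgent:
    def __init__(self, name, speed):
        self.name, self.speed, self.busy_until = name, speed, 0.0

    def bid(self, operation, now):
        start = max(now, self.busy_until)
        return start + operation["work"] / self.speed   # offered completion time

    def accept(self, operation, now):
        self.busy_until = self.bid(operation, now)

def run(orders, agents):
    now, schedule = 0.0, []
    for op in orders:                                      # centrally planned sequence
        best = min(agents, key=lambda a: a.bid(op, now))   # decentralized negotiation
        finish = best.bid(op, now)
        best.accept(op, now)
        schedule.append((op["id"], best.name, round(finish, 1)))
        now = min(a.busy_until for a in agents)            # advance to next release point
    return schedule

orders = [{"id": f"op{i}", "work": w} for i, w in enumerate([4.0, 2.0, 3.0, 1.0])]
agents = [ResourceAgent("cell_A", 1.0), ResourceAgent("cell_B", 0.8)]
print(run(orders, agents))
```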

Keywords: holonic manufacturing system, modular production system, planning and control, system structure

Procedia PDF Downloads 168
27 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India

Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony

Abstract:

The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of the vital parameters of mothers and neonates offers a potential opportunity to impact access as well as the quality of care in vulnerable populations. We describe the design, development, and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperature in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device for a ‘real-time’ communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. Requirements for the device were: long battery life of up to 28 days (with a sampling frequency of 5/hr); robustness; IP 68 hermetic sealing; and human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I and IIa for evaluation of safety and efficacy in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies – 3 healthy babies, 10 stable babies in the Neonatal Intensive Care Unit (NICU), and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-design sensor, about the thickness of three coins and weighing about 8 g, was secured onto the abdomen for the babies and over the upper arm for adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes, vis-a-vis 3 minutes observed with a precision-grade digital thermometer used as a reference standard. The accuracy was ±0.1°C of the reference standard within the temperature range of 25-40°C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings. The mean skin temperature measured in the adults by the sensor was about 2°C lower than the axillary temperature reading (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t-test = 13.8; p < 0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13°C (sensor = 36.9 vs digital = 36.7; p = 0.2). The neonates in the NICU provided a total of 130 paired readings. Their mean skin temperature measured by the sensor was 0.6°C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading not differing from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p = 0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/arm band, and one volunteer developed a mild skin allergy. This proof-of-concept study shows that real-time monitoring of temperature is technically feasible and that this innovation appears to be promising in terms of both safety and accuracy (with appropriate calibration) for improved maternal and neonatal health.
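The paired comparisons reported above take the following general form; the readings below are made-up numbers, and only the shape of the analysis (a paired t-test on matched sensor and reference readings) mirrors the study.

```python
# Illustrative sketch of a paired comparison between sensor readings and a reference
# thermometer. The readings are fabricated for demonstration, not data from the study.
import numpy as np
from scipy import stats

sensor = np.array([34.0, 34.2, 33.9, 34.3, 34.1, 34.0, 34.2, 33.8])
digital = np.array([36.0, 36.2, 36.1, 36.3, 36.0, 36.1, 36.2, 35.9])

diff = digital - sensor
t_stat, p_value = stats.ttest_rel(digital, sensor)
print(f"mean difference = {diff.mean():.2f} °C, t = {t_stat:.1f}, p = {p_value:.2g}")
```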

Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns

Procedia PDF Downloads 208
26 Managing Crowds at Sports Mega Events: Examining the Impact of ‘Fan Parks’ at International Football Tournaments between 2002 and 2016

Authors: Joel Rookwood

Abstract:

Sports mega events have become increasingly significant in sporting, political and economic terms, with analysis often focusing on issues including resource expenditure, development, legacy and sustainability. Transnational tournaments can inspire interest from a variety of demographics, and the operational management of such events can involve contributions from a range of personnel. In addition to television audiences events also attract attending spectators, and in football contexts the temporary migration of fans from potentially rival nations and teams can present event organising committees and security personnel with various challenges in relation to crowd management. The behaviour, interaction and control of supporters has previously led to incidents of disorder and hooliganism, with damage to property as well as injuries and deaths proving significant consequences. The Heysel tragedy at the 1985 European Cup final in Brussels is a notable example, where 39 fans died following crowd disorder and mismanagement. Football disasters and disorder, particularly in the context of international competition, have inspired responses from police, law makers, event organisers, clubs and associations, including stadium improvements, legislative developments and crowd management practice to improve the effectiveness of spectator safety. The growth and internationalisation of fandom and developments in event management and tourism have seen various responses to the evolving challenges associated with hosting large numbers of visiting spectators at mega events. In football contexts ‘fan parks’ are a notable example. Since the first widespread introduction in European football competitions at the 2006 World Cup finals in Germany, these facilities have become a staple element of such mega events. This qualitative, longitudinal, multi-continent research draws on extensive semi-structured interview and observation data. As a frame of reference, this work considers football events staged before and after the development of fan parks. Research was undertaken at four World Cup finals (Japan 2002, Germany 2006, South Africa 2010 and Brazil 2014), four European Championships (Portugal 2004, Switzerland/Austria 2008, Poland/Ukraine 2012 and France 2016), four other confederation tournaments (Ghana 2008, Qatar 2011, USA 2011 and Chile 2015), and four European club finals (Istanbul 2005, Athens 2007, Rome 2009 and Basle 2016). This work found that these parks are typically temporarily erected, specifically located zones where supporters congregate together irrespective of allegiances to watch matches on large screens, and partake in other forms of organised on-site entertainment. Such facilities can also allow organisers to control the behaviour, confine the movement and monitor the alcohol consumption of supporters. This represents a notable shift in policy from previous football tournaments, when the widely assumed causal link between alcohol and hooliganism which frequently shaped legislative and police responses to disorder, also dissuaded some authorities from permitting fans to consume alcohol in and around stadia. It also reflects changing attitudes towards modern football fans. The work also found that in certain contexts supporters have increasingly engaged with such provision which impacts fan behaviour, but that this is relative to factors including location, facilities, management and security.

Keywords: event, facility, fan, management, park

Procedia PDF Downloads 311
25 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study

Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre

Abstract:

Precise landslide monitoring with differential global navigation satellite system (GNSS) techniques is well known, but technical or economic reasons limit their application by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers, plus one reference station for differential GNSS. A dedicated radio communication links all the modules to a base station, where an embedded computer automatically computes all the relative positions (L1 phase, open-source RTKLib software) and populates an Internet server. Each measurement also contains information from an internal inclinometer, the battery level, and position quality indices. Contrary to standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and permits a real overview of the whole landslide with good spatial resolution. Each module is powered with solar panels, ensuring autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, setting up to 7 rovers per site, for an 18-month-long survey. The aim was to assess the robustness and accuracy of the system in different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes). The other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimeter accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. Performance (percentage of aborted solutions, outliers) degraded with sessions shorter than 30 minutes. The environment also had a strong influence on the percentage of aborted solutions (ambiguity search problem), due to multiple reflections or satellites obstructed by trees and mountains. The length of the baseline (reference-rover distance, single-baseline processing) reduced the accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system’s robustness was limited: snow, avalanches, and frost covered some rovers, including the antennas and vertically oriented solar panels, leading to data interruptions, and strong wind damaged a reference station. The possibility of changing the session parameters remotely was very useful. In conclusion, the rover network tested provided the foreseen sub-centimeter accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey were time-saving. Performance strongly depends on the surrounding conditions, but short pre-measurements should allow moving a rover to a better final placement. The system offers a promising hazard mitigation technique. Improvements could include data post-processing for alerts and automatic modification of the duration and number of sessions based on battery level and rover displacement velocity.
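A simple post-processing step of the kind suggested in the conclusion (alerts derived from displacement) could look like the sketch below; the threshold and coordinates are assumptions, not values from the monitored sites.

```python
# Illustrative sketch: take per-session positions of one rover (east, north, up in
# metres, relative to the reference station), estimate its velocity, and raise an
# alert when total displacement exceeds a noise threshold. Values are assumptions.
import numpy as np

def displacement_alert(positions, times_days, threshold_m=0.01):
    """Return per-axis velocity (m/day) and whether total displacement exceeds threshold."""
    pos = np.asarray(positions, float)
    t = np.asarray(times_days, float)
    # Least-squares linear trend per axis gives the mean velocity.
    velocity = np.array([np.polyfit(t, pos[:, k], 1)[0] for k in range(3)])
    total_disp = np.linalg.norm(pos[-1] - pos[0])
    return velocity, total_disp > threshold_m

positions = [[0.000, 0.000, 0.000], [0.004, 0.001, -0.002],
             [0.009, 0.002, -0.001], [0.013, 0.004, -0.003]]
vel, alert = displacement_alert(positions, [0, 1, 2, 3])
print(np.round(vel, 4), alert)   # ~4 mm/day eastward drift, alert = True
```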

Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter

Procedia PDF Downloads 110
24 Socio-Sensorial Assessment of Nursing Homes in Singapore: Towards Integrated Enabling Design

Authors: Zdravko Trivic, John Chye Fung, Ruzica Bozovic-Stamenovic

Abstract:

Within the context of a rapidly ageing population in Singapore and the pressing demands on both caregivers and care providers, an integrated approach to an ageing-friendly and ability-sensitive enabling environment becomes an imperative. This particularly applies to nursing home environments and their immediate surroundings, as they are becoming one of the main available options for long-term care for many senior adults who are unable to age at home. Yet, despite considerable efforts to break the still predominant clinical approach to eldercare and to introduce more home-like design and a person-centric care model, nursing homes keep being stigmatised and perceived as not so desirable environments to grow old in. The challenges are further emphasised by the associated physical, sensorial, psychological, and cognitive declines that are common consequences of ageing. Such declines have an immense impact on almost all aspects of older adults’ daily functioning, including problems with mobility and spatial orientation, difficulties in communication, withdrawal from social interaction, higher levels of depression, and a decreased sense of independence and autonomy. However, typical nursing home designs tend to neglect the full capacity of balanced and carefully integrated multisensory stimuli as an active component of care and ability building. This paper outlines part of a larger multi-disciplinary study of six nursing homes in Singapore, with the overarching objective of creating new models of supportive nursing home environments that go beyond the clinical care model and encourage community integration with the nursing home settings. The paper focuses on the largely neglected aspects of sensorial comfort and the multi-sensorial properties of nursing homes, including both indoor and immediate outdoor spaces (boundaries). The objective was to investigate the sensory rhythms and explore their role in nursing home users’ daily routines and their therapeutic capacities. Socio-sensory rhythms were captured and analysed through a combination of on-site sensory recordings of “objective” quantitative sensory data (air temperature and humidity, sound level, and luminance) using a multi-function environment meter, perceived experiential data, spatial mapping, first-person observations of nursing home users’ activity patterns, and interviews. This was done in addition to the employment of available assessment tools, such as the Wisconsin Person Directed Care assessment tool, the Dementia Quality of Life [DQoL] instrument, and the Resident Environment Impact Scale [REIS], as these tools address the issues of sensorial experience insufficiently and selectively. Key findings indicate varied levels of sensory comfort, as well as diversity, intensity, and customisation of multi-sensory conditions within different nursing home spaces. Sensory stimulation is typically concentrated in the communal living areas of the nursing homes or in areas that often provide controlled or limited access, including specifically designed sensory rooms and outdoor green spaces (gardens and terraces). Opportunities for sensory stimulation are particularly limited for bed-bound senior residents and within more functional areas, such as corridors. This suggests that the capacities of nursing home designs to provide more diverse and better integrated pleasant sensory conditions, as integrated “therapeutic devices” that build nursing home residents’ physical and mental abilities, encourage activity, and improve wellbeing, are far from exhausted.

Keywords: ageing-supportive environment, enabling design, multi-sensory assessment, nursing home environment

Procedia PDF Downloads 170
23 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales

Authors: Philipp Sommer, Amgad Agoub

Abstract:

The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. 
The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
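A minimal sketch of the modelling pipeline described above — gradient-boosted regression on EPC-derived features with SHAP-based feature attribution — is shown below; the column names and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of the described pipeline: XGBoost regression on EPC features,
# evaluated with R²/MAE and explained with SHAP. Column names are hypothetical.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("epc_calibration.csv")          # stand-in for the EPC dataset
features = ["building_type", "year_of_construction", "living_space_m2",
            "insulation_level", "wall_type"]     # illustrative feature subset
X = pd.get_dummies(df[features])                 # one-hot encode categoricals
y = df["efficiency_score"]                       # 1-100 scale (G-A)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))

# SHAP values quantify each feature's contribution to individual predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
```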

Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning

Procedia PDF Downloads 55
22 Enabling Rather Than Managing: Organizational and Cultural Innovation Mechanisms in a Heterarchical Organization

Authors: Sarah M. Schoellhammer, Stephen Gibb

Abstract:

Bureaucracy, in particular its core element, a formal and stable hierarchy of authority, is proving less and less appropriate under the conditions of today’s knowledge economy. Centralization and formalization have consistently been found to hinder innovation, undermining cross-functional collaboration, personal responsibility, and flexibility. With its focus on the systematic planning, controlling and monitoring of the development of new or improved solutions for customers, even innovation management as a discipline is to a significant extent based on a mechanistic understanding of organizations. The most important drivers of innovation, human creativity and initiative, however, can be more hindered than supported by central elements of classic innovation management, such as predefined innovation strategies, rigid stage-gate processes, and decisions made in management gate meetings. Heterarchy, as an alternative network form of organization, is essentially characterized by its dynamic influence structures, whereby the greatest influence is allocated by the collective to the persons perceived as the most competent on a given issue. Theoretical arguments that the non-hierarchical concept better supports innovation than bureaucracy have been supported by empirical research. These prior studies either focus on the structure and general functioning of non-hierarchical organizations or on their innovativeness, that is, innovation as an outcome. Complementing classic innovation management approaches, this work aims to shed light on how innovations are initiated and realized in heterarchies in order to identify alternative solutions practiced under the conditions of the post-bureaucratic organization. Through an initial individual case study, which is part of a multiple-case project, the innovation practices of an innovative and highly heterarchical medium-sized company in the German fire engineering industry are investigated. In a pragmatic mixed-methods approach, media resonance, company documents, and workspace architecture are analyzed, in addition to qualitative interviews with the CEO and employees of the case company, as well as a quantitative survey aiming to characterize the company along five scaled dimensions of a heterarchy spectrum. The analysis reveals some similarities and striking differences to the approaches suggested by classic innovation management. The studied heterarchy has no predefined innovation strategy guiding new product and service development. Instead, strategic direction is provided by the CEO, described as visionary and creative. Procedures for innovation are hardly formalized, with new product ideas being evaluated on the basis of gut feeling and flexible, rather general criteria. With employees still hesitant to take responsibility and make decisions, hierarchical influence remains prominent. Described as open-minded and collaborative, the culture and leadership were found to be largely congruent with definitions of innovation culture. Overall, innovation efforts at the case company tend to be coordinated more through cultural than through formal organizational mechanisms. To better enable innovation in mainstream organizations, responsible practitioners are recommended not to limit changes to reducing the central elements of the bureaucratic organization, formalization and centralization. The freedoms this entails need to be sustained through cultural coordination mechanisms, with personal initiative and responsibility on the part of employees as well as common innovation-supportive norms and values. These make it possible to integrate diverse competencies, opinions, and activities and, thus, to guide innovation efforts.

Keywords: bureaucracy, heterarchy, innovation management, values

Procedia PDF Downloads 186
21 Design, Control and Implementation of 3.5 kW Bi-Directional Energy Harvester for Intelligent Green Energy Management System

Authors: P. Ramesh, Aby Joseph, Arya G. Lal, U. S. Aji

Abstract:

Integration of distributed green renewable energy sources together with battery energy storage is an inevitable requirement in a smart grid environment. To achieve this, an Intelligent Green Energy Management System (i-GEMS) needs to be incorporated to ensure coordinated operation between supply and load demand based on the hierarchy of Renewable Energy Sources (RES), battery energy storage and the distribution grid. A bi-directional energy harvester is an integral component facilitating the i-GEMS, and it is required to meet the following technical challenges: (1) capability for bi-directional (buck/boost) operation, (2) reduction of circuit parasitics to suppress voltage spikes, (3) the converter startup problem, (4) high-frequency magnetics, (5) higher power density, and (6) mode-transition issues during battery charging and discharging. This paper focuses on addressing the above-mentioned issues and targets the design, development and implementation of a bi-directional energy harvester with galvanic isolation. In this work, the hardware architecture for a bi-directional energy harvester rated at 3.5 kW is developed in both Isolated Full Bridge Boost Converter (IFBBC) and Dual Active Bridge (DAB) converter configurations, using modular power electronics hardware that is identical for both the solar PV array and the battery energy storage. In the IFBBC, the current-fed full bridge circuit is enabled and the voltage-fed full bridge circuit is disabled through Pulse Width Modulation (PWM) pulses for boost mode of operation, and vice versa for buck mode. In the DAB converter, all switches are active, and the phase-shift angle between the primary and secondary full bridges is adjusted, which in turn decides the power-flow direction depending on the mode (boost/buck) of operation. Here, the control algorithm is developed to ensure regulation of the common DC link voltage and maximum power extraction from the renewable energy sources depending on the selected mode (buck/boost) of operation. The circuit analysis and simulation study are conducted using PSIM 9.0 in three scenarios: (1) IFBBC with passive clamp, (2) IFBBC with active clamp, and (3) the DAB converter. A common hardware prototype for the bi-directional energy harvester with a 3.5 kW rating is built for the IFBBC and DAB converter configurations. The power circuit is equipped with an appropriate choice of MOSFETs, gate drivers with galvanic isolation, a high-frequency transformer, filter capacitors, and a filter boost inductor. The experiment was conducted for the IFBBC with passive clamp in boost mode, and the prototype confirmed the simulation results, showing a measured efficiency of 88% at 2.5 kW output power. The digital controller hardware platform is developed using the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the bi-directional energy harvester is written in the C language and developed using Code Composer Studio. Comprehensive analyses of the power circuit design, the control strategy for battery charging/discharging under buck/boost modes, and a comparative performance evaluation using simulation and experimental results will be presented.
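For readers unfamiliar with phase-shift control of the DAB stage, the sketch below evaluates the textbook single-phase-shift power-flow relation; the formula is a standard result rather than one quoted from the paper, and the numerical values are illustrative only, not the parameters of the 3.5 kW prototype.

```python
# Textbook single-phase-shift power-flow relation for a DAB converter
# (standard result, not taken from the paper). A positive phase shift phi
# transfers power from the primary to the secondary bridge; a negative phi
# reverses the direction, which is how charge/discharge (buck/boost) modes
# are obtained.
import math

def dab_power(v1, v2, n, fs, L, phi):
    """Average transferred power [W]; v2 is referred to the primary through the
    turns ratio n, L is the series (leakage) inductance referred to the primary,
    and |phi| <= pi/2 (phi in radians)."""
    v2_ref = v2 / n
    return v1 * v2_ref * phi * (math.pi - abs(phi)) / (2 * math.pi**2 * fs * L)

# Illustrative numbers only, roughly a 1 kW operating point
print(dab_power(v1=48.0, v2=400.0, n=8.0, fs=50e3, L=3e-6, phi=math.pi / 6))
```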

Keywords: bi-directional energy harvester, dual active bridge, isolated full bridge boost converter, intelligent green energy management system, maximum power point tracking, renewable energy sources

Procedia PDF Downloads 140
20 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively new concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of the vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This becomes clear if one takes into account that a heavy-vehicle fleet may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reports. The present work proposes an adaptable internet of things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer plus expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification by means of a global navigation satellite system (GNSS), cellular connectivity by means of a 3G/long-term evolution (LTE) modem, connectivity to on-board diagnostics (OBD), and connectivity to analog and digital sensors by means of a novel expansion board design. Specifically, the latter provides eight analog plus three digital sensor channels, as well as one on-board temperature/relative humidity sensor. The device offers a number of adaptability features based on appropriate zero-ohm resistor placement and appropriate value selection for a limited number of passive components. For example, although in the standard configuration four voltage analog channels with constant voltage sources for the power supply of the corresponding sensors are available, up to two of these voltage channels can be converted to provide power to the connected sensors by means of corresponding constant current source circuits, whereas all parameters of the analog sensor power supply and matching circuits are fully configurable, offering the advantage of covering a wide variety of industrial sensors. Note that a key feature of the proposed sensor node, ensuring the reliable operation of the connected sensors, is the appropriate supply of external power to the connected sensors and their proper matching to the IoT sensor node. In standard mode, the IoT sensor node communicates with the data center through 3G/LTE, transmitting all digital/digitized sensor data, the IoT device identity, and position. Moreover, the proposed IoT sensor node offers WiFi connectivity to mobile devices (smartphones, tablets) equipped with an appropriate application for the manual registration of vehicle- and driver-specific information, and these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware, programmed in a high-level language (Python) on top of a modern operating system (Linux). Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T1EDK-01359, IntelligentLogger).
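Since the node's firmware is written in Python on Linux, a minimal sketch of the kind of acquisition-and-transmit loop such firmware might run is given below; the helper functions, endpoint URL and payload layout are hypothetical stand-ins, not the project's actual code.

```python
# Minimal sketch of an acquisition/transmission loop for such a node.
# read_gnss(), read_obd() and read_analog_channels() are hypothetical
# stand-ins for the board-specific drivers; the URL is a placeholder.
import json
import time
import requests

DATA_CENTER_URL = "https://example.org/fleet/ingest"
DEVICE_ID = "iot-node-001"

def read_gnss():
    return {"lat": 0.0, "lon": 0.0}                    # GNSS expansion board

def read_obd():
    return {"rpm": 0, "coolant_temp_c": 0}             # OBD expansion board

def read_analog_channels():
    return {f"ch{i}": 0.0 for i in range(8)}           # 8 analog channels

while True:
    payload = {
        "device": DEVICE_ID,
        "timestamp": time.time(),
        "position": read_gnss(),
        "obd": read_obd(),
        "analog": read_analog_channels(),
    }
    try:
        requests.post(DATA_CENTER_URL, data=json.dumps(payload), timeout=10)
    except requests.RequestException:
        pass                                           # buffering/retry omitted
    time.sleep(60)                                     # one record per minute
```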

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

Procedia PDF Downloads 153
19 Observations on Cultural Alternative and Environmental Conservation: Populations "Delayed" and Excluded from Health and Public Hygiene Policies in Mexico (1890-1930)

Authors: Marcela Davalos Lopez

Abstract:

The history of the circulation of hygienic knowledge and the consolidation of public health in Latin American cities towards the end of the 19th century is well known. Among these cities, Mexico City was inserted into international politics, strengthened its institutions and medical knowledge, applied the parameters of modernity and built sanitary engineering works. Despite the power that this hygienist system achieved, its scope was relative: it cannot be generalized to all cities. A comparative and contextual analysis will show that conclusions derived from modern urban historiography present fractures when viewed from our contemporary standpoint. Between 1890 and 1930, the small cities and areas surrounding the Mexican capital adapted the international and federal public health regulations in their own way. This will be shown for neighborhoods located around Mexico City and for a medium-sized city about 80 km from the Mexican capital, Cuernavaca. While the inhabitants of the neighborhoods witnessed, and were affected in their territories by, the evolving forms that public hygiene policies were taking, in Cuernavaca the dictates arrived only as an echo. While the capital was drained, large roads were opened, roundabouts were erected, residents were expelled, and drains, sewers, drinking water pipes, etc., were built, Cuernavaca remained sheltered in other times and practices. What was this due to? Undoubtedly, the time and energy that it took politicians and the group of "scientists" to carry out these enormous works in the Mexican capital took them away from addressing the issue in remote villages. It was not until the 20th century that federal hygiene policy began to be strengthened. Beyond this, there are other factors that emphasize the particularities of each site. I would like to draw attention here to the different receptions that each town prepared for public hygiene. We will see that Cuernavaca responded to its own semi-rural culture, history, orography and functions, prolonging for much longer, for example, the use of its deep ravines as sewers. For their part, the neighborhoods surrounding the capital, although affected by and excluded from hygienist policies, chose to move away from them and solve the deficiencies with their own resources (they resorted to the waste that was left from the dried lake of Mexico to continue their lake practices). All of this points to a paradox that shapes our contemporary concerns: on the one hand, the benefits derived from medical knowledge and its technological applications (in this work referring particularly to the urban health system) and, on the other, the alteration it caused in environmental settings. Places like Cuernavaca (classified as backward by hygienists of the nineteenth century and the first decades of the twentieth), as well as landscapes such as the neighborhoods affected by advances in sanitary engineering, keep in their memory buried practices that we observe today as possible ways to re-establish environmental balances: alternative uses of water; recycling of organic materials; local uses of fauna; various systems for breaking down excreta; and so on. In sum, what the nineteenth and the first half of the twentieth centuries graded as levels of backwardness or progress turns out to be key information for rethinking the routes of environmental conservation. When we return to the observations of the scientists, politicians and lawyers of that period, we find a historically rejected cultural alterity. Populations such as Cuernavaca that, due to their history, orography and/or the insufficiency of federal policies, kept different relationships with the environment today give us clues for reorienting basic elements of cities: alternative uses of water, the reuse of raw and organic materials, and the consumption of local products, among others. It is, therefore, a matter of unearthing the rejected, which cries out to emerge to the surface.

Keywords: sanitary hygiene, Mexico City, cultural alterity, environmental conservation, environmental history

Procedia PDF Downloads 164
18 An Innovation Decision Process View in an Adoption of Total Laboratory Automation

Authors: Chia-Jung Chen, Yu-Chi Hsu, June-Dong Lin, Kun-Chen Chan, Chieh-Tien Wang, Li-Ching Wu, Chung-Feng Liu

Abstract:

With fast advances in healthcare technology, various total laboratory automation (TLA) processes have been proposed. However, adopting TLA requires substantial funding. This study explores an early adoption experience by Taiwan’s large-scale hospital group, the Chimei Hospital Group (CMG), which owns three branch hospitals (Yongkang, Liouying and Chiali, in order of service scale), based on the five stages of Everett Rogers’ diffusion decision process. 1. Knowledge stage: Over the years, two weaknesses existed in the laboratory department of CMG: 1) only a few examination categories (e.g., sugar testing and HbA1c) could be completed and reported within a day during an outpatient clinical visit; 2) the Yongkang Hospital laboratory space was dispersed across three buildings, resulting in duplicated investment in analysis instruments and inconvenient manual specimen transportation. Thus, the senior management of the department raised a crucial question: was it time to redesign the laboratory department? 2. Persuasion stage: At the end of 2013, Yongkang Hospital’s new building and restructuring project created a great opportunity for the redesign of the laboratory department. However, not all laboratory colleagues had reached a consensus for change. Thus, the top managers arranged a series of benchmark visits to stimulate colleagues into becoming aware of and accepting TLA. Later, the director of the department submitted a formal report to the top management of CMG with the results of the benchmark visits, a preliminary feasibility analysis, potential benefits and so on. 3. Decision stage: This TLA suggestion was well supported by the top management of CMG, and they finally made a decision to carry out the project with an instrument-leasing strategy. After the announcement of a request for proposal and several vendor briefings, CMG confirmed their laboratory automation architecture and finally completed the contracts. At the same time, a cross-department project team was formed, and the laboratory department assigned a section leader to the National Taiwan University Hospital for one month of relevant training. 4. Implementation stage: During the implementation, the project team called regular meetings to review the results of the operations and to respond immediately with adjustments. The main project tasks included: 1) completion of the preparatory work for beginning the automation procedures; 2) ensuring information security and privacy protection; 3) formulating automated examination process protocols; 4) evaluating the performance of the new instruments and the instrument connectivity; 5) ensuring good integration with hospital information systems (HIS)/laboratory information systems (LIS); and 6) ensuring continued compliance with ISO 15189 certification. 5. Confirmation stage: In short, the core process changes include: 1) cancellation of signature seals on the specimen tubes; 2) transfer of daily examination reports to a data warehouse; and 3) incorporation of routine pre-admission blood drawing and formal inpatient morning blood drawing into an automatically prepared tube mechanism. The study summarizes the following continuous improvement orientations: (1) flexible reference-range set-up for new instruments in the LIS; (2) restructuring of the specimen categories; (3) continuous review and improvement of the examination process; and (4) whether to install tube (specimen) delivery tracks needs further evaluation.

Keywords: innovation decision process, total laboratory automation, health care

Procedia PDF Downloads 418
17 Sinhala Sign Language to Grammatically Correct Sentences Using NLP

Authors: Anjalika Fernando, Banuka Athuraliya

Abstract:

This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach involves the utilization of a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. Trained on a dataset of SSL gestures, the LSTM model learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in accurately recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and a decoder with LSTM components, to enhance the syntactic structure of the generated sentences. Trained on a parallel corpus of ungrammatical Sinhala sentences and their corresponding grammatically correct translations, the NMT model learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. By addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired users and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. Furthermore, it lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Furthermore, efforts will be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively. By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.
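A minimal Keras sketch of the first stage (an LSTM classifier over gesture sequences) is given below to make the two-stage idea concrete; the sequence length, feature size and sign vocabulary are hypothetical, and the grammar-correction stage would be a separate LSTM encoder-decoder (NMT) model trained on the parallel corpus.

```python
# Sketch of stage 1 only: an LSTM classifier over keypoint sequences.
# Shapes, vocabulary size and the dummy data are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

NUM_SIGNS = 50                   # hypothetical SSL sign vocabulary
SEQ_LEN, N_FEATURES = 30, 126    # e.g. 30 frames of hand-keypoint coordinates

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy arrays stand in for a labelled gesture dataset
X = np.random.rand(8, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, NUM_SIGNS, size=8)
model.fit(X, y, epochs=1, verbose=0)
```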

Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT

Procedia PDF Downloads 103
16 Crowdfunding: Could It Be Beneficial to Social Entrepreneurship?

Authors: Berrachid Dounia, Bellihi Hassan

Abstract:

The financial crisis created a barrier for small projects looking for funding, but on the other hand it has had at least one interesting side effect: the rise of alternative and increasingly creative forms of financing. Traditional forms of financing have declined due to the difficult economic recession felt across the world. Having an innovative idea with both an economic and a social effect is very beneficial for those who want to overcome the economic crisis. Entrepreneurs who want to be successful therefore look for the means of financing that will bring their projects to reality. The financing can take various forms: the entrepreneur can use his or her own resources, turn to the three “Fs” (family, friends, and fools), look for angel investors, or try academic solutions such as universities and private incubators; sometimes, however, entrepreneurs feel uncomfortable with those means and start looking for newer, less traditional forms of financing for their projects. In the last few years, people have shown great interest in the use of the internet for many reasons (information, social networking, communication, entertainment, transactions, etc.). The internet facilitates relations between people and eases the maintenance of existing relationships; it also increases the number of exchanges, which leads to a “collective creativity”. Moreover, the internet offers an opportunity to create new tools for mobilizing civil society, which makes participation in a project or company much easier. The new business environment forces project leaders to look for new financing solutions that cut out financial intermediaries. Using platforms to finance projects is an alternative that is changing traditional project-financing solutions. New creative ways of lending money have appeared, such as peer-to-peer (person-to-person, or P2P) lending, a direct digital intermediary with its origins in microcredit principles. Crowdfunding, like P2P, also involves getting individuals to pool their resources to finance a project without a typical financial intermediary. For Lambert and Schwienbacher, "Crowdfunding involves an open call, essentially through the Internet, for the provision of financial resources either in the form of donations (without rewards) or in exchange for some form of reward and/or voting rights in order to support initiatives for specific purposes". The idea, for investors and entrepreneurs, is to encourage small contributions from a large number of funders, "the crowd", in order to raise money to fund projects. All these conditions have made crowdfunding a useful alternative for project leaders, especially those carrying special ideas that need special funds. As noted by Laflamme, S. and Lafortune, S., the internet is a tool for mobilizing civil society. In our case, crowdfunding is the tool that funds social entrepreneurship; in the case of not-for-profit organizations, attention focuses on social problems that could be resolved by mobilizing different resources, creating innovative initiatives, and building new social arrangements that call upon civil society. Social entrepreneurs are most often the ones who go onto crowdfunding websites: they state the amount expected to realize their project and then receive the funds from crowd-funders. Sometimes the crowd-funders expect something in return, such as a product from the business (a product sample in the case of a cooperative, or a CD in the case of films or songs), but not their money back. Thus, we cannot say that their funds are donations, because a donor does not expect anything back. Rather, rewards motivate people to take an interest in projects and to contribute money over the internet. The operation of crowdfunding satisfies all parties: investors, entrepreneurs and also the owners of crowdfunding sites. This paper aims to give a view of the mechanism of crowdfunding, clarifying its techniques and different categories, and of social entrepreneurship as a sponsor of social development. It also aims to show how this financing alternative could be beneficial for social entrepreneurs and how it brings a solution for funding social projects. The article concludes with a discussion of the contribution of crowdfunding to social entrepreneurship, especially in the Moroccan context.

Keywords: crowd-funding, social entrepreneurship, projects funding, financing

Procedia PDF Downloads 376
15 Moths of Indian Himalayas: Data Digging for Climate Change Monitoring

Authors: Angshuman Raha, Abesh Kumar Sanyal, Uttaran Bandyopadhyay, Kaushik Mallick, Kamalika Bhattacharyya, Subrata Gayen, Gaurab Nandi Das, Mohd. Ali, Kailash Chandra

Abstract:

Indian Himalayan Region (IHR), due to its sheer latitudinal and altitudinal expanse, acts as a mixing ground for different zoogeographic faunal elements. The innumerable unique and distributionally restricted rare species of the IHR are constantly threatened with extinction by the ongoing climate change scenario, many of which might have faced extinction without even being noticed or discovered. Monitoring the community dynamics of a suitable taxon is indispensable to assess the effect of this global perturbation at the micro-habitat level. Lepidoptera, particularly moths, are suitable for this purpose due to their huge diversity and strictly herbivorous nature. The present study aimed to collate scattered historical records of moths from the IHR and spatially disseminate them in a Geographic Information System (GIS) domain. The study also intended to identify moth species with significant altitudinal shifts, which could be prioritised for a monitoring programme to assess the effect of climate change on biodiversity. A robust database of moths recorded from the IHR was prepared from voluminous secondary literature and museum collections. Historical sampling points were transformed into richness grids, which were spatially overlaid on altitude, annual precipitation and vegetation layers separately to show moth richness patterns along major environmental gradients. Primary sampling was done by setting standard light traps at 11 Protected Areas representing five Indian Himalayan biogeographic provinces. To identify significant altitudinal shifts, past and present altitudinal records of the species identified from the primary sampling were compared. A consolidated list of 4107 species belonging to 1726 genera in 62 families of moths was prepared from a total of 10,685 historical records from the IHR. Family-wise assemblage revealed Erebidae to be the most speciose family, with 913 species under 348 genera, followed by Geometridae with 879 species under 309 genera and Noctuidae with 525 species under 207 genera. Among biogeographic provinces, the Central Himalaya represented the maximum records with 2248 species, followed by the Western and North-western Himalaya with 1799 and 877 species, respectively. Spatial analysis revealed that species richness was more or less uniform (up to 150 species records per cell) across the IHR. Throughout the IHR, the middle elevation zones between 1000 and 2000 m encompassed high species richness. Temperate coniferous forest associated with the 1500-2000 mm rainfall zone showed maximum species richness. A total of 752 species of moths representing 23 families were identified from the present sampling. Thirteen genera were identified that were restricted to the specialized habitats of alpine meadows above 3500 m. Five historical localities with high richness of >150 species were selected that could be considered for repeat sampling to assess the influence of climate change on moth assemblages. Of the 7 species exhibiting a significant altitudinal ascent of >2000 m, Trachea auriplena, Diphtherocome fasciata (Noctuidae) and Actias winbrechlini (Saturniidae) showed a maximum range shift of >2500 m, indicating the need for intensive monitoring of these species. Great Himalayan National Park harbours the most diverse assemblage of high-altitude-restricted species and should be a priority site for habitat conservation. Among the 13 range-restricted genera, Arichanna, Opisthograptis, Photoscotosia (Geometridae), Phlogophora, Anaplectoides and Paraxestia (Noctuidae) were dominant and require rigorous monitoring, as they are the most susceptible to climatic perturbations.
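The altitudinal-shift comparison described above lends itself to a simple tabular workflow; the sketch below flags species whose present records sit more than 2000 m above their historical maxima. It is an illustrative outline only, and the file and column names are assumptions rather than the study's actual data files.

```python
# Minimal sketch: flag species whose present elevation records exceed their
# historical maxima by more than 2000 m. File/column names are hypothetical.
import pandas as pd

hist = pd.read_csv("historical_records.csv")   # columns: species, elevation_m
curr = pd.read_csv("present_sampling.csv")     # columns: species, elevation_m

hist_max = hist.groupby("species")["elevation_m"].max()
curr_max = curr.groupby("species")["elevation_m"].max()

shift = (curr_max - hist_max).dropna().sort_values(ascending=False)
print(shift[shift > 2000])                     # candidates for intensive monitoring
```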

Keywords: altitudinal shifts, climate change, historical records, Indian Himalayan region, Lepidoptera

Procedia PDF Downloads 168
14 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping

Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung

Abstract:

A lightweight unmanned aerial vehicle (UAV) loaded with novel sensors offers a low-cost approach for data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in complex environments and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for the classification of 13 species in the wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum auerum and Acanthus ilicifolius. Other species included various graminaceous plants, arbor, shrub and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change, which might increase the difficulty of distinguishing species. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and a 0.06 m spatial resolution. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral image was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the spectral bands most significant for distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM) and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA and MNF, and multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier to compare the classification results. Classification results showed that the feature reduction methods, from best to worst, were MNF transformation, DA and SVM. The accuracy with MNF transformation was even higher than that of the all-band input. Selected bands frequently lay along the green peak, red edge and near-infrared. Additionally, DA found that the chlorophyll-absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed MLC for high-dimensional and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it cost more time than MLC. The best result was obtained by combining MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas at a low cost in time and labour. In addition, the findings on the positive effect of the DSM and on spectral feature identification indicate that UAV-borne hyperspectral imagery and photogrammetry-derived 3D data are promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.
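To make the best-performing combination (reduced spectral features stacked with the DSM, classified by SVM) concrete, a scikit-learn sketch is given below; the random arrays merely stand in for the real pixel samples, and the SVM parameters are illustrative assumptions rather than the study's tuned values.

```python
# Sketch of the reported best setup: reduced spectral features + DSM fed to
# an RBF-kernel SVM. Random arrays stand in for real training pixels.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

n_pixels = 5000
bands = np.random.rand(n_pixels, 20)     # 20 reduced spectral features (e.g. MNF components)
dsm = np.random.rand(n_pixels, 1)        # height values from the photogrammetric DSM
X = np.hstack([bands, dsm])
y = np.random.randint(0, 13, n_pixels)   # 13 wetland species classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```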

Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)

Procedia PDF Downloads 255
13 ARGO: An Open Designed Unmanned Surface Vehicle Mapping Autonomous Platform

Authors: Papakonstantinou Apostolos, Argyrios Moustakas, Panagiotis Zervos, Dimitrios Stefanakis, Manolis Tsapakis, Nektarios Spyridakis, Mary Paspaliari, Christos Kontos, Antonis Legakis, Sarantis Houzouris, Konstantinos Topouzelis

Abstract:

For years, unmanned and remotely operated robots have been used as tools in industry, research and education. The rapid development and miniaturization of sensors that can be attached to remotely operated vehicles has in recent years allowed industry leaders and researchers to utilize them as an affordable means for data acquisition in the air, on land, and at sea. Despite the recent developments in ground and unmanned airborne vehicles, only a small number of Unmanned Surface Vehicle (USV) platforms are targeted at mapping and monitoring environmental parameters for research and industry purposes. The ARGO project developed an open-design USV equipped with a multi-level control hardware architecture and state-of-the-art sensors and payloads for the autonomous monitoring of environmental parameters in large sea areas. The proposed USV is a catamaran-type vehicle controlled over a wireless radio link (5G) from a ground-based control station, providing long-range mapping capabilities. The ARGO USV has propulsion control using two fully redundant electric trolling motors with active vector thrust for omnidirectional movement, navigation with an open-source autopilot system and a high-accuracy GNSS device, and communication over a 2.4 GHz digital link able to provide 20 km of line-of-sight (LoS) range. The 3-meter dual-hull design and composite structure offer well above 80 kg of usable payload capacity. Furthermore, solar and friction energy harvesting methods provide clean energy to the propulsion system. The design is highly modular, where each component or payload can be replaced or modified according to the desired task (industrial or research). The system can be equipped with a multiparameter sonde measuring up to 20 water parameters simultaneously, such as conductivity, salinity, turbidity, dissolved oxygen, etc. Furthermore, a high-end multibeam echo sounder can be installed at a specific boat datum for high-resolution shallow-water seabed mapping. The system is designed to operate in the Aegean Sea. The developed USV is planned to be utilized as a system for autonomous data acquisition, mapping, and monitoring of bathymetry and various environmental parameters. The ARGO USV can operate in small or large ports with the high maneuverability and endurance needed to map large geographical extents at sea. The system presents state-of-the-art solutions in the following areas: i) on-board/real-time data processing and analysis capabilities, ii) an energy-independent and environmentally friendly platform made entirely of the latest aeronautical and marine materials, iii) the integration of advanced-technology sensors all in one system (photogrammetric and radiometric footprint, as well as its connection with various environmental and inertial sensors), and iv) the information management application. The ARGO web-based application enables the system to depict the results of the data acquisition process in near real time. All the recorded environmental variables and indices are presented, allowing users to remotely access all the raw and processed information using the implemented web-based GIS application.

Keywords: marine environment monitoring, unmanned surface vehicle, bathymetry mapping, sea environmental monitoring

Procedia PDF Downloads 138
12 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-Yne Click Reaction

Authors: Leila Safazadeh, Brad Berron

Abstract:

Self-assembled monolayers have a tremendous impact on interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers in which the environment-interfacing portion of the adsorbate has a greater level of conformational freedom when compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offers new opportunities for tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold (SPR or QCM), these surfaces must be compatible with gold substrates. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of Y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a loosely packed monolayer through a photoinitiated thiol-yne reaction in the presence of light. The orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl-, amine-, alcohol-, and alkyl-terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups. Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with an equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure that allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect these data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology science in the following manner: molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for the recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where the surface is imprinted, and there is no bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors. Our unique contribution is the spatial imprinting not only of physical cues (as seen in current imprinted monolayer techniques) but also of complementary chemical cues. This is accomplished through photo-click grafting of preassembled ligands around a protein template. This conference is important for my development as a graduate student, broadening my appreciation of sensor development beyond surface chemistry.

Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting

Procedia PDF Downloads 224
11 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations

Authors: Nanine Fouche

Abstract:

The Cape Flats is a low-lying, sand-covered expanse of approximately 460 square kilometres, situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1 m to 1.5 m of the surface, and there is a general exceedance of the maximum allowable settlement in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH, due to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave (CSW) tests, are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations representing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv’) were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv’) as well as using stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60, respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. Investigated terrains included sites in the suburbs of Athlone, Muizenburg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents. Elastic settlement analysis was also undertaken for the Cape Flats sands, using a non-linear stepwise method based on small-strain stiffness estimates obtained from the best Vs-N60 model, and compared to settlement estimates using the general elastic solution with stiffness profiles determined using Stroud’s (1989) and Webb’s (1969) SPT N60-E transformation models. Stroud’s method considers strain level indirectly, whereas Webb’s method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv’ derived from the Atlantis data set revealed the best fit, with R² = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of the inversion routines used in the analysis of the CSW results, which showed significant variation in relative density and stiffness with depth. The regression analyses revealed that the inclusion of a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations, in which the R² of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with empirical constants ‘n’ and ‘m’ prior to regression that introduces bias with respect to overburden pressure. When comparing settlement prediction methods, both Stroud’s method (considering strain level indirectly) and the small-strain stiffness method predict higher stiffnesses for medium-dense and dense profiles than Webb’s method, which takes no account of strain level in the determination of soil stiffness. Webb’s method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width. In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to take account of the non-linear stress-strain behaviour of the sands when calculating settlement.
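One common way to fit such a correlation is a power law of the form Vs = A·N60^B·(σv′/Pa)^C estimated by linear regression in log space; the sketch below follows that assumed form (the paper's exact functional form is not quoted here) and uses a hypothetical data file.

```python
# Sketch: fit Vs = A * N60**B * (sigma_v/Pa)**C by least squares in log space.
# The power-law form is an assumed, commonly used choice; the data file and
# column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("vs_spt_pairs.csv")    # columns: vs_mps, n60, sigma_v_kpa
Pa = 100.0                              # reference (atmospheric) pressure [kPa]

X = np.column_stack([
    np.ones(len(df)),
    np.log(df["n60"]),
    np.log(df["sigma_v_kpa"] / Pa),
])
y = np.log(df["vs_mps"])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
A, B, C = np.exp(coef[0]), coef[1], coef[2]
print(f"Vs ~ {A:.1f} * N60^{B:.2f} * (sigma_v/Pa)^{C:.2f}")
```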

Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance

Procedia PDF Downloads 174
10 Synthetic Method of Contextual Knowledge Extraction

Authors: Olga Kononova, Sergey Lyapin

Abstract:

Global information society requirements are transparency and reliability of data, as well as the ability to manage information resources independently, in particular to search, analyze and evaluate information, thereby obtaining new expertise. Moreover, it is the satisfaction of society's information needs that increases the efficiency of enterprise management and public administration. The study of structurally organized thematic and semantic contexts of different types, automatically extracted from unstructured data, is one of the important tasks for the application of information technologies in education, science, culture, governance and business. The objectives of this study are the typologization of contextual knowledge and the selection or creation of effective tools for extracting and analyzing contextual knowledge. The explication of various kinds and forms of contextual knowledge involves the development and use of full-text search information systems. For implementation purposes, the authors use the services of the e-library 'Humanitariana', such as contextual search, different types of queries (paragraph-oriented queries, frequency-ranked queries), and automatic extraction of knowledge from scientific texts. The multifunctional e-library «Humanitariana» is realized in an Internet architecture in a WWS configuration (Web-browser / Web-server / SQL-server). An advantage of using 'Humanitariana' is the possibility of combining the resources of several organizations. Scholars and research groups may work in a local network mode and in distributed IT environments, with the ability to draw on resources from the servers of any participating organization. The paper discusses some specific cases of contextual knowledge explication using the e-library services and focuses on the possibilities of new types of contextual knowledge. The experimental research base consists of scientific texts about 'e-government' and 'computer games'. An analysis of trends in these subject-themed texts made it possible to propose a content analysis methodology that combines full-text search with the automatic construction of a 'terminogramma' and expert analysis of the selected contexts. A 'terminogramma' is laid out as a table that contains a column with a frequency-ranked list of words (nouns), as well as columns indicating the absolute frequency (count) and the relative frequency of occurrence of the word (in ppm). The analysis of the 'e-government' materials showed that the state takes a dominant position in the processes of electronic interaction between the authorities and society in modern Russia. The media credited the main role in these processes to the government, which provided public services through specialized portals. Factor analysis revealed two factors statistically describing the terms used: human interaction (the user) and the state (government, organizer of the processes); interaction management (the public officer, performer of the processes) and technology (infrastructure). Isolation of these factors will lead to changes in the model of electronic interaction between government and society. In this study, the dominant social problems and the prevalence of different categories of subjects of computer gaming in science papers from 2005 to 2015 were also identified. Several types of contextual knowledge were thus identified: micro context; macro context; dynamic context; thematic collections of queries (interactive contextual knowledge expanding the composition of e-library information resources); and multimodal context (functional integration of iconographic and full-text resources through a hybrid quasi-semantic search algorithm). Further studies can be pursued both in terms of expanding the resource base on which they are based and in terms of the development of appropriate tools.
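A minimal sketch of building a 'terminogramma'-style table — a frequency-ranked word list with absolute counts and relative frequencies — is shown below; it is not the e-library's implementation, the corpus file is hypothetical, and a real pipeline would additionally restrict the list to nouns with a POS tagger.

```python
# Sketch of a 'terminogramma'-style table: frequency-ranked words with
# absolute counts and relative frequencies. Not the e-library's code; the
# corpus file is a placeholder and noun filtering (POS tagging) is omitted.
import re
from collections import Counter

text = open("corpus.txt", encoding="utf-8").read().lower()
words = re.findall(r"[a-zа-яё]+", text)     # Latin and Cyrillic word forms

counts = Counter(words)
total = sum(counts.values())

print(f"{'word':<20}{'abs. freq':>10}{'rel. freq (ppm)':>18}")
for word, n in counts.most_common(20):
    print(f"{word:<20}{n:>10}{1_000_000 * n / total:>18.1f}")
```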

Keywords: contextual knowledge, contextual search, e-library services, frequency-ranked query, paragraph-oriented query, technologies of the contextual knowledge extraction

Procedia PDF Downloads 358