Search results for: dual-SLA indicators for computing force applications
371 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle
Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores
Abstract:
This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone control that goes beyond direct manual operation. The MyoWare Muscle Sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. The raw voltages from each sensor were collected through an Arduino Uno, and a data processing algorithm was developed to interpret the voltage signals produced during flexing, resting, and motion of the arm. Each sensor collected eight values over a two-second period for the duration of one minute per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, which controlled the drone's left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures controlling the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. The MATLAB software was utilized to collect, process, and analyze the signals from the sensors, and a machine learning protocol was used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. An artificial neural network with one hidden layer of 10 neurons was then trained on this neuromuscular information to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. Once an output was greater than or equal to 80% of matching a specific target class, the drone would perform the expected motion. Each movement command was then sent from the computer to the drone through a Wi-Fi network connection. These procedures have been tested and integrated into trial flights, where the drone responded in real time to predefined command inputs generated by the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of the hand gestures, the details of the ANN architecture, and the confusion matrix results.
Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino
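As an illustration of the pipeline sketched in this abstract, the following is a minimal reconstruction in Python, assuming a hypothetical sampling rate, window layout, and gesture-to-command mapping; scikit-learn stands in for the authors' MATLAB tooling.

```python
# Minimal sketch of the feature extraction and gesture classification described
# above; sampling rate, data, and the gesture-to-command mapping are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(signal, fs=500, window_s=2.0):
    """Mean, RMS, and standard deviation for each two-second window of one channel;
    rows of the ANN input vector would be built from such features per sensor."""
    n = int(fs * window_s)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.column_stack([windows.mean(axis=1),
                            np.sqrt((windows ** 2).mean(axis=1)),
                            windows.std(axis=1)])

# Hypothetical training data: 3 features x 3 MyoWare channels per window,
# labels 0..3 for the four gestures (rest, thumbs up, swipe right, flex).
rng = np.random.default_rng(0)
X = rng.random((200, 9))
y = rng.integers(0, 4, 200)
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)

probs = net.predict_proba(X[:1])[0]
if probs.max() >= 0.80:   # only command the drone above the 80% probability threshold
    command = ["land", "up", "right", "left"][int(probs.argmax())]  # mapping assumed
```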
Procedia PDF Downloads 174
370 Photoemission Momentum Microscopy of Graphene on Ir (111)
Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense
Abstract:
Graphene reveals a unique electronic structure that predetermines many intriguing properties such as massless charge carriers, optical transparency, and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative ARPES approach that uses full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarized) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E, kx, ky) of the full valence electronic structure in approx. 20 mins. Band dispersion stacks were measured in the energy range of 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands for all six K and K' points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e. the LDAD) are very sensitive to the interaction of graphene bands with substrate bands. Second, the dark corridors are investigated in detail for both p- and s-polarized radiation. They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of six show an oval shape while the rest are more circular, which clearly indicates an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines is visible at energies around 22 eV that is strongly reminiscent of Kikuchi lines in diffraction. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridor.
Keywords: band structure, graphene, momentum microscopy, LDAD
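As a simple illustration of how such an I(E, kx, ky) stack can be handled, the sketch below extracts a constant-energy momentum map and an energy-momentum cut; the array shape, energy axis, and data values are hypothetical placeholders.

```python
# Illustrative handling of a momentum-microscopy data stack I(E, kx, ky);
# the shape, energy calibration, and data source are hypothetical.
import numpy as np

stack = np.random.rand(100, 512, 512)               # placeholder for a measured I(E, kx, ky)
energies = np.linspace(14.0, 23.0, stack.shape[0])  # photon-energy axis in eV (assumed)

def constant_energy_map(stack, energies, e_target):
    """Momentum map I(kx, ky) at the energy slice closest to e_target."""
    idx = int(np.argmin(np.abs(energies - e_target)))
    return stack[idx]

dirac_map = constant_energy_map(stack, energies, 22.0)  # e.g. near the Kikuchi-like features
ek_cut = stack[:, :, stack.shape[2] // 2]               # E-vs-kx cut through the zone centre
```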
Procedia PDF Downloads 340
369 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries
Authors: Eva Masson, Andrea Kübler
Abstract:
Anxiety disorders (AD) are the most prevalent psychological disorders. However, only a minority of affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (according to the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common approach to simulating such disorders in a lab setting, with most of the paradigms focusing on the relationships between behavior and neurophysiology. Approach-avoidance conflict tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of those tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS. However, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-Maze paradigm, a motivational conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, the HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational conflict desktop task consisted of three blocks of repeated trials. Each block was designed to record a slightly different behavioral pattern, to increase the chances of eliciting conflict. The behavioral patterns were nevertheless similar enough to allow comparison of the number of trials categorized as 'overt conflict' between the blocks. Results: Overt conflict behavior was exhibited in all blocks, but always for under 10% of the trials on average in each block. However, changing the order of the paradigms successfully introduced a 'reset' of the conflict process, therefore providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to the other trials. More specifically, we expect elevated alpha-frequency power in the left frontal electrodes at around 200 ms post-cueing compared to the right ones (relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal asymmetry discussion and its relationship with the BIS. Furthermore, with the present task focusing on a very particular type of motivational approach-avoidance conflict, it would open the door to further variations of the paradigm to introduce different kinds of conflicts involved in AD. Even though its application as a potential biomarker sounds difficult, because of the individual reliability of both the task and peak frequency in the alpha range, we hope to open the discussion on task robustness for future neuromodulation and neurofeedback applications.
Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG
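A hedged sketch of the frontal alpha asymmetry measure implied by this hypothesis is given below; the electrode pair (F3/F4), sampling rate, and Welch parameters are assumptions rather than the authors' actual pipeline.

```python
# Sketch of a frontal alpha asymmetry index; channel names, sampling rate,
# and spectral settings are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs=250, band=(8.0, 13.0)):
    """Mean spectral power of one electrode's signal in the alpha band."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    sel = (f >= band[0]) & (f <= band[1])
    return pxx[sel].mean()

def frontal_alpha_asymmetry(left, right, fs=250):
    """ln(right alpha) - ln(left alpha). Alpha power varies inversely with
    cortical activity, so a negative index (more left alpha) reflects the
    relatively higher right frontal activity expected around 200 ms post-cue."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

# Hypothetical usage with two frontal channels (e.g., F3/F4) of one trial epoch:
rng = np.random.default_rng(0)
f3, f4 = rng.standard_normal(500), rng.standard_normal(500)
print(frontal_alpha_asymmetry(f3, f4))
```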
Procedia PDF Downloads 40
368 A Survey of Digital Health Companies: Opportunities and Business Model Challenges
Authors: Iris Xiaohong Quan
Abstract:
The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms such as digital health, e-health, mHealth, and telehealth have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to remotely deliver clinical health services to patients. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been going through in the 21st century because of digital health technologies that provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies’ self-descriptions: digital health, e-health, and telehealth. A search for “digital health” returned 941 unique results, “e-health” returned 167 companies, and “telehealth” returned 427. We also searched the CBInsights database for similar information. After merging the results, removing duplicates, and cleaning up the database, we arrived at a list of 1,464 digital health companies. A qualitative method is used to complement the quantitative analysis: an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve and to discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010. 75% of the digital health companies have fewer than 50 employees, and almost 50% have fewer than 10 employees. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” (patient, physicians, and payer), digital health companies extend this to “5P” by adding patents, which is the result of technology requirements (such as the development of artificial intelligence models), and platform, which is an effective value creation approach to bring the stakeholders together. Our case analysis will detail the 5P framework and contribute to the extant knowledge on business models in the healthcare industry.
Keywords: digital health, business models, entrepreneurship opportunities, healthcare
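The merge-and-deduplicate step described above can be illustrated with a short pandas sketch; the records and column names are hypothetical stand-ins for the Crunchbase and CBInsights exports.

```python
# Hypothetical reconstruction of the merge-and-deduplicate step; the records
# and column names are illustrative, not the authors' actual schema.
import pandas as pd

crunchbase = pd.DataFrame({"company_name": ["Alpha Health ", "beta care", "Gamma Tele"],
                           "source": "crunchbase"})
cbinsights = pd.DataFrame({"company_name": ["alpha health", "Delta mHealth"],
                           "source": "cbinsights"})

combined = pd.concat([crunchbase, cbinsights], ignore_index=True)
combined["name_key"] = combined["company_name"].str.strip().str.lower()
companies = combined.drop_duplicates(subset="name_key").drop(columns="name_key")
print(len(companies))  # the paper arrives at 1,464 companies after this kind of cleaning
```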
Procedia PDF Downloads 183
367 Sustainable Production of Pharmaceutical Compounds Using Plant Cell Culture
Authors: David A. Ullisch, Yantree D. Sankar-Thomas, Stefan Wilke, Thomas Selge, Matthias Pump, Thomas Leibold, Kai Schütte, Gilbert Gorr
Abstract:
Plants have been considered a source of natural substances for ages. Secondary metabolites from plants are utilized especially in medical applications but are of growing interest as cosmetic ingredients and in the field of nutraceuticals. However, the supply of compounds from natural harvest can be limited by numerous factors, e.g., endangered species, low product content, climate impacts, and cost-intensive extraction. Especially in the pharmaceutical industry, the ability to provide sufficient amounts of product at high quality are additional requirements which in some cases are difficult to fulfill by plant harvest. Whereas in many cases the complexity of secondary metabolites precludes chemical synthesis on a reasonable commercial basis, plant cells contain the biosynthetic pathway – a natural chemical factory – for a given compound. A promising approach for the sustainable production of natural products is plant cell fermentation (PCF®). A thorough development process comprises the identification of a high-producing cell line, optimization of growth and production conditions, the development of a robust and reliable production process, and its scale-up. In order to ensure persistent, long-lasting production, the development of cryopreservation protocols and the generation of working cell banks is another important requirement to be considered. So far the most prominent example of a PCF® process is the production of the anticancer compound paclitaxel. To demonstrate the power of plant suspension cultures, here we present three case studies: 1) For more than 17 years, Phyton has produced paclitaxel at industrial scale, i.e., up to 75,000 L in scale. With 60 g/kg dw, this fully controlled process, which is applied according to GMP, results in outstandingly high yields. 2) Thapsigargin is another anticancer compound, currently isolated from seeds of Thapsia garganica. Thapsigargin is a powerful cytotoxin – a SERCA inhibitor – and the precursor of the derivative ADT, the key ingredient of the investigational prodrug Mipsagargin (G-202), which is in several clinical trials. Phyton has successfully generated plant cell lines capable of expressing this compound. Here we present data on the screening for high-producing cell lines. 3) The third case study covers ingenol-3-mebutate. This compound is found at very low concentrations in the milky sap of intact plants of the Euphorbiaceae family. Ingenol-3-mebutate is used in Picato®, which is approved against actinic keratosis. The generation of cell lines expressing significant amounts of ingenol-3-mebutate is another example underlining the strength of plant cell culture. The authors gratefully acknowledge Inspyr Therapeutics for funding.
Keywords: Ingenol-3-mebutate, plant cell culture, sustainability, thapsigargin
Procedia PDF Downloads 251
366 Recent Advances in Research on Carotenoids: From Agrofood Production to Health Outcomes
Authors: Antonio J. Melendez-Martinez
Abstract:
Beyond their role as natural colorants, some carotenoids are provitamins A and may be involved in health-promoting biological actions, contributing to a reduced risk of developing non-communicable diseases, including several types of cancer, cardiovascular disease, eye conditions, skin disorders, and metabolic disorders. Given the versatility of carotenoids, the COST-funded European network to advance carotenoid research and applications in agro-food and health (EUROCAROTEN) is aimed at promoting health through the diet and increasing well-being. Stakeholders from 38 countries participate in this network, and one of its main objectives is to promote research on little-studied carotenoids. In this contribution, recent advances by our research group and collaborators in the study of two such understudied carotenoids, namely phytoene and phytofluene, the colorless carotenoids, are outlined. The study of these carotenoids is important because they have been largely neglected despite being present in our diets, fluids, and tissues, and evidence is accumulating that they may be involved in health-promoting actions. More specifically, studies were carried out on their levels in diverse tomato and orange varieties as well as on their potential bioavailability from different dietary sources. Furthermore, the potential effect of these carotenoids on an animal model subjected to oxidative stress was evaluated. The tomatoes were grown in research greenhouses, and some of them were subjected to regulated deficit irrigation, a sustainable agronomic practice. The citrus samples were obtained from an experimental field. The levels of carotenoids were assessed using HPLC according to routine methodologies followed in our lab. Regarding the potential bioavailability (bioaccessibility) studies, different products containing colorless carotenoids, such as fruits and juices, were subjected to simulated in vitro digestions, and their incorporation into mixed micelles was assessed. The effect of the carotenoids on oxidative stress was evaluated in the Caenorhabditis elegans model. For that purpose, the worms were subjected to oxidative stress by means of a hydrogen peroxide challenge. Concerning the presence of colorless carotenoids in tomato and orange varieties, it was observed that they are widespread in such products and that there are mutants with very high quantities of them, for instance, the Cara Cara or Pinalate mutant oranges. The studies on their bioaccessibility revealed that, in general, phytoene and phytofluene are more bioaccessible than other common dietary carotenoids, probably due to their distinctive chemical structure. As for the in vivo antioxidant capacity of phytoene and phytofluene, it was observed that both exerted antioxidant effects at certain doses. In conclusion, evidence of the importance of phytoene and phytofluene as easily bioavailable, antioxidant dietary carotenoids has been obtained in recent studies from our group, which may soon prove important for innovation in health promotion through the development of functional foods and related products.
Keywords: carotenoids, health, functional foods, nutrition, phytoene, phytofluene
Procedia PDF Downloads 103
365 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities
Authors: Shaurya Chauhan, Sagar Gupta
Abstract:
Prominent urbanizing centres across the globe like Delhi, Dhaka, or Manila have exhibited that development often faces a challenge in bridging the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of its ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as the automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach mediating between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region’s cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools, and references from recognized literature. This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales in design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. Beyond the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for a continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.
Keywords: open source, public participation, urbanization, urban development
Procedia PDF Downloads 149
364 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data
Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder
Abstract:
Problem and Purpose: Intelligent systems are available and helpful for supporting human decision processes, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the actual assistance, provided by explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is crucial, because there are many correlations between the complex parameters. So, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures seem to be very helpful. In particular, subgroup analysis methods are developed, extended, and used to analyze and find the correlations and conditional dependencies within the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymized patient data are transformed into a special machine-readable format. The imported data are used as input for conditional probability algorithms to calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications could be employed to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances, and the patient-specific history through a dependency ranking process. After transformation into association rules, logic-based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as characteristic features per patient. For differently sized patient groups (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted with regard to their dependence on or independence of the patient number. Conclusions: The aim and advantage of such a semi-automated self-learning process is the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as the rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and the conjunctively associated conditions can be found for concluding the goal parameter of interest. In this way, knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This provides real assistance for the communication with the clinical experts.
Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods
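A minimal sketch of the rule-generation idea follows: candidate association rules (premise implies goal parameter) are scored by support, confidence, and lift; the record layout and parameter names are illustrative assumptions, not the project's actual ophthalmic schema.

```python
# Score a candidate association rule (premise -> goal) on toy patient records;
# field names and values are hypothetical.
records = [
    {"instrument": "A", "equipment": "X", "outcome": "good"},
    {"instrument": "A", "equipment": "Y", "outcome": "good"},
    {"instrument": "B", "equipment": "X", "outcome": "poor"},
]

def rule_scores(records, premise, goal):
    """premise: list of (key, value) conditions; goal: one (key, value) pair.
    Returns (support, confidence, lift) of the rule premise -> goal."""
    n = len(records)
    has_p = [r for r in records if all(r[k] == v for k, v in premise)]
    has_both = [r for r in has_p if r[goal[0]] == goal[1]]
    p_goal = sum(r[goal[0]] == goal[1] for r in records) / n
    support = len(has_both) / n
    confidence = len(has_both) / len(has_p) if has_p else 0.0
    lift = confidence / p_goal if p_goal else 0.0
    return support, confidence, lift

# Rank rules by confidence to surface the strongest premises for a goal parameter.
print(rule_scores(records, [("instrument", "A")], ("outcome", "good")))
```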
Procedia PDF Downloads 253
363 Learning and Teaching Conditions for Students with Special Needs: Asset-Oriented Perspectives and Approaches
Authors: Dr. Luigi Iannacci
Abstract:
This research critically explores the current educational landscape with respect to special education and the dominant deficit/medical-model discourses that continue to promote unresponsive, problematic approaches to teaching students with disabilities. Asset-oriented perspectives and social/critical models of disability are defined and explicated in order to offer alternatives to these dominant discourses. To that end, a framework that draws on Brian Cambourne’s conditions of learning, and on applications of his work in relation to instruction, is used to conceptualize learning conditions and their significance for students with special needs. Methodologically, the research is designed as Critical Narrative Inquiry (CNI). Critical incidents, interviews, documents, artefacts, etc. are drawn on and narratively constructed to explore how disability is presently configured in language, discourses, pedagogies, and interactions with students deemed disabled. These data were collected using ethnographic methods, specifically participant-observer fieldwork that occurred directly in classrooms. This narrative approach aims to make sense of complex classroom interactions and of ways of reconceptualizing approaches to students with special needs. CNI is situated in the critical paradigm and is primarily concerned with culture, language, and participation as issues of power in need of critique, with the intent of change in the direction of social justice. The research findings highlight the ways in which Cambourne’s learning conditions, such as demonstration, approximation, engagement, responsibility, immersion, expectation, and employment (transfer, use), provide a clear understanding of what is central to and constitutes a responsive and inclusive instructional frame. Examples of what each of these conditions looks like in practice are therefore offered in order to concretely demonstrate the ways in which various pedagogical choices and questions can enable classroom spaces to be responsive to the assets and challenges students with special needs have and experience. These particular approaches are also illustrated through an exploration of multiliteracies theory and pedagogy and of what this research and approach allow educators to draw on, facilitate, and foster in terms of the ways in which students with special needs can make sense of and demonstrate their understanding of skills, content, and knowledge. The contextual information, theory, research, and instructional frame focused on throughout this inquiry ultimately demonstrate what inclusive classroom spaces and practice can look like. These perspectives and conceptualizations stand in stark contrast to dominant deficit-driven approaches that sustain pedagogically impoverished teaching focused on narrow, limited, and limiting understandings of special-needs learners and their ways of knowing and acquiring/demonstrating knowledge.
Keywords: asset-oriented approach, social/critical model of disability, conditions for learning and teaching, students with special needs
Procedia PDF Downloads 69
362 Testing Nitrogen and Iron Based Compounds as an Environmentally Safer Alternative to Control Broadleaf Weeds in Turf
Authors: Simran Gill, Samuel Bartels
Abstract:
Turfgrass is an important component of urban and rural lawns and landscapes. However, broadleaf weeds such as dandelions (Taraxacum officinale) and white clovers (Trifolium repens) pose major challenges to the health and aesthetics of turfgrass fields. Chemical weed control methods, such as 2,4-D weedicides, have been widely deployed; however, their safety and environmental impacts are often debated. Alternative, environmentally friendly control methods have been considered, but experimental tests of their effectiveness have been limited. This study investigates the use and effectiveness of nitrogen and iron compounds as nutrient management methods of weed control. The first phase of a two-phase experiment was conducted under controlled greenhouse conditions on a blend of cool-season turfgrasses grown in plastic containers; the blend included perennial ryegrass (Lolium perenne), Kentucky bluegrass (Poa pratensis), and creeping red fescue (Festuca rubra). It involved the application of different nitrogen (urea and ammonium sulphate) and iron (chelated iron and iron sulphate) compounds and their combinations (urea × chelated iron, urea × iron sulphate, ammonium sulphate × chelated iron, ammonium sulphate × iron sulphate), contrasted with a chemical 2,4-D weedicide and a control (no application) treatment. There were three replicates of each treatment, resulting in a total of 30 treatment combinations. The parameters assessed during weekly data collection included a visual quality rating of the weeds (scale of 0-9), the number of leaves, the longest leaf span, the number of weeds, the chlorophyll fluorescence of the grass, a visual quality rating of the grass (0-9), and the weight of dried grass clippings. The results of the experiment, conducted over a period of 12 weeks with three applications at intervals of 4 weeks, indicated that the combination of ammonium sulphate and iron sulphate was the most effective in halting the growth and establishment of dandelions and clovers while also improving turf health. The second phase of the experiment, which involved the ammonium sulphate × iron sulphate, weedicide, and control treatments, was conducted outdoors on already established perennial turf with weeds under natural field conditions. After 12 weeks of observation, the results were comparable among the treatments in terms of weed control, but the ammonium sulphate × iron sulphate treatment fared much better in terms of the improved visual quality of the turf and other quality ratings. Preliminary results from these experiments thus suggest that nutrient management based on nitrogen and iron compounds could be a useful, environmentally friendly alternative for controlling broadleaf weeds and improving the health and quality of turfgrass.
Keywords: broadleaf weeds, nitrogen, iron, turfgrass
Procedia PDF Downloads 73
361 Guests’ Satisfaction and Intention to Revisit Smart Hotels: Qualitative Interviews Approach
Authors: Raymond Chi Fai Si Tou, Jacey Ja Young Choe, Amy Siu Ian So
Abstract:
Smart hotels can be defined as hotels that use intelligent systems, digitalization, and networking to integrate hotel management and service information. In addition, smart hotels feature high-end designs that integrate information and communication technology with hotel management, fulfilling guests’ needs and improving the quality, efficiency, and satisfaction of hotel management. The purpose of this study is to identify appropriate factors that may influence guests’ satisfaction and intention to revisit smart hotels, based on the service quality measurement of the Lodging Quality Index and the extended UTAUT theory. The Unified Theory of Acceptance and Use of Technology (UTAUT) is adopted as a framework to explain technology acceptance and use. Since smart hotels are technology-based infrastructure hotels, UTAUT theory can serve as the theoretical background to examine guests’ acceptance and use after staying in smart hotels. The UTAUT identifies four key drivers of the adoption of information systems: performance expectancy, effort expectancy, social influence, and facilitating conditions. The extended UTAUT considers seven constructs: the four constructs of the original UTAUT model together with three additional constructs, namely hedonic motivation, price value, and habit. Thus, the seven constructs from the extended UTAUT theory can be adopted to understand guests’ intention to revisit smart hotels. The service quality model will also be adopted and integrated into the framework to understand guests’ intention to revisit smart hotels. Few studies have examined the effect of service quality on guests’ satisfaction and intention to revisit smart hotels. In this study, the Lodging Quality Index (LQI) will be adopted to measure service quality in smart hotels. An integrated UTAUT and service quality model is used because technological applications and services require more than one model to understand the complicated situation of customers’ acceptance of new technology. Moreover, an integrated model can provide more insights to explain the relationships among the constructs than could be obtained from only one model. For this research, ten in-depth interviews are planned. In order to confirm the applicability of the proposed framework and gain an overview of the guest experience of smart hotels in the hospitality industry, in-depth interviews with hotel guests and industry practitioners will be conducted. In terms of the theoretical contribution, it is expected that the model integrating the UTAUT theory and service quality will provide new insights into the factors that influence guests’ satisfaction and intention to revisit smart hotels. Once this study identifies the influential factors, smart hotel practitioners will understand which factors may significantly influence smart hotel guests’ satisfaction and intention to revisit. In addition, smart hotel practitioners can provide an outstanding guest experience by improving their service quality along the dimensions identified by the service quality measurement. Thus, the study will be beneficial to the sustainability of the smart hotel business.
Keywords: intention to revisit, guest satisfaction, qualitative interviews, smart hotels
Procedia PDF Downloads 208
360 Tailorability of Poly(Aspartic Acid)/BSA Complex by Self-Assembling in Aqueous Solutions
Authors: Loredana E. Nita, Aurica P. Chiriac, Elena Stoleru, Alina Diaconu, Tudorachi Nita
Abstract:
Self-assembly processes are an attractive method to form new and complex structures between macromolecular compounds for specific applications. In this context, intramolecular and intermolecular bonds play a key role in self-assembly during the preparation of carrier systems for bioactive substances. Polyelectrolyte complexes (PECs) are formed through electrostatic interactions, and though these are significantly weaker than covalent linkages, the complexes are sufficiently stable owing to the association processes. The relative ease of PEC formation makes them a versatile tool for the preparation of various materials, with properties that can be tuned by adjusting several parameters, such as the chemical composition and structure of the polyelectrolytes, the pH and ionic strength of the solutions, the temperature, and post-treatment procedures. For example, protein-polyelectrolyte complexes (PPCs) play an important role in various chemical and biological processes, such as protein separation, enzyme stabilization, and polymeric drug delivery systems. The present investigation is focused on the evaluation of PPC formation between a synthetic polypeptide (poly(aspartic acid) – PAS) and a natural protein (bovine serum albumin – BSA). The PPCs obtained from PAS and BSA in different ratios were investigated by corroborating various characterization techniques: spectroscopy, microscopy, thermogravimetric analysis, DLS, and zeta potential determination, with measurements performed under static and/or dynamic conditions. The static contact angle of the sample films was also determined in order to evaluate the changes in the surface free energy of the prepared PPCs in relation to the composition of the complexes. The evolution of the hydrodynamic diameter and zeta potential of the PPC, recorded in situ, confirms changes in the conformation of both partners, with a 1/1 ratio between protein and polyelectrolyte being beneficial for the preparation of a stable PPC. The study also evidenced the dependence of PPC formation on the preparation temperature. Thus, at low temperatures the PPC is formed with a compact structure and small dimensions, with a hydrodynamic diameter close to that of BSA. The behavior of the prepared PPCs under thermal treatment is in agreement with the composition of the complexes. The contact angle determinations show an increase in the cohesion of the PPC films, which is higher than that of BSA films. A higher hydrophobicity also characterizes the new PPC films, denoting good adhesion of red blood cells onto the surface of the PAS/BSA interpenetrated systems. The SEM investigation likewise evidenced the specific internal structure of the PPC, concretized in phases of different sizes and shapes depending on the composition of the interpolymer mixture.
Keywords: polyelectrolyte – protein complex, bovine serum albumin, poly(aspartic acid), self-assembly
Procedia PDF Downloads 246
359 Characteristics of Bio-hybrid Hydrogel Materials with Prolonged Release of the Model Active Substance as Potential Wound Dressings
Authors: Katarzyna Bialik-Wąs, Klaudia Pluta, Dagmara Malina, Małgorzata Miastkowska
Abstract:
In recent years, biocompatible hydrogels have been used more and more in medical applications, especially as modern dressings and drug delivery systems. The main goal of this research was the characterization of bio-hybrid hydrogel materials incorporating a nanocarrier-drug system, which enables release in a gradual and prolonged manner for up to 7 days. The use of such a combination thus provides protection against mechanical damage along with adequate hydration. The proposed bio-hybrid hydrogels are characterized by transparency, biocompatibility, good mechanical strength, and a dual release system, which allows for the gradual delivery of the active substance for up to 7 days. Bio-hybrid hydrogels based on sodium alginate (SA), poly(vinyl alcohol) (PVA), glycerine, and Aloe vera solution (AV) were obtained through a chemical crosslinking method using poly(ethylene glycol) diacrylate as the crosslinking agent. Additionally, a nanocarrier-drug system was incorporated into the SA/PVA/AV hydrogel matrix. Here, the studies were focused on the release profiles of active substances from the bio-hybrid hydrogels using the USP4 method (DZF II Flow-Through System, Erweka GmbH, Langen, Germany). The equipment incorporated seven in-line flow-through diffusion cells. The membrane was placed over a support with an orifice of 1.5 cm in diameter (diffusional area, 1.766 cm²). All the cells were placed in a cell warmer connected to the Erweka heater DH 2000i and the Erweka piston pump HKP 720. The piston pump transports the receptor fluid via seven channels to the flow-through cells and automatically adapts the flow rate setting. All volumes were measured gravimetrically by filling the chambers with Milli-Q water and assuming a density of 1 g/ml. All determinations were made in triplicate for each cell. The release study of the model active substance was carried out using a regenerated cellulose membrane (Spectra/Por® Dialysis Membrane, MWCO 6-8,000, Carl Roth®). The tests were conducted in PBS buffer solution at pH 7.4. A receptor fluid flow rate of about 4 ml/min was selected. The experiments were carried out for 7 days at a temperature of 37°C. The concentration of the model drug released into the receptor solution was analyzed using UV-Vis spectroscopy (PerkinElmer). Additionally, the physicochemical, structural (FT-IR analysis), and morphological (SEM analysis) properties of the modified materials were studied. Finally, in vitro cytotoxicity tests were conducted. The obtained results showed that the dual release system allows for the gradual and prolonged delivery of the active substances for up to 7 days.
Keywords: wound dressings, SA/PVA hydrogels, nanocarrier-drug system, USP4 method
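As a rough illustration of how UV-Vis readings translate into a release profile, the sketch below converts absorbance to cumulative percentage released, assuming a closed-loop USP4 configuration and hypothetical calibration values.

```python
# Hedged sketch: absorbance -> cumulative release, assuming a closed-loop setup
# where the receptor volume is recirculated; all numeric values are hypothetical.
import numpy as np

epsilon_path = 0.052      # calibration slope, absorbance per (ug/ml), assumed
cell_volume_ml = 20.0     # recirculated receptor volume per cell, assumed
dose_ug = 500.0           # model drug loaded in the hydrogel sample, assumed

times_h = np.array([1, 6, 24, 48, 96, 168])               # sampling over 7 days
absorbance = np.array([0.10, 0.35, 0.80, 1.20, 1.55, 1.70])

conc_ug_ml = absorbance / epsilon_path                    # Beer-Lambert: A = eps*l*c
cumulative_pct = 100.0 * conc_ug_ml * cell_volume_ml / dose_ug
for t, pct in zip(times_h, cumulative_pct):
    print(f"{t:>4} h : {pct:5.1f} % released")
```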
Procedia PDF Downloads 148
358 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent developments on this topic and presents results from some of the fine-scale rainfall models constructed from this class of stochastic point processes. In the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall, aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket-tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite-state irreducible Markov process X(t). Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach is that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
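A minimal simulation sketch of such a DSPP follows: a two-state Markov-modulated Poisson process whose bucket-tip rate switches with a hidden Markov chain; the generator and rates are illustrative, not the paper's fitted parameters.

```python
# Two-state Markov-modulated Poisson process: N(t) is Poisson with rate lam[X(t)],
# where X(t) is a hidden continuous-time Markov chain. Parameters are made up.
import numpy as np

rng = np.random.default_rng(1)
Q = np.array([[-0.5, 0.5],      # generator of the hidden process X(t), per hour
              [ 2.0, -2.0]])
lam = np.array([0.2, 6.0])      # bucket-tip rate per hour in each hidden state

def simulate_mmpp(T=48.0, state=0):
    """Return sorted arrival (tip) times of N(t) on [0, T]."""
    t, arrivals = 0.0, []
    while t < T:
        hold = rng.exponential(1.0 / -Q[state, state])   # sojourn in current state
        span = min(hold, T - t)
        n = rng.poisson(lam[state] * span)               # Poisson arrivals in this sojourn
        arrivals.extend(t + rng.uniform(0.0, span, n))
        t += hold
        state = 1 - state                                # two states: simply switch
    return np.sort(arrivals)

tips = simulate_mmpp()
print(len(tips), "bucket tips in 48 h")
```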
Procedia PDF Downloads 278
357 Fast Detection of Local Fiber Shifts by X-Ray Scattering
Authors: Peter Modregger, Özgül Öztürk
Abstract:
Glass fabric reinforced thermoplastics (GFRT) are composite materials which combine low weight with resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of defect is local fiber shifts, which can be difficult to detect. Recently, we have experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI constitutes a novel X-ray imaging technique that utilizes two slit masks, one in front of the sample and one in front of the detector, in order to simultaneously provide absorption, phase, and scattering contrast. The principle of contrast formation is as follows. The incident X-ray beam is split into small beamlets by the sample mask. These are distorted by the interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in a Gaussian-like intensity distribution in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we have shown that the standard deviation of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. The defect detection performance was quantified using p-values (p=0.002 for absorption and p=0.009 for scattering) and contrast-to-noise ratios (CNR=3.0 for absorption and CNR=2.1 for scattering) between the two groups of samples. For the scattering contrast, this was further improved to p=0.0004 and CNR=4.2 by utilizing a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrast provided by EI. However, a potential application in, for example, production monitoring requires fast data acquisition. For the results above, the scanning of the sample mask was performed over 50 individual steps, which resulted in long total scan times. In this paper, we will demonstrate that reliable detection of local fiber shift defects is also possible using single images, which implies a speed-up of the total scan time by a factor of 50. Additional performance improvements will also be discussed, which opens the possibility of real-time acquisition. This constitutes a vital step in the translation of EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge illumination
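The group comparison described above can be sketched as follows; one common contrast-to-noise definition is used, and the per-sample scattering values are made up for illustration.

```python
# Hedged sketch: CNR and p-value between intact and defective sample groups,
# computed from per-sample standard deviations of the scattering signal over
# the field of view. All numbers are illustrative, not the paper's data.
import numpy as np
from scipy import stats

intact = np.array([0.110, 0.105, 0.112, 0.108])   # std of scattering over the FOV
defect = np.array([0.140, 0.150, 0.145, 0.138])   # samples with local fiber shifts

# One common CNR definition: mean difference over pooled standard deviation.
pooled_sd = np.sqrt(0.5 * (defect.var(ddof=1) + intact.var(ddof=1)))
cnr = abs(defect.mean() - intact.mean()) / pooled_sd
t, p = stats.ttest_ind(defect, intact)
print(f"CNR = {cnr:.1f}, p = {p:.4f}")
```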
Procedia PDF Downloads 63
356 Method for Requirements Analysis and Decision Making for Restructuring Projects in Factories
Authors: Rene Hellmuth
Abstract:
The requirements for factory planning and the buildings concerned have changed in recent years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is gaining importance as a means of maintaining the competitiveness of a factory. Restrictions regarding new areas, shorter life cycles of products and production technology, as well as a VUCA (volatility, uncertainty, complexity, and ambiguity) world cause rebuilding measures within a factory to occur more frequently. Restructuring is the most common planning case today, more common than new construction, revitalization, or dismantling of factories. The increasing importance of restructuring processes shows that the ability to change was and is a promising concept for how companies react to permanently changing conditions. The factory building is the basis for most changes within a factory. If an adaptation of a construction project (factory) is necessary, the inventory documents must be checked, and often time-consuming planning of the adaptation must take place to define the relevant components to be adapted so that they can finally be evaluated. The different requirements of the planning participants from the disciplines of factory planning (production planner, logistics planner, automation planner) and industrial construction planning (architect, civil engineer) come together during reconstruction and must be structured. This raises the research question: Which requirements do the disciplines involved in reconstruction planning place on a digital factory model? A subordinate research question is: How can model-based decision support be provided for a more efficient design of conversions within a factory? Because of the high adaptation rate of factories and their buildings described above, a methodology for restructuring factories, based on the requirements engineering method from software development, is conceived and designed for practical application in factory restructuring projects. The explorative research procedure according to Kubicek is applied; explorative research is suitable when the practical usability of the research results has priority. Furthermore, it will be shown how best to use a digital factory model in practice. The focus will be on mobile applications to meet the needs of factory planners on site. An augmented reality (AR) application will be designed and created to provide decision support for planning variants. The aim is to contribute to a shortening of the planning process and to model-based decision support for more efficient change management. This requires the application of a methodology that reduces the deficits of existing approaches. The time and cost expenditure are represented in the AR tablet solution based on a building information model (BIM). Overall, the requirements of those involved in the planning process for a digital factory model in the case of restructuring within a factory are thus first determined in a structured manner. The results are then applied and transferred to a construction site solution based on augmented reality.
Keywords: augmented reality, digital factory model, factory planning, restructuring
Procedia PDF Downloads 134
355 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
There is dispersed energy at radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wired connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. The second step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by a combination of one or more Schottky diodes connected in series or in shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations are used, such as voltage doublers or modified bridge converters. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for the analysis of quasi-static electromagnetic field interaction, i.e., at low frequency, these simulators are limited in that they cannot properly model microwave hybrid circuits containing both lumped and distributed elements. This work therefore proposes the electromagnetic modelling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the numerical method FDTD (Finite-Difference Time-Domain) is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the equations of current density and electric field within the FDTD method, together with its circuit relation to the voltage drop across the modeled component, for the case of lumped parameters, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations proposed in the literature for the passive components and for the diode. Next, a rectifier is built to the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems
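A didactic one-dimensional sketch of the LE-FDTD idea is given below: an ordinary Yee update with a lumped resistor embedded at one cell through a semi-implicit conduction term; the grid size, constants, and source are illustrative assumptions, not the authors' actual model.

```python
# 1D LE-FDTD sketch: Yee updates for Ez/Hy plus a lumped 50-ohm resistor at one
# cell via a semi-implicit conduction term. Parameters are illustrative only.
import numpy as np

c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
nz, dz = 200, 1e-3
dt = 0.5 * dz / c0                       # Courant-stable time step
ez, hy = np.zeros(nz), np.zeros(nz - 1)

k_load, R, area = 150, 50.0, dz * dz     # lumped resistor cell and assumed cross-section
beta = dt * dz / (2.0 * R * eps0 * area) # semi-implicit LE-FDTD coefficient

for n in range(1500):
    hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])       # magnetic field update
    curl = (hy[1:] - hy[:-1]) / dz                   # dHy/dz at interior E nodes
    ez_old = ez[k_load]
    ez[1:-1] += dt / eps0 * curl                     # ordinary Yee update everywhere
    # overwrite the resistor cell with the semi-implicit lumped-element update
    ez[k_load] = ((1 - beta) * ez_old + dt / eps0 * curl[k_load - 1]) / (1 + beta)
    ez[20] += np.exp(-((n - 60) / 15.0) ** 2)        # soft Gaussian excitation
```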
Procedia PDF Downloads 133
354 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress
Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang
Abstract:
The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packages and the interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have a great impact on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution which can improve the stability of circuits and allow GaN-based devices to achieve more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as the gate drivers. E-mode p-HFETs with recessed gates have attracted increasing interest because of their low leakage current and large gate swing. However, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, the inverter is always operated with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress has been applied to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with a rise/fall time of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out, with the stress briefly interrupted to measure the linear and saturation IDS-VGS characteristics. As VGS switches from -5 V to 0 V with VDS = 0 V, the devices are under a negative-bias-instability (NBI) condition. Holes are trapped at the interface of the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is pronounced during the first 10 s and then changes only slightly with further stress time. However, a different phenomenon is observed when VDS is reduced to -5 V: VTH shifts negatively during stress, and the variation in VTH increases with time, unlike the case when VDS is 0 V. Two mechanisms exist in this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so that the trapping behavior of holes in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier-stress (HCS) degradation with time. The poor-quality interface between the oxide layer and the GaN channel layer in the gate region makes a major contribution to the high density of interface traps, which greatly influences the reliability of the devices. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.
Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology
Procedia PDF Downloads 93353 Switchable Lipids: From a Molecular Switch to a pH-Sensitive System for the Drug and Gene Delivery
Authors: Jeanne Leblond, Warren Viricel, Amira Mbarek
Abstract:
Although several products have reached the market, gene therapeutics are still in their early stages and require optimization. Their limited efficiency can be improved by the use of carefully engineered vectors able to carry the genetic material through each of the biological barriers it needs to cross. In particular, getting inside the cell is a major challenge, because these hydrophilic nucleic acids have to cross the lipid-rich plasma and/or endosomal membrane before being degraded in lysosomes. It takes less than one hour for newly endocytosed liposomes to reach highly acidic lysosomes, meaning that degradation of the carried gene occurs rapidly, thus limiting the transfection efficiency. We propose to use a new pH-sensitive lipid able to change its conformation upon protonation at endosomal pH values, leading to the disruption of the lipid bilayer and thus to the fast release of the nucleic acids into the cytosol. This new pH-sensitive mechanism is expected to promote endosomal escape of the gene and thereby its transfection efficiency. The main challenge of this work was to design a preparation with fast-responding bilayer-destabilization properties at endosomal pH 5 while remaining stable at blood pH and during storage. A series of pH-sensitive lipids able to perform a conformational switch upon acidification were designed and synthesized. Liposomes containing these switchable lipids, as well as co-lipids, were prepared and characterized. The liposomes were stable at 4°C and pH 7.4 for several months. Incubation with siRNA led to full entrapment of the nucleic acids as soon as the positive/negative charge ratio exceeded 2. The best liposomal formulation demonstrated a silencing efficiency up to 10% on HeLa cells, very similar to that of a commercial agent, with lower toxicity than the commercial agent. Using flow cytometry and microscopy assays, we demonstrated that a drop in pH was required for transfection, since bafilomycin blocked the transfection efficiency. Additional evidence was provided by the synthesis of a negative-control lipid, which was unable to switch its conformation and consequently exhibited no transfection ability. Mechanistic studies revealed that uptake was mediated through endocytosis, by clathrin and caveolae pathways, as reported for previous lipid nanoparticle systems. This potent system was used for the treatment of hypercholesterolemia: the switchable lipids were able to knock down PCSK9 expression in human hepatocytes (Huh-7), and their efficiency is currently being evaluated in an in vivo PCSK9 knockout (KO) mouse model. In summary, we designed and optimized a new cationic pH-sensitive lipid for gene delivery. Its transfection efficiency is similar to that of the best available commercial agent, without the usually associated toxicity. These promising results have led to its use for the treatment of hypercholesterolemia in a mouse model. Anticancer applications and chronic pulmonary disease are also currently under investigation.Keywords: liposomes, siRNA, pH-sensitive, molecular switch
Procedia PDF Downloads 204352 Analysing the Stability of Electrical Grid for Increased Renewable Energy Penetration by Focussing on LI-Ion Battery Storage Technology
Authors: Hemendra Singh Rathod
Abstract:
Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has increased significantly worldwide. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) has been a promising technology for tackling the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate on whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very low maintenance, are independent of geographical constraints, and are easily scalable. The paper highlights the use of stationary and moving BESS for balancing electrical energy and thereby maintaining grid frequency with a rapid response. Moving BESS technology, as implemented in the selected railway network in Germany, is considered here as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, applications of Li-ion batteries such as self-consumption of wind and solar parks and their ancillary services, storage of wind and solar energy during low demand, black start, island operation, and residential home storage offer a solution for effectively integrating renewables and supporting Europe’s future smart grid. The EMT software tool DIgSILENT PowerFactory has been utilised to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system has been evaluated together with BESS within a defined frequency band. The transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by the TSO by maintaining a balance between electricity generation and consumption. Li-ion battery systems are seen here as flexible, controllable loads and flexible, controllable generation for balancing energy pools. Thus, using a Li-ion battery storage solution, frequency-dependent load shedding (the automatic, gradual disconnection of loads from the grid) and frequency-dependent electricity generation (the automatic, gradual connection of BESS to the grid) serve as security measures to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability in near-future operations.Keywords: frequency control, grid stability, li-ion battery storage, smart grid
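A minimal Python sketch of such frequency-dependent BESS dispatch logic is given below; the dead-band width, droop band, and function names are our illustrative assumptions, not values from the paper. Inside a dead-band around nominal frequency the battery idles; below it the BESS gradually connects as generation, above it as controllable load.

def bess_power_setpoint(f_hz, p_max_mw, f_nom=50.0,
                        deadband_hz=0.01, droop_band_hz=0.2):
    # Positive output = discharge (acts as generation when frequency is low);
    # negative output = charge (acts as load when frequency is high).
    df = f_hz - f_nom
    if abs(df) <= deadband_hz:
        return 0.0                       # within the dead-band: idle
    edge = deadband_hz if df > 0 else -deadband_hz
    p = -p_max_mw * (df - edge) / (droop_band_hz - deadband_hz)
    return max(-p_max_mw, min(p_max_mw, p))  # saturate at rated power

# Example: a 10 MW BESS reacting to an under-frequency event at 49.85 Hz
print(bess_power_setpoint(49.85, 10.0))  # ~7.4 MW of discharge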
Procedia PDF Downloads 150351 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets using an OpenScience Energy System Optimization Model
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries, and heavy-duty transport. In this regard, Italy, within the framework of decarbonization plans for the whole European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options for the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the required asset alternatives. To accomplish this objective, the Energy System Optimization Model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies over a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040, and 2050, without any other specific target. The second one (inspired by the national objectives on the development of the sector) promotes the deployment of the hydrogen value chain. These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry, and synfuel production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, which appears to rely on Carbon Capture and Utilization technologies as a fundamental element of its accomplishment. In line with the European Commission open-science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
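To illustrate the kind of optimization such scenarios pose, the toy linear program below minimizes supply cost subject to a demand constraint and an emissions cap; all numbers and variable names are invented for the sketch, and this is not TEMOA code or data.

from scipy.optimize import linprog

# Decision variables: energy supplied [PJ] by a fossil route (x0) and a
# hydrogen route (x1). Minimize total cost subject to meeting demand and
# staying under an emissions cap, mimicking a decarbonization target year.
cost = [5.0, 9.0]            # cost per PJ supplied (arbitrary units)
emissions = [[70.0, 5.0]]    # ktCO2 emitted per PJ supplied, per route
cap = [2000.0]               # emissions cap for the target year
demand_row = [[1.0, 1.0]]    # total supply must equal demand
demand = [100.0]             # PJ of useful energy required

res = linprog(c=cost, A_ub=emissions, b_ub=cap,
              A_eq=demand_row, b_eq=demand,
              bounds=[(0.0, None), (0.0, None)])
print(res.x)  # optimal fossil vs hydrogen mix, here ~[23.1, 76.9]

Tightening the cap in successive target years (2030, 2040, 2050) forces the optimum toward the low-emission route, which is the mechanism by which the decarbonization-only scenario still deploys hydrogen.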
Procedia PDF Downloads 73350 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity
Authors: Yuri Laevsky, Tatyana Nosova
Abstract:
The phenomenon of filtration gas combustion (FGC) was discovered experimentally at the beginning of the 1980s. It has a number of important applications in such areas as chemical technologies, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and by the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed using the two-temperature approach, in which at each point of the continuous medium both the solid and gas phases are specified, with Newtonian heat exchange between them. The construction of the computational scheme is based on the principles of the mixed finite element method with a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Direct computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term ‘front propagation velocity’ makes sense for settled motion, when analytical formulae linking the velocity and the equilibrium temperature hold. A numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process. In this way, the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of this interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave makes it possible to consider a semi-Lagrangian approach to the solution of the problem.Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation
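For concreteness, a common form of such a two-temperature FGC model is sketched below; the notation, and in particular the placement of the heat-release term in the gas-phase equation, is our assumption, since formulations vary in the literature:

\[
(1-\phi)\,\rho_s c_s \frac{\partial T_s}{\partial t} = \nabla\cdot(\lambda_s \nabla T_s) + \alpha\,(T_g - T_s),
\]
\[
\phi\,\rho_g c_g \left( \frac{\partial T_g}{\partial t} + \mathbf{u}\cdot\nabla T_g \right) = \nabla\cdot(\lambda_g \nabla T_g) - \alpha\,(T_g - T_s) + Q\,W,
\]
\[
\phi \left( \frac{\partial C}{\partial t} + \mathbf{u}\cdot\nabla C \right) = \nabla\cdot(D \nabla C) - W, \qquad W = k\,C\,e^{-E/(R T_g)},
\]

where \(T_s\) and \(T_g\) are the solid and gas temperatures, \(C\) is the relative reactant concentration, \(\phi\) the porosity, \(\alpha\) the interfacial (Newtonian) heat transfer coefficient whose critical interval governs the slow-to-rapid breakdown discussed above, and \(W\) the Arrhenius reaction rate.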
Procedia PDF Downloads 302349 Applications of Digital Tools, Satellite Images and Geographic Information Systems in Data Collection of Greenhouses in Guatemala
Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.
Abstract:
During the last 20 years, the globalization of economies, population growth, and the increase in the consumption of fresh agricultural products have generated greater demand for ornamentals, flowers, fresh fruits, and vegetables, mainly from tropical areas. This market situation has demanded greater competitiveness and control over production, with more efficient protected agriculture technologies, which provide greater productivity and guarantee the required quality and quantity in a constant and sustainable way. Guatemala, located in the north of Central America, is one of the largest exporters of agricultural products in the region and exports fresh vegetables, flowers, fruits, ornamental plants, and foliage, most of which are grown in greenhouses. Although there are no official agricultural statistics on greenhouse production, several theses and congress reports have presented consistent estimates. A wide range of protection structures and roofing materials are used, from the most basic and simple ones for rain control to highly technical and automated structures connected with remote sensors for crop monitoring and control. Given this breadth of technological models, it is necessary to analyze georeferenced data related to the cultivated area, the different existing models, and the covering materials, integrated with altitude, climate, and soil data. The georeferenced registration of production units, data collection with digital tools, the use of satellite images, and geographic information systems (GIS) provide reliable means of producing more complete, agile, and dynamic information maps. This study details a proposed methodology for gathering georeferenced data on high protection structures (greenhouses) in Guatemala, structured in four phases: diagnosis of available information, definition of the geographic frame, selection of satellite images, and integration with a geographic information system (GIS). It especially takes into account the current lack of complete data, a gap that must be closed to obtain a reliable decision-making system and that the proposed methodology addresses. A summary of the results is presented for each phase, and finally, an evaluation with some improvements and tentative recommendations for further research is added. The main contribution of this study is to propose a methodology that reduces the gap in georeferenced data on protected agriculture in this specific area, where data are not generally available, and to provide data of better quality, traceability, accuracy, and certainty for strategic agricultural decision making, applicable to other crops, production models, and similar or neighboring geographic areas.Keywords: greenhouses, protected agriculture, GIS, Guatemala, satellite image, digital tools, precision agriculture
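As an illustration of the GIS-integration phase, the following Python sketch joins greenhouse polygons digitized from satellite images with administrative boundaries and totals the covered area per municipality; the file names, attribute fields, and choice of UTM zone are hypothetical, not taken from the study.

import geopandas as gpd

# Load greenhouse polygons digitized from satellite imagery and
# municipal boundaries (hypothetical file names).
greenhouses = gpd.read_file("greenhouses_digitized.shp")
municipios = gpd.read_file("municipios.shp")

# Reproject to a metric CRS for area computation; UTM zones 15N/16N
# cover Guatemala, and 15N is used here for illustration.
greenhouses = greenhouses.to_crs(epsg=32615)
municipios = municipios.to_crs(epsg=32615)
greenhouses["area_ha"] = greenhouses.geometry.area / 10_000.0

# Attribute each greenhouse to the municipality containing it and
# summarize the greenhouse-covered area per municipality.
joined = gpd.sjoin(greenhouses, municipios, how="left", predicate="within")
print(joined.groupby("muni_name")["area_ha"].sum())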
Procedia PDF Downloads 194348 Primary-Color Emitting Photon Energy Storage Nanophosphors for Developing High Contrast Latent Fingerprints
Authors: G. Swati, D. Haranath
Abstract:
Commercially available long-afterglow/persistent phosphors are proprietary materials, and hence the exact composition and phase responsible for their luminescent characteristics, such as initial intensity and afterglow luminescence time, are not known. Further, to generate various emission colors, commercially available persistent phosphors are physically blended with fluorescent organic dyes such as rhodamine, kiton, and methylene blue. Blending phosphors with organic dyes results in complete color coverage of the visible spectrum; however, with time, such phosphors undergo thermal and photo-bleaching, resulting in the loss of their true emission color. Hence, the current work is dedicated to studies on inorganic, thermally and chemically stable, primary-color-emitting nanophosphors, namely SrAl2O4:Eu2+, Dy3+, (CaZn)TiO3:Pr3+, and Sr2MgSi2O7:Eu2+, Dy3+. The SrAl2O4:Eu2+, Dy3+ phosphor exhibits strong excitation in the UV and visible region (280-470 nm) with a broad emission peak centered at 514 nm, the characteristic emission of the parity-allowed 4f65d1→4f7 transition of Eu2+ (8S7/2→2D5/2). The sunlight-excitable Sr2MgSi2O7:Eu2+, Dy3+ nanophosphor emits blue (464 nm) with Commission Internationale de l’Eclairage (CIE) coordinates of (0.15, 0.13), a color purity of 74%, and an afterglow time of > 5 hours for dark-adapted human eyes. The (CaZn)TiO3:Pr3+ phosphor system possesses high color purity (98%) and emits an intense, stable, and narrow red emission at 612 nm due to intra-4f transitions (1D2 → 3H4), with an afterglow time of 0.5 hour. The unusual persistent-luminescence property of these nanophosphors supersedes background effects without losing sensitive information. These nanophosphors offer the advantages of visible-light excitation, negligible substrate interference, high-contrast bifurcation of the ridge pattern, and a non-toxic nature, revealing the ridge details of fingerprints. Both level 1 and level 2 features of a fingerprint can be studied, which are useful for classification, indexing, comparison, and personal identification. A facile methodology to extract high-contrast fingerprints on non-porous and porous substrates using a chemically inert, visible-light-excitable, nanosized phosphorescent label in the dark is presented. The chemistry of the non-covalent physisorption interaction between the long-afterglow phosphor powder and the sweat residue in fingerprints is discussed in detail. Real-time fingerprint development on porous and non-porous substrates has also been performed. To conclude, apart from conventional dark-vision applications, the as-prepared primary-color-emitting afterglow phosphors are potential candidates for developing high-contrast latent fingerprints.Keywords: fingerprints, luminescence, persistent phosphors, rare earth
Procedia PDF Downloads 221347 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials
Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar
Abstract:
Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention from the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure, or both, are locally varied so that a certain variation of the local material properties is achieved. This gradual change in composition and microstructure is tailored to obtain a gradient of properties and performance. FGMs are synthesized in such a way that they possess continuous spatial variations in the volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more directions. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications, such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge- and centre-crack problems have been solved, additionally including holes, inclusions, and minor cracks under plane-stress conditions. Both soft and hard inclusions have been implemented in the problems. The elastic-plastic treatment is needed because the validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material of length 100 mm and height 200 mm, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered in the problem. Exponential gradation of the properties is imparted in the x-direction. A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y-direction. In some problems, the domain contains a major crack along with minor cracks and/or holes and/or inclusions; these discontinuities are added either singly or in combination with each other. The major crack is located at the centre of the left edge or at the centre of the domain. On the basis of this study, it is found that minor cracks have the least effect on the domain’s failure crack length, soft inclusions have a moderate effect, and holes have the greatest effect. It is also observed that crack growth before failure is greater in each case when hard inclusions are present in place of soft inclusions.Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)
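For reference, the constitutive law and gradation used in such analyses take the following standard forms; the symbols are our notation, since the paper’s specific constants are not given in the abstract. The Ramberg-Osgood relation for total strain and an exponential variation of a material property P (e.g., Young’s modulus) across the plate width L read:

\[
\epsilon = \frac{\sigma}{E} + \alpha \frac{\sigma_0}{E} \left( \frac{\sigma}{\sigma_0} \right)^{n}, \qquad
P(x) = P_1 \, e^{\beta x}, \quad \beta = \frac{1}{L} \ln\!\frac{P_2}{P_1},
\]

so that P(0) = P1 matches the copper-nickel alloy at the left edge and P(L) = P2 matches alumina at the right edge.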
Procedia PDF Downloads 389346 Preparation and Characterization of Poly(L-Lactic Acid)/Oligo(D-Lactic Acid) Grafted Cellulose Composites
Authors: Md. Hafezur Rahaman, Mohd. Maniruzzaman, Md. Shadiqul Islam, Md. Masud Rana
Abstract:
With the growth of environmental awareness, extensive research is underway to develop next-generation materials based on sustainability, eco-competence, and green chemistry to preserve and protect the environment. Due to its biodegradability and biocompatibility, poly(L-lactic acid) (PLLA) has attracted great interest for ecological and medical applications. Cellulose is likewise one of the most abundant biodegradable, renewable polymers found in nature, with several advantages such as low cost, high mechanical strength, and biodegradability. Recently, a great deal of attention has been paid to the scientific and technological development of α-cellulose-based composite materials. PLLA could be used for grafting onto cellulose to improve compatibility prior to composite preparation; however, it is quite difficult to form a bond between a weakly hydrophilic polymer like PLLA and α-cellulose. Dimers and oligomers, owing to their low molecular weight, can easily be grafted onto the surface of cellulose by ring-opening or polycondensation methods. In this research, α-cellulose extracted from jute fiber is grafted with oligo(D-lactic acid) (ODLA) via a graft polycondensation reaction in the presence of para-toluenesulphonic acid and potassium persulphate in toluene at 130°C for 9 hours under 380 mmHg. The ODLA is synthesized by ring-opening polymerization of D-lactide in the presence of stannous octoate (0.03 wt% of lactide) and D-lactic acid at 140°C for 10 hours. Composites of PLLA with ODLA-grafted α-cellulose are prepared by solution mixing and film casting. Grafting was confirmed by FTIR spectroscopy and SEM analysis: a strong carbonyl peak at 1728 cm⁻¹ in the FTIR spectrum of ODLA-grafted α-cellulose, absent in neat α-cellulose, confirms the grafting of ODLA, and SEM photographs show white areas (spots) on ODLA-grafted α-cellulose compared with α-cellulose, which may also indicate grafting, consistent with the FTIR results. Analysis of the composites was carried out by FTIR, SEM, WAXD, and thermogravimetric analysis. Most of the characteristic FTIR absorption peaks of the composites shifted to higher wavenumbers with increasing peak area, which may confirm that PLLA and grafted cellulose achieve better compatibility in the composites via intermolecular hydrogen bonding; this supports previously published results. SEM analysis shows that the grafted α-cellulose is uniformly distributed in the composites. WAXD studies show that only the homo-crystalline structure of PLLA is present in the composites. The thermal stability of the composites is enhanced with increasing percentage of ODLA-grafted α-cellulose; as a consequence, the resultant composites are resistant to thermal degradation. The effects of the grafted chain length and the biodegradability of the composites will be studied in further research.Keywords: α-cellulose, composite, graft polycondensation, oligo(D-lactic acid), poly(L-lactic acid)
Procedia PDF Downloads 117345 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces
Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur
Abstract:
In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchange between soil, plant cover, and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. On the other hand, the aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface’s ability to absorb the momentum of the airflow. In numerous applications in surface hydrology and meteorology, the aerodynamic roughness length is an important parameter for estimating momentum, heat, and mass exchange between the soil surface and the atmosphere. In this respect, it is important to consider the impact of atmospheric factors in general, and natural erosion in particular, on soil evolution, its characterization, and the prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or continuous plant cover, is motivated by significant research efforts in agronomy and biology; the major known problem in this area concerns crop damage by wind, a rapidly growing field of research. Most models of the soil surface naturally require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multilayer aspect of soil-surface humidity to take a volume component into account in the radar backscattering problem. As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation, due to the vegetation’s backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the soil surface layer. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of both layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology relies on a microwave/optical model that has been used to calculate the scattering behavior of the aerodynamic, vegetation-covered area by defining the scattering of the vegetation and of the soil below.Keywords: aerodynamic, bi-dimensional, vegetation, synergistic
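The scaling role of the aerodynamic roughness length z0 mentioned above is captured by the standard logarithmic wind profile for neutral atmospheric stratification (a textbook relation, not specific to this paper):

\[
u(z) = \frac{u_*}{\kappa} \ln\!\frac{z - d}{z_0},
\]

where u(z) is the horizontal wind speed at height z, u* the friction velocity, κ ≈ 0.4 the von Karman constant, d the displacement height of the canopy, and z0 the aerodynamic roughness length; a larger z0 (rougher, more vegetated surface) implies stronger momentum absorption for the same wind speed aloft.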
Procedia PDF Downloads 269344 Performance Improvement of Piston Engine in Aeronautics by Means of Additive Manufacturing Technologies
Authors: G. Andreutti, G. Saccone, D. Lucariello, C. Pirozzi, S. Franchitti, R. Borrelli, C. Toscano, P. Caso, G. Ferraro, C. Pascarella
Abstract:
The reduction of greenhouse gas and pollutant emissions is a worldwide environmental issue. The amount of CO₂ released by an aircraft is tied to the amount of fuel burned, so improving engine thermo-mechanical efficiency and specific fuel consumption is a significant technological driver for aviation. Moreover, with the prospect that avgas will be phased out, an engine able to use more widely available and cheaper fuels is an evident advantage. An advanced aeronautical Diesel engine, because of its high efficiency and ability to use widely available and low-cost jet and diesel fuels, is a promising solution for achieving a more fuel-efficient aircraft. On the other hand, a Diesel engine generally has a higher overall weight than a gasoline engine of the same power. Fixing the MTOW (Maximum Take-Off Weight) and the operational payload, this extra weight reduces the aircraft fuel fraction, partially offsetting the associated benefits; an effort in weight-saving manufacturing technologies is therefore desirable. In this work, in order to achieve the mentioned goals, innovative Electron Beam Melting (EBM) Additive Manufacturing (AM) technologies were applied to a two-stroke, common-rail GF56 Diesel engine developed by the CMD Company for aeronautic applications. For this purpose, a consortium of academic, research, and industrial partners, including the CMD Company, the Italian Aerospace Research Centre (CIRA), the University of Naples Federico II, and the University of Salerno, carried out a technological project funded by the Italian Ministry of Education and Research (MIUR). The project aimed to optimize the baseline engine in order to improve its performance and increase its airworthiness, and it focused on the definition, design, development, and application of enabling technologies for the performance improvement of the GF56. Weight saving was pursued through the application of EBM-AM technologies, in particular using the Arcam AB A2X machine available at CIRA. This 3D printer processes titanium-alloy micro-powders and was employed to realize new connecting rods for the GF56 engine with an additive-oriented design approach. After a preliminary investigation of the EBM process parameters and a thermo-mechanical characterization of titanium-alloy samples, innovative, additively manufactured connecting rods were fabricated. These engine elements were structurally verified, topologically optimized, 3D printed, and suitably post-processed. Finally, the overall performance improvement on a typical General Aviation aircraft was estimated, substituting the conventional engine with the optimized GF56 propulsion system.Keywords: aeronautic propulsion, additive manufacturing, performance improvement, weight saving, piston engine
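The weight argument above can be made explicit with the take-off weight budget (a simple identity, with symbols of our choosing):

\[
m_{\mathrm{fuel}} = \mathrm{MTOW} - m_{\mathrm{empty}} - m_{\mathrm{payload}},
\]

so, at fixed MTOW and payload, any extra engine mass enters m_empty and subtracts one-for-one from the fuel mass, reducing the fuel fraction m_fuel/MTOW and hence range; this is what makes weight-saving manufacturing of the heavier Diesel engine worthwhile.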
Procedia PDF Downloads 144343 Initial Resistance Training Status Influences Upper Body Strength and Power Development
Authors: Stacey Herzog, Mitchell McCleary, Istvan Kovacs
Abstract:
Purpose: Maximal strength and maximal power are key athletic abilities in many sports disciplines. In recent years, velocity-based training (VBT) with relatively high resistance (75-85% 1RM) has been popularized in preparation for powerlifting and various other sports. The purpose of this study was to discover differences between beginner/intermediate and advanced lifters’ push/press performance after a heavy-resistance bench press (BP) training program. Methods: A six-week, three-workouts-per-week program was administered to 52 young, physically active adults (age: 22.4±5.1; 12 female). The majority of the participants (84.6%) had prior experience in bench pressing. Typical workouts began with BP using 75-95% 1RM in the 1-5 repetition range; sets in the lower part of the range (75-80% 1RM) were performed with a velocity focus as well. The BP sets were followed by seated dumbbell presses and six additional upper-body assistance exercises. Pre- and post-tests were conducted on five test exercises: one-repetition maximum BP (1RM), calculated relative strength index BP/BW (RSI), four-repetition maximal-effort dynamic BP for peak concentric velocity with 80% 1RM (4RV), four-repetition ballistic push-ups for height (BPU), and seated medicine ball toss for distance (MBT). For analytic purposes, the participant group was divided into two subgroups: self-indicated beginner or intermediate initial resistance training status (BITS) [n=21, age: 21.9±3.6; 10 female] and advanced initial resistance training status (ATS) [n=31, age: 22.7±5.9; 2 female]. Pre- and post-test results were compared within subgroups. Results: Paired-sample t-tests indicated significant within-group improvements in all five test exercises in both groups (p < 0.05). BITS improved 18.1 lbs. (13.0%) in 1RM, 0.099 (12.8%) in RSI, 0.133 m/s (23.3%) in 4RV, 1.55 in. (27.1%) in BPU, and 1.00 ft. (5.8%) in MBT, while the ATS group improved 13.2 lbs. (5.7%) in 1RM, 0.071 (5.8%) in RSI, 0.051 m/s (9.1%) in 4RV, 1.20 in. (13.7%) in BPU, and 1.15 ft. (5.5%) in MBT. Conclusion: While the two training groups had different initial resistance training backgrounds, both showed significant improvements in all test exercises. As expected, the beginner/intermediate group displayed greater relative improvements in four of the five test exercises. However, the medicine ball toss, which had the lightest resistance among the tests, showed similar relative improvements in the two groups. These findings relate to two important training principles: specificity and transfer. The ATS group had more specific experience with heavy-resistance BP; therefore, smaller improvements were detected in their test performance with heavy resistance. On the other hand, where the heavy-resistance training transferred to power outcomes in the light-resistance exercise, the difference in the rate of improvement between the two groups disappeared. Practical applications: Based on initial training status, S&C coaches should expect different performance gains in maximal-strength-specific test exercises. However, the transfer from maximal strength to a non-training-specific performance category along the F-v curve continuum (i.e., light resistance and high velocity) might not depend on initial training status.Keywords: exercise, power, resistance training, strength
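The within-group comparison described above follows the usual paired pre/post design; a minimal Python sketch of that analysis is given below. The scores are fabricated placeholders, since the abstract reports only group-level summaries.

import numpy as np
from scipy import stats

# Placeholder pre/post 1RM bench press scores (lbs) for one subgroup.
pre = np.array([135.0, 150.0, 140.0, 165.0, 155.0])
post = np.array([152.0, 168.0, 158.0, 180.0, 176.0])

t_stat, p_value = stats.ttest_rel(post, pre)  # paired-sample t-test
gain_pct = 100.0 * (post.mean() - pre.mean()) / pre.mean()
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean gain = {gain_pct:.1f}%")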
Procedia PDF Downloads 71342 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States involving motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicate that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections), the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate application of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians occur at locations that should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require the costly deployment of engineering surveys and analysis, which limits the capacity of resource-constrained administrations to adequately satisfy their community’s needs for safe roadways, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, roadway design speeds, topographic details, and immovable structures. The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce accident risks. It uses 2-dimensional data representing multi-modal street networks, parcels, crosswalks, and demographic information, alongside 3-dimensional models of buildings and elevation, slope, and aspect surfaces, to evaluate visibility and lighting conditions and to estimate the probabilities of jaywalking and the risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities that conform to similar transportation standards, given the availability of relevant GIS data.Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
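To illustrate how such heterogeneous GIS layers can be fused per intersection, the following Python sketch computes a toy composite risk score; the weighting scheme, thresholds, and function names are our illustrative assumptions, not the paper’s model.

def intersection_risk_score(has_signal, has_crosswalk, sight_distance_m,
                            design_speed_kmh, pedestrians_per_day):
    # Each term adds risk for a factor named in the methodology:
    # missing controls, missing crosswalks, limited visibility, speed.
    score = 0.0
    score += 0.0 if has_signal else 2.0
    score += 0.0 if has_crosswalk else 1.5
    score += max(0.0, (90.0 - sight_distance_m) / 30.0)   # short sight lines
    score += max(0.0, (design_speed_kmh - 40.0) / 20.0)   # speed exposure
    # Scale by pedestrian exposure, capped so volume cannot dominate.
    return score * (1.0 + min(pedestrians_per_day / 1000.0, 1.0))

# Example: an uncontrolled intersection without a crosswalk, poor sight
# lines, a 60 km/h design speed, and moderate pedestrian traffic.
print(intersection_risk_score(False, False, 45.0, 60.0, 800))

Ranking intersections by such a score would let a resource-constrained administration prioritize field surveys instead of auditing every location.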
Procedia PDF Downloads 109