Search results for: CNTs-silicon hybrid devices

383 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt the recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is prioritization of bridge-repair tasks. Resilience is widely used as a measure of a network's ability to recover, i.e., to return to its pre-disaster level of functionality. In practice, highways will be temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. The failure to take downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important for precisely quantifying highway network resilience and generating suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve contributes to comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Second, this study can improve highway network resilience along the organizational dimension by providing bridge managers with optimal LBR strategies.
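
As a minimal illustration of the downtime argument above (not the authors' integer program), the sketch below computes resilience as the normalized area under a hypothetical functionality curve, once ignoring repair downtime and once with temporary functionality drops during bridge repairs; all profile values are invented for illustration.

```python
# Minimal sketch (not the paper's model): resilience as the normalized area under a
# highway-network functionality curve Q(t) over the recovery horizon [0, T].
# Ignoring the temporary functionality drops caused by bridge-repair downtime
# inflates this area, i.e., overestimates resilience.

def resilience(functionality, dt=1.0):
    """Discrete approximation of (1/T) * integral of Q(t) dt, with Q in [0, 1]."""
    total_time = dt * len(functionality)
    return sum(q * dt for q in functionality) / total_time

# Hypothetical functionality profiles over a 10-week horizon (weekly samples).
q_no_downtime = [0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9, 1.0, 1.0, 1.0]
# Same schedule, but highways are temporarily blocked while bridges are under repair.
q_with_downtime = [0.6, 0.5, 0.6, 0.7, 0.6, 0.8, 0.9, 1.0, 1.0, 1.0]

r_ignore = resilience(q_no_downtime)
r_actual = resilience(q_with_downtime)
print(f"resilience ignoring downtime: {r_ignore:.3f}")
print(f"resilience with downtime:     {r_actual:.3f}")
print(f"overestimation: {100 * (r_ignore - r_actual) / r_actual:.1f}%")
```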

Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime

Procedia PDF Downloads 126
382 Development of mHealth Information in Community Based on Geographical Information: A Case Study from Saraphi District, Chiang Mai, Thailand

Authors: Waraporn Boonchieng, Ekkarat Boonchieng, Wilawan Senaratana, Jaras Singkaew

Abstract:

Geographical information system (GIS) is a designated system widely used for collecting and analyzing geographical data. Since the introduction of ultra-mobile, 'smart' devices, investigators, clinicians, and even the general public have had powerful new tools for collecting, uploading and accessing information in the field. Epidemiology paired with GIS will increase the efficacy of preventive health care services. The objective of this study is to combine the GPS location services available on common mobile devices with district health systems, storing data on a private cloud system. The mobile application has been developed for use on iOS, Android, and web-based platforms. The system consists of two parts of district health information, including recorded resident data forms and individual health recorded data forms, which were developed and approved through opinion sharing and public hearings. The application's graphical user interface was developed using HTML5 and PHP with MySQL as the database management system (DBMS). The reporting module of the developed software displays data in a variety of views, from traditional tables to various types of high-resolution, layered graphics, incorporating map location information with street views from Google Maps. Multi-format exporting is also supported, utilizing standard formats such as PDF, PNG, JPG, and XLS. The data were collected in the database beginning in March 2013, by district health volunteers and district youth volunteers who had completed the application training program. District health information consisted of patients’ household coordinates, individual health data, and social and economic information. This was combined with Google Street View data, collected in March 2014. The studied groups consisted of 16,085 of the total 23,701 households (67.87%) and 47,811 of the 79,855 people (59.87%) collected by the system in Saraphi district, Chiang Mai Province. The reports generated from the system have been of major benefit directly to the Saraphi District Hospital. Healthcare providers are able to use the basic health data to provide specific home health care services and also to create health promotion activities according to the medical needs of the people in the community.
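
As an illustrative sketch only (not the deployed system), the snippet below shows how a geo-referenced household record of the kind described above might be stored and queried; SQLite stands in for the MySQL DBMS, and all table and column names are hypothetical.

```python
# Illustrative sketch only: storing and querying a geo-referenced household health
# record in the spirit of the system described above. SQLite stands in for the MySQL
# DBMS, and all table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE household (
        id INTEGER PRIMARY KEY,
        latitude REAL NOT NULL,      -- from the device's GPS location service
        longitude REAL NOT NULL,
        members INTEGER,
        notes TEXT
    )
""")

# A record as it might be uploaded by a district health volunteer's mobile device
# (coordinates are illustrative values near Saraphi district).
conn.execute(
    "INSERT INTO household (latitude, longitude, members, notes) VALUES (?, ?, ?, ?)",
    (18.7169, 99.0135, 4, "follow-up visit scheduled"),
)

# A simple bounding-box query, the kind of spatial filter a reporting module might use
# before overlaying the points on a Google Maps layer.
rows = conn.execute(
    "SELECT id, latitude, longitude FROM household "
    "WHERE latitude BETWEEN ? AND ? AND longitude BETWEEN ? AND ?",
    (18.6, 18.8, 98.9, 99.1),
).fetchall()
print(rows)
```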

Keywords: health, public health, GIS, geographic information system

Procedia PDF Downloads 308
381 Changing Behaviour in the Digital Era: A Concrete Use Case from the Domain of Health

Authors: Francesca Spagnoli, Shenja van der Graaf, Pieter Ballon

Abstract:

Humans do not behave rationally. We are emotional, easily influenced by others, as well as by our context. The study of human behaviour has become a central endeavour within many academic disciplines, including economics, sociology, and clinical and social psychology. Understanding what motivates humans and triggers them to perform certain activities, and what it takes to change their behaviour, is central for researchers and companies, as well as for policy makers seeking to implement efficient public policies. While numerous theoretical approaches have been developed for diverse domains such as health, retail, and the environment, the methodological models guiding the evaluation of such research have long reached their limits. Within this context, digitisation, information and communication technologies (ICT) and wearables, the Internet of Things (IoT) connecting networks of devices, and new possibilities to collect and analyse massive amounts of data have made it possible to study behaviour from a realistic perspective, as never before. Digital technologies make it possible to (1) capture data in real-life settings, (2) regain control over data by capturing the context of behaviour, and (3) analyse huge sets of information through continuous measurement. Within this complex context, this paper describes a new framework for initiating behavioural change, capitalising on the digital developments in applied research projects and applicable to academia, enterprises and policy makers alike. By applying this model, behavioural research can be conducted to address the issues of different domains, such as mobility, environment, health or media. The Modular Behavioural Analysis Approach (MBAA) is described here and first validated through a concrete use case within the domain of health. The results gathered have proven that disclosing information about health in connection with the use of digital apps for health can be a lever for changing behaviour, but it is only a first component and requires further follow-up actions. To this end, a clear definition of different 'behavioural profiles', towards which several types of intervention can be targeted, is essential to effectively enable behavioural change. The refined version of the MBAA will place a strong focus on defining a methodology for shaping 'behavioural profiles' and related interventions, as well as on evaluating side-effects on the creation of new business models and sustainability plans.

Keywords: behavioural change, framework, health, nudging, sustainability

Procedia PDF Downloads 202
380 Monitoring Memories by Using Brain Imaging

Authors: Deniz Erçelen, Özlem Selcuk Bozkurt

Abstract:

The course of daily human life calls for the need for memories and remembering the time and place of certain events. Recalling memories takes up a substantial amount of time for an individual. Unfortunately, scientists lack the proper technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories as well as allows the brain to retrieve them by ensuring that neurons fire together. This process is called “neural synchronization.” Sadly, the hippocampus is known to often deteriorate with age. Proteins and hormones, which repair and protect cells in the brain, typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off as mild but may evolve into serious medical conditions such as dementia and Alzheimer’s disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology that are used to examine the brain and neural pathways. For instance, Magnetic Resonance Imaging, or MRI, is used to collect detailed images of an individual's brain anatomy. In order to monitor and analyze brain functions, a different version of this machine, called Functional Magnetic Resonance Imaging, or fMRI, is used. The fMRI is a neuroimaging procedure that is conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity. Neurons need more oxygen when they are active. The fMRI measures the change in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography, or EEG, is also a significant way to monitor the human brain. The EEG is more versatile and cost-efficient than an fMRI. An EEG measures electrical activity generated by the numerous cortical layers of the brain. EEG allows scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution. This quality makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, which have made the inspection of neurons and neural pathways more intensive and detailed.

Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons

Procedia PDF Downloads 66
379 Use of a Novel Intermittent Compression Shoe in Reducing Lower Limb Venous Stasis

Authors: Hansraj Riteesh Bookun, Cassandra Monique Hidajat

Abstract:

This pilot study investigated the efficacy of a newly designed shoe which acts as an intermittent pneumatic compression device to augment venous flow in the lower limb. The aim was to assess the degree to which a wearable intermittent compression device can increase the venous flow in the popliteal vein. Background: Deep venous thrombosis and chronic venous insufficiency are relatively common problems with significant morbidity and mortality. While mechanical and chemical thromboprophylaxis measures are in place in hospital environments (in the form of TED stockings, intermittent pneumatic compression devices, analgesia, antiplatelet and anticoagulant agents), there are limited options in a community setting. Additionally, many individuals are poorly tolerant of graduated compression stockings due to the difficulty in putting them on, their constant tightness and increased associated discomfort in warm weather. These factors may hinder the management of their chronic venous insufficiency. Method: The device is lightweight, easy to wear and comfortable, with a self-contained power source. It features a Bluetooth transmitter and can be controlled with a smartphone. It is externally almost indistinguishable from a normal shoe. During activation, two bladders are inflated - one overlying the metatarsal heads and the second at the pedal arch. The resulting cyclical increase in pressure squeezes blood into the deep venous system. This will decrease periods of stasis and potentially reduce the risk of deep venous thrombosis. The shoe was fitted to two healthy participants, and the peak systolic velocity of flow in the popliteal vein was measured during and prior to intermittent compression phases. Assessments of total flow volume were also performed. All haemodynamic assessments were performed with ultrasound by a licensed sonographer. Results: A mean peak systolic velocity of 3.5 cm/s with a standard deviation of 1.3 cm/s was obtained. There was a threefold increase in mean peak systolic velocity and a fivefold increase in total flow volume. Conclusion: The device augments venous flow in the leg significantly. This may contribute to lowered thromboembolic risk during periods of prolonged travel or immobility. This device may also serve as an adjunct in the treatment of chronic venous insufficiency. The study will be replicated on a larger scale in a multi-centre trial.

Keywords: venous, intermittent compression, shoe, wearable device

Procedia PDF Downloads 172
378 Modulating Photoelectrochemical Water-Splitting Activity by Charge-Storage Capacity of Electrocatalysts

Authors: Yawen Dai, Ping Cheng, Jian Ru Gong

Abstract:

Photoelectrochemical (PEC) water splitting using semiconductors (SCs) provides a convenient way to convert sustainable but intermittent solar energy into clean hydrogen energy, and it has been regarded as one of the most promising technologies to address the energy crisis and environmental pollution of modern society. However, the record energy conversion efficiency of a PEC cell (~3%) is still far lower than the commercialization requirement (~10%). The sluggish kinetics of the oxygen evolution reaction (OER) half-reaction on photoanodes is a significant limiting factor of the PEC device efficiency, and electrocatalysts (ECs) are always deposited on SCs to accelerate hole injection for the OER. However, an active EC cannot guarantee enhanced PEC performance, since the newly emerged SC-EC interface complicates the interfacial charge behavior. Herein, α-Fe₂O₃ photoanodes coated with Co₃O₄ and CoO ECs are taken as the model system to glean fundamental understanding of the EC-dependent interfacial charge behavior. Intensity-modulated photocurrent spectroscopy and electrochemical impedance spectroscopy were used to investigate the competition between interfacial charge transfer and recombination, which was found to be dominated by the charge storage capacities of the ECs. The combined results indicate that both ECs can store holes and increase the hole density on the photoanode surface. This is a double-edged sword: it benefits the multi-hole OER but also aggravates SC-EC interfacial charge recombination due to Coulomb attraction, thus leading to a nonmonotonic variation of PEC performance with increasing surface hole density. Co₃O₄ has a low hole storage capacity, which brings limited interfacial charge recombination, and thus the increased surface holes can be efficiently utilized for the OER to generate an enhanced photocurrent. In contrast, CoO has an overly large hole storage capacity that causes severe interfacial charge recombination, which hinders hole transfer to the electrolyte for the OER. Therefore, the PEC performance of α-Fe₂O₃ is improved by Co₃O₄ but decreased by CoO despite the similar electrocatalytic activity of the two ECs. First-principles calculations were conducted to further reveal how the charge storage capacity depends on the EC’s intrinsic properties, demonstrating that the larger hole storage capacity of CoO compared with Co₃O₄ is determined by their Co valence states and original Fermi levels. This study puts forward a new strategy to manipulate interfacial charge behavior and the resultant PEC performance through the charge storage capacity of ECs, providing insightful guidance for interface design in PEC devices.
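
A toy calculation, not the authors' model, can illustrate the nonmonotonic trend described above: if the photocurrent scales with the surface hole density times the charge-transfer efficiency k_tr/(k_tr + k_rec), and the recombination rate grows superlinearly with the stored holes, the product first rises and then falls. All rate constants below are invented.

```python
# Toy illustration (not the authors' model): the photocurrent scales with the surface
# hole density p_s times the charge-transfer efficiency k_tr / (k_tr + k_rec). If the
# interfacial recombination rate k_rec grows faster than linearly with the stored hole
# density (Coulomb attraction of electrons), the product becomes nonmonotonic in p_s.
# All rate constants below are invented for illustration.
k_tr = 1.0                      # hole-transfer rate constant to the electrolyte (a.u.)

def photocurrent(p_s):
    k_rec = 0.05 * p_s**2       # assumed recombination rate, increasing with stored holes
    transfer_efficiency = k_tr / (k_tr + k_rec)
    return p_s * transfer_efficiency

for p_s in [1, 2, 4, 6, 8, 12]:
    print(f"surface hole density {p_s:>2} -> relative photocurrent {photocurrent(p_s):.2f}")
# The printed values rise and then fall, mirroring the Co3O4 (moderate hole storage,
# improved performance) versus CoO (overlarge hole storage, degraded performance) trend.
```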

Keywords: charge storage capacity, electrocatalyst, interfacial charge behavior, photoelectrochemistry, water-splitting

Procedia PDF Downloads 118
377 Language in Court: Ideology, Power and Cognition

Authors: Mehdi Damaliamiri

Abstract:

Undoubtedly, the power of language is hardly a new topic; indeed, the persuasive power of language accompanied by ideology has long been recognized in different aspects of life. The two-and-a-half-thousand-year-old Bisitun inscriptions in Iran, proclaiming the victories of the Persian King, Darius, are considered by some historians to have been an early example of the use of propaganda. Added to this, the modern age is the true cradle of fully-fledged ideologies and the ongoing process of centrifugal ideologization. The most visible work on ideology today within the field of linguistics is “Critical Discourse Analysis” (CDA). The focus of CDA is on “uncovering injustice, inequality, taking sides with the powerless and suppressed” and making “mechanisms of manipulation, discrimination, demagogy, and propaganda explicit and transparent.” One possible way of relating language to ideology is to propose that ideology and language are inextricably intertwined. From this perspective, language is always ideological, and ideology depends on language. All language use involves ideology, and so ideology is ubiquitous – in our everyday encounters, as much as in the struggle for power within and between nation-states and social statuses. At the same time, ideology requires language. Its key characteristics – its power and pervasiveness, its mechanisms for continuity and for change – all come out of the inner organization of language. The two phenomena are homologous: they share the same evolutionary trajectory. To get a more robust portrait of power and ideology, we need to examine their potential place in structure and consider how such structures pattern in terms of the functional elements which organize meanings in the clause. This is based on the belief, which has become immensely popular, that all grammatical (including syntactic) knowledge is stored mentally as constructions. When the structure of the clause is taken into account, power and ideology have a preference for Complement over Subject and Adjunct. The Subject is a central interpersonal element in discourse: it is one of two elements that form the central interactive nub of a proposition. Conceptually, there are countless ways of construing a given event, and linguistically, a variety of grammatical devices are usually available as alternative means of coding a given conception, such as political crime and corruption. In the theory of construal, then, which, like transitivity in Halliday, makes options available, Cognitive Linguistics can offer a cognitive account of ideology in language, where ideology is made possible by the choices a language allows for representing the same material situation in different ways. The possibility of promoting alternative construals of the same reality means that any particular choice in representation is always ideologically constrained or motivated and indicates the perspective and interests of the text-producer.

Keywords: power, ideology, court, discourse

Procedia PDF Downloads 143
376 Fuel Cells Not Only for Cars: Technological Development in Railways

Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz

Abstract:

Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel), consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates both in heavy line and shunting locomotives. The classic diesel drive is available for the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy - they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of the drive system of a railway vehicle that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle. This resource is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it oxidizes. The products of this chemical reaction are electricity and water (in two forms: liquid and water vapor). Electricity is stored in batteries (so far, lithium-ion batteries are used). Electricity stored in this way is used to drive the traction motors and supply onboard equipment. The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with unique electrodes. This research is part of a trend that connects industry with science. The first goal will be to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining carbon varieties and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 – M.P. and grant 0911/SBAD/2102 – B.K.
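
A back-of-envelope sketch of the power flow described above is given below; it assumes a simple rule in which the fuel cell covers demand up to its rating and the battery supplies the remainder, and it estimates hydrogen consumption from an assumed stack efficiency and the lower heating value of hydrogen (about 120 MJ/kg). All power figures are illustrative, not the parameters of any actual drivetrain.

```python
# Back-of-envelope sketch of the hydrogen drivetrain described above: the fuel cell
# covers a base load through the main converter, the battery supplies (or absorbs)
# the remainder, and hydrogen consumption follows from an assumed stack efficiency and
# the lower heating value of hydrogen (~120 MJ/kg). All figures are illustrative.
H2_LHV_MJ_PER_KG = 120.0

def hydrogen_flow_kg_per_h(fc_power_kw, stack_efficiency=0.5):
    """Hydrogen mass flow needed to sustain a given electrical output."""
    chemical_power_mj_per_h = (fc_power_kw / stack_efficiency) * 3.6  # kW -> MJ/h
    return chemical_power_mj_per_h / H2_LHV_MJ_PER_KG

def power_split(demand_kw, fc_rated_kw=400.0):
    """Fuel cell up to its rating; battery covers the rest (negative = charging)."""
    fc_kw = min(demand_kw, fc_rated_kw)
    battery_kw = demand_kw - fc_kw
    return fc_kw, battery_kw

for demand in [150.0, 400.0, 650.0]:          # e.g. idling, cruising, accelerating
    fc, batt = power_split(demand)
    print(f"demand {demand:6.1f} kW -> fuel cell {fc:5.1f} kW, battery {batt:6.1f} kW, "
          f"H2 {hydrogen_flow_kg_per_h(fc):.1f} kg/h")
```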

Keywords: railway, hydrogen, fuel cells, hybrid vehicles

Procedia PDF Downloads 164
375 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFlux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps of CO₂ flux, avoiding the limitations of using a single algorithm and therefore reducing the error and uncertainties associated with the gap-filling process. In this study, the data of five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former method, FFNN, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over the XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 being provided by the best FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity compared to Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its components used individually was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis of plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, using ensemble machine learning models is potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement using a single algorithm.
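
A minimal sketch of the two-layer ensemble described above is shown below, assuming scikit-learn and xgboost are available; the network structures, hyperparameters, and random placeholder data are illustrative only, and a real application would use meteorological drivers as inputs and out-of-fold first-layer predictions to avoid leakage.

```python
# Minimal sketch of the two-layer gap-filling ensemble described above: five feedforward
# networks with different structures provide first-layer estimates of CO2 flux, and an
# XGBoost regressor maps those estimates to the final value. Hyperparameters and the
# random placeholder data are assumptions, not the study's actual configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                 # placeholder drivers (e.g. Rg, Ta, VPD, ...)
y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(scale=0.3, size=2000)   # placeholder CO2 flux

X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

structures = [(32,), (64,), (32, 16), (64, 32), (128, 64)]
ffnns = [MLPRegressor(hidden_layer_sizes=s, max_iter=2000, random_state=i)
         for i, s in enumerate(structures)]
for net in ffnns:
    net.fit(X_train, y_train)

# First-layer outputs become the features of the second layer.
Z_train = np.column_stack([net.predict(X_train) for net in ffnns])
Z_test = np.column_stack([net.predict(X_test) for net in ffnns])

stacker = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
stacker.fit(Z_train, y_train)

rmse = float(np.sqrt(np.mean((stacker.predict(Z_test) - y_test) ** 2)))
print(f"ensemble RMSE on held-out data: {rmse:.3f}")
```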

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 115
374 Combustion Characteristics of Ionized Fuels for Battery System Safety

Authors: Hyeuk Ju Ko, Eui Ju Lee

Abstract:

Many electronic devices today are powered by various rechargeable batteries such as lithium-ion, but occasionally the batteries undergo thermal runaway and cause fire, explosion, and other hazards. If a battery fire should occur in an electronic device in a vehicle or aircraft cabin, it is important to quickly extinguish the fire and cool the batteries to minimize safety risks. Attempts to minimize these risks have been carried out by many researchers, but the number of studies on successful extinguishment is limited, because most rechargeable batteries operate in an ionic state with electrons during the charge and discharge of electricity, and the reaction of this electrolyte differs greatly from normal combustion. Here, we focused on the effect of ions on reaction stability and pollutant emissions during the combustion process. Another motivation for understanding ionized fuel combustion can be found in highly efficient and environmentally friendly combustion technologies, which tend to be operated at extreme conditions and hence suffer from unintended flame instabilities such as extinction and oscillation. The use of electromagnetic energy and non-equilibrium plasma is one way to solve these problems, but its application has remained limited because of the lack of understanding of excited-ion effects in the combustion process. Therefore, understanding the role of ions during combustion is of promise to the energy safety community, including battery safety. In this study, the effects of an ionized fuel on flame stability and pollutant emissions were experimentally investigated in hydrocarbon jet diffusion flames. The burner used in this experiment consisted of a 7.5 mm diameter tube for fuel, and the gaseous fuels were ionized with an ionizer (SUNJE, SPN-11). Methane (99.9% purity) and propane (commercial grade) were used as fuels, and open ambient air was used as an oxidizer. The performance of the ionizer used in the experiment was evaluated first: the ion densities of both propane and methane increased linearly with volume flow rate, but the ion density of propane was slightly higher than that of methane. The results show that overall flame stability and shape, such as flame length, exhibit no significant difference even at higher ion concentrations. However, fuel ionization affects pollutant emissions such as NOx and soot. NOx and CO emissions measured in the post-flame region decreased with increasing fuel ionization, especially at high fuel velocity, i.e., high ion density. TGA analysis and the morphology of soot observed by TEM indicate that fuel ionization causes the soot to become more mature.

Keywords: battery fires, ionization, jet flames, stability, NOx and soot

Procedia PDF Downloads 166
373 Corpora in Secondary Schools Training Courses for English as a Foreign Language Teachers

Authors: Francesca Perri

Abstract:

This paper describes a proposal for a teachers’ training course focused on the introduction of corpora into the EFL (English as a foreign language) didactics of some Italian secondary schools. The training course is conceived as part of a TEDD participant’s five-month internship. TEDD (Technologies for Education: diversity and devices) is an advanced course held by the Department of Engineering and Information Technology at the University of Trento, Italy. Its main aim is to train a selected, heterogeneous group of graduates to engage with the complex interdependence between education and technology in modern society. The educational approach draws on the coexistence of various theories such as socio-constructivism, constructionism, project-based learning and connectivism. The TEDD educational model stands as the main reference for the design of a training course for EFL teachers, drawing on the digitalization of didactics and the creation of interactive learning materials for intermediate L2 students. The training course lasts ten hours, organized into five sessions. In the first part (first and second sessions), a series of guided and semi-guided activities leads participants to familiarize themselves with corpora through the use of a digital tools kit. Then, during the second part, participants are specifically involved in the realization of a ML (Mistakes Laboratory), where they create, develop and share digital activities based on corpora according to their teaching goals, supported by the digital facilitator. The training course takes place in an ICT laboratory where the teachers work either individually or in pairs, with a computer connected to wi-fi, while the digital facilitator shares inputs, materials and digital assistance simultaneously on a whiteboard and on a digital platform where participants interact and work together both synchronously and asynchronously. The adoption of good ICT practices is a fundamental step in promoting the introduction and use of corpus linguistics in EFL teaching and learning processes: dealing with corpora not only promotes L2 learners’ critical thinking and orienteering versus wild browsing when they are looking for ready-made translations or language usage samples, but also entails becoming confident with digital tools and activities. The paper will explain the reasons, limits and resources of the pedagogical approach adopted to engage EFL teachers with the use of corpora in their didactics through the promotion of digital practices.

Keywords: digital didactics, education, language learning, teacher training

Procedia PDF Downloads 135
372 Jurisdictional Federalism and Formal Federalism: Levels of Political Centralization on American and Brazilian Models

Authors: Henrique Rangel, Alexandre Fadel, Igor De Lazari, Bianca Neri, Carlos Bolonha

Abstract:

This paper offers a comparative analysis of the American and Brazilian models of federalism, taking their levels of political centralization as the main criterion. The central problem faced herein is the Brazilian tendency toward a unitary regime. Despite the hegemony of the federative form after 1989, Brazil has a historical pattern of political centralization that remains under the 1988 constitutional regime. Meanwhile, the United States framed a federalism in which the states retain significant authority. The hypothesis holds that the number of alternative criteria of federalization – which can generate political centralization – and the way they are upheld on judicial review are crucial to understanding the levels of political centralization achieved in each model. To test this hypothesis, the research follows a methodology temporally delimited to the 1994-2014 period. Three paradigmatic precedents of the U.S. Supreme Court were selected: United States v. Morrison (2000), on gender-motivated violence; Gonzales v. Raich (2005), on the medical use of marijuana; and United States v. Lopez (1995), on firearm possession in school zones. These cases, among the most relevant on federalism in the Supreme Court's recent activity, indicate a determinant parameter of deliberation: the Commerce Clause. After observing the criterion used to permit or prohibit political centralization in America, the Brazilian normative context is presented. In this sense, it is possible to identify the legal treatment these controversies would likely receive in Brazil. The decision-making reveals deliberative parameters that characterize each federative model. In the end, the precedents of the Rehnquist Court promote a broad revival of the federalism debate, establishing the Commerce Clause as a secure criterion for upholding or rejecting the necessity of centralization – even in decisions considered conservative. By contrast, Brazilian federalism resolves these controversies in a formalist fashion, through numerous and comprehensive – sometimes casuistic – normative devices oriented toward intense centralization. The aim of this work is to indicate how the jurisdictional federalism found in the United States can preserve a consistent model with robustly autonomous states, while Brazil gives preference to normative mechanisms that start from centralization.

Keywords: constitutional design, federalism, U.S. Supreme Court, legislative authority

Procedia PDF Downloads 499
371 Exploring the Success of Live Streaming Commerce in China: A Literature Analysis

Authors: Ming Gao, Matthew Tingchi Liu, Hoi Ngan Loi

Abstract:

Live streaming refers to video content generated by broadcasters and shared with viewers in real time by uploading it to short-video platforms. In recent years, individual key opinion leader (KOL) broadcasters have successfully made use of live streams to sell large amounts of goods to consumers. For example, Wei Ya, the number one broadcaster on Taobao Live, sold products worth RMB 2.7 billion (USD 0.38 billion) in 2018. Regarding the success of live streaming commerce (LSC) in China, this study explores the elements of the booming LSC industry and attempts to explain the reasons behind its prosperity. A systematic review of industry reports and academic papers was conducted to summarize the latest findings in this field. The results of this investigation showed that a live streaming ecosystem has been established by the LSC players, namely, the platform, the broadcaster, the product supplier, and the viewer. In this ecosystem, all players have complementary advantages and needs, and their close cooperation leads to a win-win situation. For instance, platforms and broadcasters have abundant internet traffic, which needs to be monetized, while product suppliers have mature supply chains and a need to promote their products. In addition, viewers are attached to the LSC platforms to get product information, bargains, and entertainment. This study highlights the importance of the mass-personal hybrid communication nature of live streaming, because its interpersonal communication feature increases consumers’ positive experiences, while its mass media broadcasting feature facilitates product promotion. Another innovative point of this study lies in its inclusion of a special characteristic of Chinese Internet culture - entertainment. The entertaining genres of the live streams created by broadcasters serve as down-to-earth approaches to reach their audiences easily. Further, the nature of video, i.e., a dynamic and salient stimulus, is emphasized in this study. Since video is more engaging, it can attract viewers in a quick and easy way. Meanwhile, the abundant, interesting, high-quality, and free short videos have added “stickiness” to platforms by retaining users and prolonging their time spent on the platforms. In addition, broadcasters’ important characteristics, such as physical attractiveness, humor, sex appeal, kindness, communication skills, and interactivity, are also identified as important factors that influence consumers’ engagement and purchase intention. In conclusion, all players have their own proper places in this live streaming ecosystem, in which they work seamlessly to give full play to their respective advantages, with each player taking what it needs and offering what it has. This has contributed to the success of live streaming commerce in China.

Keywords: broadcasters, communication, entertainment, live streaming commerce, viewers

Procedia PDF Downloads 104
370 A Method and System for Secure Authentication Using One Time QR Code

Authors: Divyans Mahansaria

Abstract:

User authentication is an important security measure for protecting confidential data and systems. However, the vulnerabilities involved in authenticating into a system have significantly increased. Thus, necessary mechanisms must be deployed during the process of authenticating a user to safeguard him/her from such attacks. The proposed solution implements a novel authentication mechanism to counter various forms of security breach attacks, including phishing, Trojan horse, replay, key logging, Asterisk logging, shoulder surfing, brute force search and others. A QR code (Quick Response Code) is a type of matrix barcode or two-dimensional barcode that can be used for storing URLs, text, images and other information. In the proposed solution, for each new authentication request, a QR code is dynamically generated and presented to the user. A piece of generic information is mapped to a plurality of elements and stored within the QR code. The mapping of the generic information to the plurality of elements is randomized at each new login, and thus the QR code generated for each new authentication request is for one-time use only. In order to authenticate into the system, the user needs to decode the QR code using any QR code decoding software. The QR code decoding software needs to be installed on a handheld mobile device such as a smartphone or personal digital assistant (PDA). On decoding the QR code, the user is presented with a mapping between the generic piece of information and the plurality of elements, from which the user derives cipher secret information corresponding to his/her actual password. Then, in place of the actual password, the user uses this cipher secret information to authenticate into the system. The authentication terminal receives the cipher secret information and uses a validation engine that deciphers it. If the entered secret information is correct, the user is provided access to the system. A usability study has been carried out on the proposed solution, and the new authentication mechanism was found to be easy to learn and adapt to. A mathematical analysis of the time taken to carry out a brute force attack on the proposed solution has also been performed; the result showed that the solution is almost completely resistant to brute force attack. Today’s standard methods for authentication are subject to a wide variety of software, hardware, and human attacks. The proposed scheme can be very useful in controlling the various types of authentication-related attacks, especially in a networked computer environment where the use of a username and password for authentication is common.
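
The sketch below illustrates the one-time-mapping idea in simplified form; it is not the paper's exact scheme. The server draws a fresh random substitution over the password alphabet for each login, ships it in the QR payload (a library such as qrcode could render the payload as an image on the client), and validates the transformed password. A real deployment would not keep plaintext passwords; the code only illustrates the mapping and validation steps.

```python
# Simplified sketch of the one-time mapping idea described above (not the exact scheme):
# for each login the server draws a fresh random substitution over the password alphabet,
# ships it to the user inside the QR payload, and expects the password transformed by
# that substitution ("cipher secret information") instead of the password itself.
import json
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def new_challenge():
    """Server side: build a one-time substitution table and the QR payload carrying it."""
    shuffled = list(ALPHABET)
    for i in range(len(shuffled) - 1, 0, -1):          # Fisher-Yates with a CSPRNG
        j = secrets.randbelow(i + 1)
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    mapping = dict(zip(ALPHABET, shuffled))
    qr_payload = json.dumps(mapping)                   # could be rendered as a QR image client-side
    return mapping, qr_payload

def derive_cipher(password, mapping):
    """User side (after decoding the QR): transform the real password with the mapping."""
    return "".join(mapping[ch] for ch in password)

def validate(submitted_cipher, stored_password, mapping):
    """Server side: apply the same one-time mapping to the stored password and compare."""
    expected = derive_cipher(stored_password, mapping)
    return secrets.compare_digest(submitted_cipher, expected)

mapping, payload = new_challenge()
cipher = derive_cipher("N0tMyRealPw", mapping)         # what the user submits instead of the password
print(validate(cipher, "N0tMyRealPw", mapping))        # True; the mapping is then discarded
```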

Keywords: authentication, QR code, cipher / decipher text, one time password, secret information

Procedia PDF Downloads 251
369 Furniko Flour: An Emblematic Traditional Food of Greek Pontic Cuisine

Authors: A. Keramaris, T. Sawidis, E. Kasapidou, P. Mitlianga

Abstract:

Although the gastronomy of the Greeks of Pontus is highly prominent, it has not received the same level of scientific analysis as another local cuisine of Greece, that of Crete. As a result, we intended to focus our research on Greek Pontic cuisine to shed light on its unique recipes, food products, and, ultimately, its features. The Greeks of Pontus, who lived for a long time in the northern part (Black Sea Region) of contemporary Turkey and now widely inhabit northern Greece, have one of Greece's most distinguished local cuisines. Despite their gastronomy being simple, it features several inspiring delicacies. It has been a century since they immigrated to Greece, yet their gastronomic culture remains a critical component of their collective identity. As a first step toward comprehending Greek Pontic cuisine, we investigated the production of one of its most renowned traditional products, furniko flour. In this project, we targeted residents of Western Macedonia, a province in northern Greece with a large population of descendants of the Greeks of Pontus who are primarily engaged in agricultural activities. In this quest, we approached a descendant of the Greeks of Pontus who is involved in the production of furniko flour and who consented to show us the entire process of its production as we participated in it. Furniko flour is made from non-hybrid heirloom corn. It is harvested by hand when the moisture content of the seeds is low enough to make them suitable for roasting. Manual harvesting entails removing the cob from the plant and detaching the husks. The harvested cobs are then roasted for 24 hours in a traditional wood oven. The roasted cobs are then collected and stored in sacks. The next step is to extract the seeds, which is accomplished by rubbing the cobs. The seeds should ideally be ground in a traditional stone hand mill. We end up with an aromatic, dark golden furniko flour, which is used to cook havitz. Alongside the preparation of the furniko flour, we also recorded the cooking process of havitz (a porridge-like cornflour dish), a savory delicacy that is simple to prepare and one of the most delightful dishes in Greek Pontic cuisine. According to the research participant, havitz is a highly nutritious dish due to the ingredients of furniko flour. In addition, he argues that preparing havitz is a great way to bring families together, share stories, and revisit fond memories. In conclusion, this study illustrates the traditional preparation of furniko flour and its use in various traditional recipes as an initial effort to highlight the elements of Pontic Greek cuisine. A continuation of the current study could be the analysis of the chemical components of furniko flour to evaluate its nutritional content.

Keywords: furniko flour, greek pontic cuisine, havitz, traditional foods

Procedia PDF Downloads 117
368 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection systems such as the AFCI (arc fault circuit interrupter) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V-50 Hz home network. The method is validated through simulations using MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In the first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created by using signature coefficients obtained from Kirchhoff's equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the Fast Fourier Transform of the currents and voltages as well as the fault distance value. These parameters are then substituted into the Kirchhoff equations. In the third step, the same procedure used to calculate the signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step, the fault distance is unknown. The iterative calculation from the Kirchhoff equations over stepped variations of the fault distance yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing the simulated currents with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load. Also, 11 different arc fault positions are considered for the map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of this work will be presented.
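
The snippet below sketches only the final localization step described above: given the fault-map trace from the measured currents (step 2) and the curve produced by sweeping hypothetical fault distances (step 3), the fault distance is estimated at the intersection of the two nearly linear curves. The coefficient values used here are synthetic placeholders, not outputs of the actual simulation.

```python
# Sketch of the final localization step only (steps 2 and 3 above are not reproduced):
# given the fault-map trace built from the measured currents and the curve obtained by
# sweeping hypothetical fault distances, the estimated distance is where the two nearly
# linear curves intersect. The coefficient values below are synthetic placeholders.
import numpy as np

distances = np.linspace(0.0, 49.0, 50)                 # hypothetical fault positions [m]

# Placeholder "signature coefficient" curves with a linear trend, as described above.
curve_measured = 0.80 * distances + 3.0                # step 2: from recorded currents/voltages
curve_hypothetical = 1.10 * distances - 2.4            # step 3: from the stepped distance sweep

# Intersection of the two linear trends via a least-squares fit of each curve.
a1, b1 = np.polyfit(distances, curve_measured, 1)
a2, b2 = np.polyfit(distances, curve_hypothetical, 1)
estimated_distance = (b2 - b1) / (a1 - a2)
print(f"estimated series arc fault location: {estimated_distance:.1f} m")
```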

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 117
367 Modification of Carbon-Based Gas Sensors for Boosting Selectivity

Authors: D. Zhao, Y. Wang, G. Chen

Abstract:

Gas sensors that utilize carbonaceous materials as sensing media offer numerous advantages, making them the preferred choice for constructing chemical sensors over those using other sensing materials. Carbonaceous materials, particularly nano-sized ones like carbon nanotubes (CNTs), provide these sensors with high sensitivity. Additionally, carbon-based sensors possess other advantageous properties that enhance their performance, including high stability, low power consumption for operation, and cost-effectiveness in their construction. These properties make carbon-based sensors ideal for a wide range of applications, especially in miniaturized devices created through MEMS or NEMS technologies. To capitalize on these properties, a group of chemoresistance-type carbon-based gas sensors was developed and tested against various volatile organic compounds (VOCs) and volatile inorganic compounds (VICs). The results demonstrated exceptional sensitivity to both VOCs and VICs, along with long-term stability of the sensors. However, this broad sensitivity also led to poor selectivity towards specific gases. This project aims at addressing the selectivity issue by modifying the carbon-based sensing materials and enhancing the sensor's specificity to individual gases. Multiple groups of sensors were manufactured and modified using proprietary techniques. To assess their performance, we conducted experiments on representative sensors from each group to detect a range of VOCs and VICs. The VOCs tested included acetone, dimethyl ether, ethanol, formaldehyde, methane, and propane. The VICs comprised carbon monoxide (CO), carbon dioxide (CO₂), hydrogen (H₂), nitric oxide (NO), and nitrogen dioxide (NO₂). The concentrations of the sample gases were all set at 50 parts per million (ppm). Nitrogen (N₂) was used as the carrier gas throughout the experiments. The results of the gas sensing experiments are as follows. In Group 1, the sensors exhibited selectivity toward CO₂, acetone, NO, and NO₂, with NO₂ showing the highest response. Group 2 primarily responded to NO₂. Group 3 displayed responses to nitrogen oxides, i.e., both NO and NO₂, with NO₂ slightly surpassing NO in sensitivity. Group 4 demonstrated the highest sensitivity among all the groups toward NO and NO₂, with NO₂ being more sensitive than NO. In conclusion, by incorporating several modifications using carbon nanotubes (CNTs), sensors can be designed to respond well to NOx gases with great selectivity and without interference from other gases. Because the response levels to NO and NO₂ differ from group to group, the individual concentrations of NO and NO₂ can be deduced.
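
A small worked example of the closing remark, using invented calibration numbers: if two sensor groups respond to NO and NO₂ with different known sensitivities, the two concentrations follow from the pair of measured responses by solving a 2×2 linear system.

```python
# Worked example of the closing remark: if two sensor groups respond to NO and NO2 with
# different (known, calibrated) sensitivities, the two concentrations can be recovered
# from the pair of measured responses by solving a 2x2 linear system. The sensitivity
# matrix and readings below are invented for illustration.
import numpy as np

# Rows: sensor group 3, sensor group 4. Columns: response per ppm of NO, NO2.
sensitivity = np.array([[0.6, 0.8],
                        [1.1, 1.9]])

measured_response = np.array([52.0, 117.0])            # hypothetical readings from the two groups

concentrations = np.linalg.solve(sensitivity, measured_response)
print(f"estimated NO:  {concentrations[0]:.1f} ppm")
print(f"estimated NO2: {concentrations[1]:.1f} ppm")
```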

Keywords: gas sensors, carbon, CNT, MEMS/NEMS, VOC, VIC, high selectivity, modification of sensing materials

Procedia PDF Downloads 101
366 Coupling Strategy for Multi-Scale Simulations in Micro-Channels

Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier

Abstract:

With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. In the case where the flow characteristic length scale is narrowed to around ten times the mean free path of gas molecules, the classical fluid mechanics and energy equations are still valid in the bulk flow, but particular attention must be paid to the gas/solid interface boundary conditions. Indeed, in the vicinity of the wall, over a thickness of about one mean free path of the molecules, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on velocity, temperature, and heat flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to get rid of these assumptions, simulations at the molecular scale are carried out to study how the interaction of molecules with walls can change the fluid flow and heat transfer in the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between the molecular and the continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfer in the micro-domains, while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in the bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid; ii) close to the walls, to identify the relationships between the slip velocity and the shear stress or between the temperature jump and the normal temperature gradient. The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Indeed, using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws giving the behavior of the physical properties, the equation of state and the slip relationships, as well as their uncertainties. The latter allow a learning strategy to be set up to optimize the number of micro-simulations. In the present contribution, the first results regarding this coupling associated with the learning strategy are illustrated through parametric studies of convergence criteria, choice of basis functions and noise of input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
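
The sketch below illustrates the learning ingredient described above under stated assumptions: a Gaussian-process (Bayesian) regression is fitted to a handful of synthetic "molecular-dynamics" samples of slip velocity versus wall shear stress, returning a continuous slip law together with an uncertainty that can indicate where further micro-simulations would be most useful. The data and kernel choice are placeholders, not the study's actual setup.

```python
# Sketch of the learning ingredient described above: a Bayesian (Gaussian-process)
# regression is fitted to micro-domain samples of slip velocity versus wall shear
# stress, giving a continuous slip law plus an uncertainty that can be used to decide
# where additional molecular-dynamics runs are worthwhile. The "MD samples" here are
# synthetic placeholders, not actual molecular-dynamics results.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(1)
shear_stress = rng.uniform(0.1, 2.0, size=12).reshape(-1, 1)       # placeholder MD inputs
slip_velocity = 0.35 * shear_stress.ravel() + rng.normal(scale=0.02, size=12)  # noisy outputs

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(shear_stress, slip_velocity)

# Continuous law fed back to the Navier-Stokes solver, with pointwise uncertainty.
query = np.linspace(0.1, 2.0, 5).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)
for tau, m, s in zip(query.ravel(), mean, std):
    print(f"shear stress {tau:.2f} -> slip velocity {m:.3f} +/- {s:.3f}")

# The largest std values indicate where a new micro-simulation would most reduce uncertainty.
```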

Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling

Procedia PDF Downloads 152
365 Ultra-Wideband Antennas for Ultra-Wideband Communication and Sensing Systems

Authors: Meng Miao, Jeongwoo Han, Cam Nguyen

Abstract:

Ultra-wideband (UWB) time-domain impulse communication and radar systems use ultra-short duration pulses in the sub-nanosecond regime, instead of continuous sinusoidal waves, to transmit information. The pulse directly generates a very wide-band instantaneous signal with various duty cycles depending on specific usages. In UWB systems, the total transmitted power is spread over an extremely wide range of frequencies; the power spectral density is extremely low. This effectively results in extremely small interference to other radio signals while maintaining excellent immunity to interference from these signals. UWB devices can therefore work within frequencies already allocated for other radio services, thus helping to maximize this dwindling resource. Therefore, the impulse UWB technique is attractive for realizing high-data-rate, short-range communications, ground penetrating radar (GPR), and military radar with relatively low emission power levels. UWB antennas are the key element dictating the transmitted and received pulse shape and amplitude in both the time and frequency domains. They should have a good impulse response with minimal distortion. To facilitate integration with transmitters and receivers employing microwave integrated circuits, UWB antennas enabling direct integration are preferred. We present the development of two UWB antennas operating from 3.1 to 10.6 GHz and 0.3 to 6 GHz for UWB systems that provide direct integration with microwave integrated circuits. The operation of these antennas is based on the principle of wave propagation on a non-uniform transmission line. Time-domain EM simulation is conducted to optimize the antenna structures to minimize reflections occurring at the open-end transition. Calculated and measured results for these UWB antennas are presented in both the frequency and time domains. The antennas have good time-domain responses. They can transmit and receive pulses effectively with minimum distortion, little ringing, and small reflection, clearly demonstrating the signal fidelity of the antennas in reproducing the waveform of UWB signals, which is critical for UWB sensors and communication systems. Good performance together with seamless microwave integrated-circuit integration makes these antennas good candidates not only for UWB applications but also for integration with printed-circuit UWB transmitters and receivers.

Keywords: antennas, ultra-wideband, UWB, UWB communication systems, UWB radar systems

Procedia PDF Downloads 218
364 TiO₂ Nanotube Array Based Selective Vapor Sensors for Breath Analysis

Authors: Arnab Hazra

Abstract:

Breath analysis is a quick, noninvasive and inexpensive technique for disease diagnosis that can be used on people of all ages without any risk. Only a limited number of volatile organic compounds (VOCs) can be associated with the occurrence of specific diseases. These VOCs can be considered as disease markers or breath markers. Selective detection of a breath marker at a specific concentration in exhaled human breath is required to detect a particular disease. For example, acetone (C₃H₆O), ethanol (C₂H₅OH), and ethane (C₂H₆) are breath markers, and abnormal concentrations of these VOCs in exhaled human breath indicate diseases such as diabetes mellitus, renal failure, and breast cancer, respectively. Nanomaterial-based vapor sensors are inexpensive, small, and potential candidates for the detection of breath markers. In practical measurements, selectivity is the most crucial issue, as a breath marker must be identified accurately at trace levels in the presence of several interfering vapors and gases. The current article concerns a novel technique for the selective, low-ppb-level detection of breath markers at very low temperature based on TiO₂ nanotube array vapor sensor devices. A highly ordered and oriented TiO₂ nanotube array was synthesized by electrochemical anodization of high-purity titanium (Ti) foil. An electrolyte of 0.5 wt% NH₄F and 10 vol% H₂O in ethylene glycol was used, and anodization was carried out for 90 min at a DC potential of 40 V. An Au/TiO₂ nanotube/Ti sandwich-type sensor device was fabricated for the selective detection of VOCs in the low concentration range. Initially, the sensor was characterized by recording its resistive and capacitive changes within the valid concentration range for individual breath markers (or organic vapors). The sensor resistance decreased and the sensor capacitance increased with increasing vapor concentration. The ratio of the resistive slope (mR) to the capacitive slope (mC) then provides a concentration-independent constant term (M) for a particular vapor. For the detection of an unknown vapor, the ratio of the resistive change to the capacitive change at any concentration equals the previously calculated constant term (M). After successful identification of the target vapor, its concentration is calculated from the straight-line behavior of resistance as a function of concentration. The current technique is suitable for the detection of a particular vapor within a mixture of interfering vapors.
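
A numerical sketch of the read-out logic described above is given below, with invented calibration values: each calibrated vapor contributes a constant M = mR/mC, an unknown vapor is identified by the closest M, and its concentration is then recovered from the resistive calibration line.

```python
# Numerical sketch of the read-out logic described above (calibration values invented):
# for each calibrated vapor, the resistive slope mR and capacitive slope mC give a
# concentration-independent constant M = mR / mC. An unknown vapor is identified by the
# closest M, and its concentration is then recovered from the resistive calibration line.

# Hypothetical calibration: mR in ohm/ppm (negative slope), mC in pF/ppm.
calibration = {
    "acetone": {"mR": -12.0, "mC": 0.40},
    "ethanol": {"mR": -7.5,  "mC": 0.50},
    "ethane":  {"mR": -3.0,  "mC": 0.60},
}
for props in calibration.values():
    props["M"] = props["mR"] / props["mC"]

def identify_and_quantify(delta_r, delta_c):
    """Identify the vapor from the ratio dR/dC, then get the concentration from dR/mR."""
    measured_ratio = delta_r / delta_c
    vapor = min(calibration, key=lambda v: abs(calibration[v]["M"] - measured_ratio))
    concentration = delta_r / calibration[vapor]["mR"]
    return vapor, concentration

# Example: a response consistent with ~20 ppm of ethanol under the calibration above.
vapor, ppm = identify_and_quantify(delta_r=-150.0, delta_c=10.0)
print(f"identified vapor: {vapor}, estimated concentration: {ppm:.1f} ppm")
```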

Keywords: breath marker, vapor sensors, selective detection, TiO₂ nanotube array

Procedia PDF Downloads 138
363 Establishing the Legality of Terraforming under the Outer Space Treaty

Authors: Bholenath

Abstract:

Ever since Elon Musk revealed his plan to terraform Mars on national television in 2015, the debate regarding the legality of such an activity under the current Outer Space Treaty regime has been gaining momentum. Terraforming means altering or transforming the atmosphere of another planet to give it the characteristics of landscapes on Earth. Musk's plan is to alter the entire environment of Mars so as to make it habitable for humans. He has long been an advocate of colonizing Mars, and in order to make humans an interplanetary species, he wants to detonate thermonuclear devices over the poles of Mars. To a layperson this seems a fascinating endeavor, but for space lawyers it poses new and fascinating legal questions. Some of the questions which arise are: whether the use of nuclear weapons on celestial bodies is permitted under the Outer Space Treaty; whether such an alteration of the celestial environment would fall within the scope of the term 'harmful contamination' under Article IX of the treaty; whether an activity that would put an entire planet under the control of a private company can be permitted under the treaty; whether terraforming Mars would amount to its appropriation; and whether such an activity would be in the 'benefit and interests of all countries'. This paper attempts to examine and elucidate these legal questions. Space is one domain where the law should precede man. The paper follows the approach that the de lege lata is not capable of prohibiting the terraforming of Mars. The Outer Space Treaty provides the freedoms of space and prescribes certain restrictions on those freedoms as well. The author examines provisions such as Articles I, II, IV, and IX of the Outer Space Treaty in order to establish the legality of the terraforming activity. The author establishes how such activity constitutes peaceful use of the celestial body, is in the benefit and interests of all countries, and qualifies neither as national appropriation of the celestial body nor as its harmful contamination. The paper is divided into three chapters. The first chapter provides a general introduction to the problem, an analysis of Elon Musk's plan to terraform Mars, and the need to study terraforming through the lens of the Outer Space Treaty. In the second chapter, the author attempts to establish the legality of the terraforming activity under the provisions of the Outer Space Treaty. In this vein, the author puts forth the counter-interpretations and arguments which may be formulated against the lawfulness of terraforming, shows why the counter-interpretations establishing the unlawfulness of terraforming should not be accepted, and in doing so provides the interpretations that should prevail and ultimately establish the legality of the terraforming activity under the treaty. In the third chapter, the author draws relevant conclusions and gives suggestions.

Keywords: appropriation, harmful contamination, peaceful, terraforming

Procedia PDF Downloads 131
362 Modeling of an Insulin Micropump

Authors: Ahmed Slami, Med El Amine Brixi Nigassa, Nassima Labdelli, Sofiane Soulimane, Arnaud Pothier

Abstract:

Many people suffer from diabetes, a disease marked by abnormal levels of sugar in the blood; 285 million people, 6.6% of the world adult population, had diabetes in 2010 according to the International Diabetes Federation. Insulin is administered by injection into the body. Generally, the patient performs the injection manually; however, in many cases he or she will be unable to inject the drug, given that weakness of the whole body is among the side effects of hyperglycemia. Researchers have therefore designed medical devices that inject insulin autonomously by using micro-pumps. Many micro-pump concepts have been investigated during the last two decades for injecting molecules into the blood or the body. However, all these micro-pumps are intended for the slow infusion of a drug (a few microliters per minute). The challenge now is to develop micro-pumps for fast injections (1 microliter in 10 seconds) with an accuracy on the order of a microliter. Recent studies have shown that only piezoelectric actuators can achieve this performance, and few systems at this scale have been reported. These reasons led us to design new smart drug-injection microsystems. Many technological advances are still needed, from the improvement of materials for their intended uses through their characterization and the modeling of the actuation mechanisms themselves. Moreover, the integration of the piezoelectric micro-pump into the microfluidic platform remains to be studied in order to explore and evaluate the performance of these new micro-devices. In this work, we propose a new micro-pump model based on piezoelectric actuation with a new design, using a finite element model built in COMSOL software. Our device is composed of two pumping chambers, two diaphragms and two actuators (piezoelectric disks). The actuators apply a mechanical force to the membranes in a periodic manner; the membrane deformation drives the fluid pumping, i.e., the suction and discharge of the liquid. In this study, we present the modeling results as a function of device geometry, film thickness and material properties, and demonstrate that fast injection can be achieved. The results of these simulations provide quantitative performance figures for our micro-pumps concerning the spatial actuation and flow rate, and allow optimization of the fabrication process in terms of materials and integration steps.
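
As a rough plausibility check on the fast-injection target (1 microliter in 10 seconds), the sketch below estimates the flow rate of a two-chamber diaphragm pump from the stroke volume of a clamped circular membrane and the drive frequency. It is a back-of-the-envelope calculation with assumed geometry, deflection and efficiency, not output from the COMSOL model.

```python
# Minimal back-of-the-envelope sketch (not from the COMSOL model): estimate
# the flow rate of a diaphragm micro-pump from the stroke volume of a clamped
# circular membrane and the actuation frequency, and compare it with the
# 1 uL / 10 s target. Geometry, deflection and efficiency are assumed values.
import math

def stroke_volume(radius_m, center_deflection_m):
    """Swept volume of a clamped circular plate, w(r) = w0*(1-(r/a)^2)^2."""
    return math.pi * radius_m**2 * center_deflection_m / 3.0

RADIUS = 2.5e-3        # 2.5 mm chamber radius (assumed)
DEFLECTION = 3e-6      # 3 um peak membrane deflection (assumed)
FREQ = 20.0            # 20 Hz piezo drive (assumed)
EFFICIENCY = 0.6       # valve/backflow losses (assumed)
N_CHAMBERS = 2         # the modelled device has two pumping chambers

v_stroke = stroke_volume(RADIUS, DEFLECTION)          # m^3 per stroke
q = N_CHAMBERS * v_stroke * FREQ * EFFICIENCY         # m^3 / s
q_ul_per_s = q * 1e9                                  # 1 m^3 = 1e9 uL

print(f"stroke volume = {v_stroke * 1e12:.1f} nL")
print(f"flow rate     = {q_ul_per_s:.3f} uL/s  (target: 0.1 uL/s)")
```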

Keywords: COMSOL software, piezoelectric, micro-pump, microfluidic

Procedia PDF Downloads 328
361 Finite Element Modelling of Mechanical Connector in Steel Helical Piles

Authors: Ramon Omar Rosales-Espinoza

Abstract:

Pile-to-pile mechanical connections are used if the depth of the soil layers with sufficient bearing strength exceeds the original (“leading”) pile length, with the additional pile segment being termed “extension” pile. Mechanical connectors permit a safe transmission of forces from leading to extension pile while meeting strength and serviceability requirements. Common types of connectors consist of an assembly of sleeve-type external couplers, bolts, pins, and other mechanical interlock devices that ensure the transmission of compressive, tensile, torsional and bending stresses between leading and extension pile segments. While welded connections allow for a relatively simple structural design, mechanical connections are advantageous over welded connections because they lead to shorter installation times and significant cost reductions since specialized workmanship and inspection activities are not required. However, common practices followed to design mechanical connectors neglect important aspects of the assembly response, such as stress concentration around pin/bolt holes, torsional stresses from the installation process, and interaction between the forces at the installation (torsion), service (compression/tension-bending), and removal stages (torsion). This translates into potentially unsatisfactory designs in terms of the ultimate and service limit states, exhibiting either reduced strength or excessive deformations. In this study, the experimental response under compressive forces of a type of mechanical connector is presented, in terms of strength, deformation and failure modes. The tests revealed that the type of connector used can safely transmit forces from pile to pile. Using the results from the compressive tests, an analysis model was developed using the finite element (FE) method to study the interaction of forces under installation and service stages of a typical mechanical connector. The response of the analysis model is used to identify potential areas for design optimization, including size, gap between leading and extension piles, number of pin/bolts, hole sizes, and material properties. The results show the design of mechanical connectors should take into account the interaction of forces present at every stage of their life cycle, and that the torsional stresses occurring during installation are critical for the safety of the assembly.
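
As a simple illustration of why the installation and service stages interact, the sketch below resolves an assumed installation torque and service compression into shear on the pin group and bearing at the holes, with a stress-concentration factor applied at the hole. All loads, dimensions and the Kt value are hypothetical; this is a hand check, not the finite element model used in the study.

```python
# Minimal sketch (illustrative only, not the paper's FE model): a hand check
# of the combined demand on a pinned sleeve connector, resolving installation
# torque and service compression into shear on the pins and bearing at the
# holes, with a stress-concentration factor at the hole. All geometry,
# loads and Kt are assumed values.
import math

TORQUE = 15e3          # installation torque, N*m (assumed)
AXIAL = 200e3          # service compression, N (assumed)
N_PINS = 4             # pins per connection (assumed)
PIN_D = 0.025          # pin diameter, m (assumed)
SLEEVE_R = 0.09        # radius from pile axis to pin centres, m (assumed)
WALL_T = 0.012         # sleeve wall thickness, m (assumed)
KT = 2.5               # stress-concentration factor at the hole (assumed)

pin_area = math.pi * PIN_D**2 / 4.0

# Torsion is transferred as tangential forces on the pin group;
# compression is shared equally among the pins (perpendicular direction).
shear_from_torque = TORQUE / (N_PINS * SLEEVE_R)
shear_from_axial = AXIAL / N_PINS
pin_shear_stress = math.hypot(shear_from_torque, shear_from_axial) / pin_area

# Bearing at the hole, amplified by the concentration factor.
bearing_stress = KT * (shear_from_axial / (PIN_D * WALL_T))

print(f"pin shear stress    = {pin_shear_stress / 1e6:.0f} MPa")
print(f"hole bearing stress = {bearing_stress / 1e6:.0f} MPa")
```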

Keywords: piles, FEA, steel, mechanical connector

Procedia PDF Downloads 245
360 Preparation of IPNs and Effect of Swift Heavy Ion Irradiation on Their Physico-Chemical Properties

Authors: B. S Kaith, K. Sharma, V. Kumar, S. Kalia

Abstract:

Superabsorbents are three-dimensional networks of linear or branched polymeric chains which can take up large volumes of biological fluids. This ability is due to the presence of functional groups such as -NH₂, -COOH and -OH. Such cross-linked products based on natural materials, such as cellulose, starch, dextran, gum and chitosan, have attracted the attention of scientists and technologists all over the world because of their easy availability, low production cost, non-toxicity and biodegradability. Since natural polymers have better biocompatibility and lower toxicity than most synthetic ones, such materials can be applied in the preparation of controlled drug delivery devices, biosensors, tissue engineering, contact lenses, soil conditioning, and the removal of heavy metal ions and dyes. Gums are natural potential antioxidants and are used as food additives; they have excellent properties such as high solubility, pH stability, non-toxicity and gelling characteristics. To date, many methods have been applied for the synthesis and modification of cross-linked materials with improved properties suitable for different applications. It is well known that ion-beam irradiation can play a crucial role in synthesizing, modifying, cross-linking or degrading polymeric materials. Irradiation of a polymer film with high-energy heavy ions induces significant changes such as chain scission, cross-linking, structural changes, amorphization and degradation in the bulk. Various researchers have reported the effects of low- and heavy-ion irradiation on the properties of polymeric materials and observed significant improvements in optical, electrical, chemical, thermal and dielectric properties. Moreover, the modifications induced in the materials mainly depend on the structure, the ion-beam parameters such as energy, linear energy transfer, fluence, mass and charge, and the nature of the target material. Ion-beam irradiation is a useful technique for improving the surface properties of biodegradable polymers without compromising the bulk properties. Therefore, considerable interest has grown in studying the effects of SHI irradiation on the properties of synthesized semi-IPNs and IPNs. The present work deals with the preparation of semi-IPNs and IPNs and the impact of irradiation with swift heavy ions (SHIs) such as O⁷⁺ and Ni⁹⁺ on their optical, chemical, structural, morphological and thermal properties, along with the impact on different applications. The results are discussed on the basis of the linear energy transfer (LET) of the ions.
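
For orientation, the relationship between LET, fluence and the energy actually deposited in the polymer can be expressed with the standard thin-target dose conversion sketched below; the LET, fluence and density values are illustrative assumptions rather than parameters from this work.

```python
# Minimal sketch (a standard conversion, not taken from the paper): absorbed
# dose delivered to a polymer film by a swift-heavy-ion beam, from the
# electronic LET, the fluence and the target density:
#   D [Gy] = 1.602e-9 * LET [keV/um] * fluence [ions/cm^2] / density [g/cm^3]
# The LET, fluence and density values below are illustrative assumptions.
def shi_dose_gray(let_kev_per_um, fluence_per_cm2, density_g_per_cm3):
    """Absorbed dose (Gy) deposited by an ion beam in a thin target."""
    return 1.602e-9 * let_kev_per_um * fluence_per_cm2 / density_g_per_cm3

# Hypothetical example: ions with LET ~ 1.5 MeV/um in a polymer of
# density ~ 1.2 g/cm^3 at a fluence of 1e11 ions/cm^2.
dose = shi_dose_gray(let_kev_per_um=1500.0,
                     fluence_per_cm2=1e11,
                     density_g_per_cm3=1.2)
print(f"absorbed dose ~ {dose / 1e3:.0f} kGy")
```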

Keywords: adsorbent, gel, IPNs, semi-IPNs

Procedia PDF Downloads 352
359 Assessment of the Change in Strength Properties of Biocomposites Based on PLA and PHA after 4 Years of Storage in a Highly Cooled Condition

Authors: Karolina Mazur, Stanislaw Kuciel

Abstract:

Polylactides (PLA) and polyhydroxyalkanoates (PHA) are the two groups of biodegradable and biocompatible thermoplastic polymers most commonly utilised in medicine and rehabilitation. The aim of this work is to determine the changes in the strength properties and microstructure taking place in biodegradable polymer composites during their long-term storage in a highly cooled environment (a freezer at -24ºC) and to make an initial assessment of the durability of such biocomposites when used as single-use elements of rehabilitation or medical equipment. It is difficult to find any information relating to the feasibility of long-term storage of technical products made of PLA or PHA; nonetheless, when these materials are used to make products such as casings of hair dryers, laptops or mobile phones, it is safe to assume that, without storage in optimal conditions, degradation may progress over a period of even several years. SEM imaging and assessment of the strength properties (tensile, bending and impact testing) were carried out, and the density and water sorption were determined for two polymers, PLA and PHA (NaturePlast PLE 001 and PHE 001), filled with cellulose fibres (corncob grain - Rehofix MK100, Rettenmaier & Sohne) up to 10 and 20% by mass. The biocomposites had been stored at -24ºC for 4 years. In order to identify the changes in the strength properties and microstructure taking place after such a long period of storage, the results have been compared with the results of the same tests carried out 4 years earlier. The results show a significant change in the fracture mode of the PHA composite with corncob grain - from ductile, with a developed fracture surface, when tensile testing was performed directly after injection, to a more brittle state after 4 years of storage; this is confirmed by the strength tests, where a decrease of deformation at the point of fracture is observed. The research showed that medical devices made of PLA or PHA can be stored for a reasonably long time, as long as the required storage temperature is maintained. The decrease in mechanical properties found for PLA during tensile and bending tests was less than 10% of the tensile strength, while the modulus of elasticity and the deformation at fracture rose slightly, which may indicate the beginning of degradation processes. The strength properties of PHA are even higher after 4 years of storage, although in that case the decrease in deformation at fracture is significant, reaching up to 40%, which suggests its degradation rate is higher than that of PLA. In both cases, the addition of natural particles only slightly increases the biodegradation.

Keywords: biocomposites, PLA, PHA, storage

Procedia PDF Downloads 245
358 Influence of High-Resolution Satellites Attitude Parameters on Image Quality

Authors: Walid Wahballah, Taher Bazan, Fawzy Eltohamy

Abstract:

One of the important functions of the satellite attitude control system is to provide the pointing accuracy and attitude stability required for optical remote sensing satellites to achieve good image quality. Although offering noise reduction and increased sensitivity, the time delay and integration (TDI) charge-coupled devices (CCDs) utilized in high-resolution satellites (HRS) are prone to introduce large amounts of pixel smear due to instability of the line of sight. During on-orbit imaging, as a result of the Earth's rotation and satellite platform instability, the moving direction of the TDI-CCD linear array and the imaging direction of the camera become different. The speed at which the image moves on the image plane (focal plane) represents the image motion velocity, whereas the angle between the two directions is known as the drift angle (β). The drift angle arises from the rotation of the Earth around its axis during satellite imaging, affecting the geometric accuracy and, consequently, causing image quality degradation. Therefore, the image motion velocity vector and the drift angle are two important factors used in the assessment of the image quality of TDI-CCD-based optical remote sensing satellites. A model for estimating the image motion velocity and the drift angle in HRS is derived. The six satellite attitude control parameters represented in the derived model are the roll angle φ, pitch angle θ, yaw angle ψ, roll angular velocity φ̇, pitch angular velocity θ̇ and yaw angular velocity ψ̇. The influence of these attitude parameters on the image quality is analyzed by establishing a relationship between the image motion velocity vector, the drift angle and the six satellite attitude parameters. The influence of the satellite attitude parameters on the image quality is assessed by the presented model in terms of the modulation transfer function (MTF) in both the cross- and along-track directions. Three different cases representing the effect of pointing accuracy (φ, θ, ψ) bias are considered using four different sets of typical pointing-accuracy values, while the satellite attitude stability parameters are ideal. In the same manner, the influence of satellite attitude stability (φ̇, θ̇, ψ̇) on image quality is also analysed for ideal pointing-accuracy parameters. The results reveal that cross-track image quality is seriously influenced by the yaw angle bias and the roll angular velocity bias, while along-track image quality is influenced only by the pitch angular velocity bias.
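
As a small numerical illustration of these quantities, the sketch below computes a drift angle from assumed along- and cross-track image motion components, the cross-track smear accumulated over one TDI integration period, and the resulting smear MTF at the Nyquist frequency. The velocity components, TDI stage count and line rate are made-up values, and the sinc-type smear MTF is a standard textbook model rather than the model derived in the paper.

```python
# Minimal sketch (illustrative only, not the authors' derivation): the drift
# angle from the cross- and along-track components of the image motion
# velocity, and the smear MTF at Nyquist for the cross-track smear over one
# TDI integration period. All numerical inputs are assumed values.
import math

V_ALONG = 7000.0       # along-track image motion, pixels/s (assumed)
V_CROSS = 35.0         # residual cross-track motion, pixels/s (assumed)
TDI_STAGES = 48        # TDI stages (assumed)
LINE_RATE = 7000.0     # line rate, lines/s (assumed)

# Drift angle between the array scan direction and the image motion direction.
beta = math.atan2(V_CROSS, V_ALONG)

# Cross-track smear accumulated during the TDI integration time, in pixels.
t_int = TDI_STAGES / LINE_RATE
smear_px = V_CROSS * t_int

def smear_mtf(d_pixels, f_cyc_per_px):
    """MTF of uniform linear motion over d pixels: |sin(pi*d*f)/(pi*d*f)|."""
    x = math.pi * d_pixels * f_cyc_per_px
    return 1.0 if x == 0 else abs(math.sin(x) / x)

print(f"drift angle beta  = {math.degrees(beta):.3f} deg")
print(f"cross-track smear = {smear_px:.2f} px")
print(f"MTF at Nyquist    = {smear_mtf(smear_px, 0.5):.3f}")
```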

Keywords: high-resolution satellites, pointing accuracy, attitude stability, TDI-CCD, smear, MTF

Procedia PDF Downloads 380
357 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 compared with 1.3 million a decade ago; overall, road traffic injuries are ranked as the eighth leading cause of death for all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice the rate reported for older drivers aged 25 and above, with a rate of 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk; it is typically used to measure acceleration, turning, braking and speed, as well as to provide locational information. With the Australian government creating the National Telematics Framework, there has been an increase in the government's focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes as measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to insert into their vehicles for the duration of the study. Each participant's driving behaviour over the course of the first month will be compared to their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated using a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning, after the first month, and after the second month of the study. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with the changes in the DBQ using regression models in SAS. Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average of 50.5, a standard deviation of 11.36, and a range of 29 to 76, with higher scores indicating worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that in the next six weeks a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: A significant relationship is expected between the improvements in the DBQ and the trends toward reduced telematics-measured aggressive driving behaviours, supporting the hypothesis.
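
For readers unfamiliar with the reliability figure quoted above, the sketch below shows how Cronbach's alpha can be computed for a matrix of DBQ item responses; the response matrix is made up for illustration and the calculation is not the study's SAS code.

```python
# Minimal sketch (illustrative, not the study's SAS analysis): Cronbach's
# alpha for a set of DBQ item responses, the reliability statistic reported
# in the abstract, computed from a small made-up response matrix
# (rows = respondents, columns = items scored 0-5).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical responses from 6 participants on 5 DBQ items.
responses = np.array([
    [1, 2, 1, 0, 2],
    [3, 4, 3, 2, 4],
    [0, 1, 0, 1, 1],
    [2, 2, 3, 2, 3],
    [4, 5, 4, 3, 5],
    [1, 1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```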

Keywords: telematics, driving behavior, young drivers, driving behaviour questionnaire

Procedia PDF Downloads 86
356 The Compliance of Safe-Work Behaviors among Undergraduate Nursing Students with Different Clinical Experiences

Authors: K. C. Wong, K. L. Siu, S. N. Ng, K. N. Yip, Y. Y. Yuen, K. W. Lee, K. W. Wong, C. C. Li, H. P. Lam

Abstract:

Background: Previous studies have found occupational injuries in the nursing profession to be related to repeated bedside nursing care, such as transferring, lifting and manually handling patients. Likewise, undergraduate nursing students are exposed to potential safety hazards because the nature of their work is similar to that of registered nurses. In particular, students who work as temporary undergraduate nursing students (TUNS) - a part-time clinical job in Hong Kong hospitals that mainly involves assisting with bedside care - appear to be at high risk of work-related injuries. Several studies have suggested that the level of compliance with safe work behaviors is highly associated with work-related injuries, yet this has been little studied among nursing students. This study was conducted to assess and compare compliance with safe work behaviors and the levels of awareness of different workplace safety issues between undergraduate nursing students with and without TUNS experience. Methods: This is a quantitative descriptive study using convenience sampling. 362 undergraduate nursing students in Hong Kong were recruited. The Safe Work Behavior relating to Patient Handling (SWB-PH) instrument was used to assess their compliance with safe work behaviors and their level of awareness of different workplace safety issues. Results: Most of the participants (n=250, 69.1%) were working as TUNS. However, students who worked as TUNS had significantly lower safe-work behavior compliance (mean SWB-PH score = 3.64±0.54) than those who did not work as TUNS (mean SWB-PH score = 4.21±0.54) (p<0.001). In particular, these students had higher awareness of seeking help and using assistive devices, but lower awareness of workplace safety issues and of proper work posture, than students without TUNS experience. The higher engagement in help-seeking behaviors among students with TUNS experience may be explained by their richer clinical experience, which facilitates seeking help from clinical staff whenever necessary. These experienced students were also more likely to bear the risk of occupational injuries and to work alone when no aid was available, which might be related to the busy working environment, heightened work pressures and the high expectations placed on TUNS. Ultimately, students who worked as TUNS might focus on completing the assigned tasks and gradually neglect occupational safety. Conclusion: The findings contribute to an understanding of the level of compliance with safe work behaviors among nursing students with different clinical experiences. The results may guide the modification of current safety protocols and the introduction of multiple clinical training courses to improve nursing students' engagement in safe work behaviors.
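
The group comparison reported above can be illustrated with a Welch two-sample t-test computed from the published summary statistics. The sketch below uses the quoted means and standard deviations, with the non-TUNS sample size inferred as 362 - 250 = 112; it is not the analysis actually performed in the study.

```python
# Minimal sketch (illustrative, not the study's analysis code): a Welch
# two-sample t-test from summary statistics, as quoted in the abstract
# (TUNS group: mean 3.64, SD 0.54, n = 250; non-TUNS group: mean 4.21,
# SD 0.54, n = 112, the latter inferred as 362 - 250).
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

t, df = welch_t(3.64, 0.54, 250, 4.21, 0.54, 112)
print(f"t = {t:.2f}, df = {df:.0f}")   # a large |t| is consistent with p < 0.001
```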

Keywords: occupational safety, safety compliance, safe-work behavior, nursing students

Procedia PDF Downloads 121
355 Development of 3D Printed Natural Fiber Reinforced Composite Scaffolds for Maxillofacial Reconstruction

Authors: Sri Sai Ramya Bojedla, Falguni Pati

Abstract:

Nature provides some of the best solutions to human problems, and one such incredible gift to regenerative medicine is silk. The literature reflects a long-standing appreciation of silk owing to its remarkable physical and biological properties; its bioactive nature, unique mechanical strength and processing flexibility encourage further exploration of its clinical application for the welfare of mankind. In this study, Antheraea mylitta and Bombyx mori silk fibroin microfibers are developed for the first time by two economical and straightforward steps, degumming and hydrolysis, and a bioactive composite is manufactured by mixing the silk fibroin microfibers at various concentrations with polycaprolactone (PCL), a biocompatible, aliphatic, semi-crystalline synthetic polymer. Reconstructive surgery in most parts of the body deals mainly with restoring function, but in facial reconstruction both aesthetics and function are of utmost importance, as they play a critical role in the psychological and social well-being of the patient. The main concern in developing adequate bone graft substitutes or scaffolds is the considerable variation in each patient's bone anatomy; additionally, the anatomical shape and size vary with the type of defect. The advent of additive manufacturing (AM), or 3D printing, in bone tissue engineering has helped overcome many of the restraints of conventional fabrication techniques. The acquired patient CT data are converted into a stereolithography (STL) file, which is then used by the 3D printer to create a 3D scaffold structure in an interconnected, layer-by-layer fashion. This study aims to address the limitations of currently available materials and fabrication technologies and to develop a customized biomaterial implant via 3D printing technology to reconstruct the complex form, function and aesthetics of the facial anatomy. The composite scaffolds underwent structural and mechanical characterization. Atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM) images showed uniform dispersion of the silk fibroin microfibers in the PCL matrix. With the addition of silk, the compressive strength of the hybrid scaffolds improves, and the scaffolds with Antheraea mylitta silk exhibit a higher compressive modulus than those with Bombyx mori silk. These results strongly recommend the utilization of PCL-silk scaffolds in bone regenerative applications. Successful completion of this research will provide a powerful addition to the maxillofacial reconstructive armamentarium.
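
To show how the compressive modulus values behind the comparison above are typically obtained, the sketch below fits the slope of the linear region of a compression stress-strain curve; the specimen dimensions and force-displacement readings are invented for illustration and do not come from the study.

```python
# Minimal sketch (illustrative, not the study's analysis): estimating the
# compressive modulus of a printed scaffold as the slope of the linear region
# of a stress-strain curve obtained from a compression test. Specimen
# dimensions and the force-displacement points are made-up values.
import numpy as np

AREA_MM2 = 10.0 * 10.0      # 10 mm x 10 mm cross-section (assumed)
HEIGHT_MM = 10.0            # specimen height (assumed)

# Hypothetical force (N) vs. displacement (mm) readings from the test frame.
force_n = np.array([0.0, 20.0, 41.0, 60.0, 81.0, 99.0])
disp_mm = np.array([0.0, 0.02, 0.04, 0.06, 0.08, 0.10])

stress_mpa = force_n / AREA_MM2          # N/mm^2 == MPa
strain = disp_mm / HEIGHT_MM             # dimensionless

# Least-squares slope over the (assumed) linear region gives the modulus.
modulus_mpa = np.polyfit(strain, stress_mpa, 1)[0]
print(f"compressive modulus ~ {modulus_mpa:.0f} MPa")
```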

Keywords: compressive modulus, 3d printing, maxillofacial reconstruction, natural fiber reinforced composites, silk fibroin microfibers

Procedia PDF Downloads 167
354 Refractory Cardiac Arrest: Do We Go beyond, Do We Increase the Organ Donation Pool or Both?

Authors: Ortega Ivan, De La Plaza Edurne

Abstract:

Background: Spain and other European countries have implemented Uncontrolled Donation after Cardiac Death (uDCD) programs. After 15 years of experience in Spain, many things have changed: recent evidence and technical breakthroughs achieved in resuscitation are relevant for uDCD programs and raise ethical concerns related to these protocols. Aim: To rethink current uDCD programs in the light of recent evidence on the therapeutic procedures available to victims of out-of-hospital cardiac arrest (OHCA), and to address the following question: what is the current standard of treatment owed to victims of OHCA before including them in an uDCD protocol? Materials and Methods: Review of the scientific and ethical literature related to both uDCD programs and innovative resuscitation techniques. Results: 1) The standard of treatment received and the chances of survival of victims of OHCA depend on whether they are classified as Non-Heart-Beating Patients (NHBP) or Non-Heart-Beating Donors (NHBD). 2) Recent studies suggest that NHBPs are likely to survive, with good quality of life, if one or more of the following interventions are performed while ongoing CPR, guided by the suspected or known cause of OHCA, is maintained: a) direct access to a cath lab available 24 hours a day (H24) and/or to extra-corporeal life support (ECLS); b) transfer in induced hypothermia from the Emergency Medical Service (EMS) to the ICU; c) thrombolysis treatment; d) mobile extra-corporeal membrane oxygenation (mini ECMO) instituted as a bridge to ICU ECLS devices. 3) Victims of OHCA who cannot benefit from any of these therapies should be considered NHBDs. Conclusion: Current uDCD protocols do not take into account recent improvements in resuscitation and need to be adapted. Operational criteria to distinguish NHBDs from NHBPs should seek a balance between the technical imperative (to do whatever is possible), considerations about expected survival with quality of life, and distributive justice (costs/benefits). Uncontrolled DCD protocols can be performed in a way that does not hamper the legitimate interests of patients, potential organ donors, their families, the organ recipients, and the health professionals involved in these processes. Families of NHBDs should receive information which conforms to the ethical principles of respect for autonomy and transparency.

Keywords: uncontrolled donation after cardiac death, resuscitation, refractory cardiac arrest, out-of-hospital cardiac arrest, ethics

Procedia PDF Downloads 216