Search results for: radiation devices
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3701

341 The Illegal Architecture of Apartheid in Palestine

Authors: Hala Barakat

Abstract:

Architecture plays a crucial role in the colonization and organization of space, as well as in the preservation of cultures and history. As a result of 70 years of occupation, Palestinian land, culture, and history are endangered today. The government of Israel has used architecture to squeeze Palestinians out and seize their land. The occupation has fragmented the West Bank and left visible scars on the landscape by creating obstacles, barriers, watchtowers, checkpoints, walls, apartheid roads, border devices, and illegal settlements to unjustly claim land from its indigenous population. This apartheid architecture has divided the Palestinian social and urban fabric into pieces, similarly to the Bantustans. The architectural techniques and methods used by the occupation are evidence of prejudice, and while the illegal settlements continue to be condemned by the United Nations, little is being done to officially end this apartheid. Illegal settlements range in scale from individual units to established cities and house more than 60,000 Israeli settlers who immigrated from all over Europe and the United States. Architecture by Israel is often directed towards expressing ideologies and serving as evidence of its political agenda. More than 78% of what was granted to Palestine after the drawing of the Green Line in 1948 is under Israeli occupation today. This project aims to map the illegal architecture as a criticism of governmental agendas in the West Bank and historic Palestinian land. The paper will also discuss the resistance to the newly developed plan for the last Arab village in Jerusalem, Lifta. The illegal architecture has isolated Palestinians from each other and installed obstacles to control their movement. The architecture of occupation follows no ethical or humane logic but an entirely political and administrative one, and the silenced architecture should not be left to tell the story alone. Architecture is not being used as a connecting device but rather as a way to implement political injustice and spatial oppression. By narrating stories of the architecture of occupation, we can highlight the spatial injustice of the complex apartheid infrastructure. The Israeli government has managed to co-opt architecture to serve as a divider between cultural groups, allowing unlawful and unethical architecture to define its culture and values. As architects and designers, the roles we play in the development of illegal settlements must align with the spatial ethics we practice. Most importantly, our profession is not performing architecturally when we design a house with a particular roof color to ensure it will not be mistaken for a Palestinian house and attacked accidentally.

Keywords: apartheid, illegal architecture, occupation, politics

Procedia PDF Downloads 144
340 Development of mHealth Information in Community Based on Geographical Information: A Case Study from Saraphi District, Chiang Mai, Thailand

Authors: Waraporn Boonchieng, Ekkarat Boonchieng, Wilawan Senaratana, Jaras Singkaew

Abstract:

Geographical information system (GIS) is a designated system widely used for collecting and analyzing geographical data. Since the introduction of ultra-mobile 'smart' devices, investigators, clinicians, and even the general public have had powerful new tools for collecting, uploading, and accessing information in the field. Epidemiology paired with GIS will increase the efficacy of preventive health care services. The objective of this study is to apply the GPS location services available on common mobile devices to district health systems, storing data on our private cloud system. The mobile application has been developed for use on iOS, Android, and web-based platforms. The system consists of two parts of district health information, including recorded resident data forms and individual health recorded data forms, which were developed and approved through opinion sharing and public hearing. The application's graphical user interface was developed using HTML5 and PHP with MySQL as the database management system (DBMS). The reporting module of the developed software displays data in a variety of views, from traditional tables to various types of high-resolution, layered graphics, incorporating map location information with street views from Google Maps. Multi-format exporting is also supported, utilizing standard formats such as PDF, PNG, JPG, and XLS. The data were collected in the database beginning in March 2013, by district health volunteers and district youth volunteers who had completed the application training program. District health information consisted of patients' household coordinates, individual health data, and social and economic information. This was combined with Google Street View data, collected in March 2014. The system collected data from 16,085 of the 23,701 households (67.87%) and 47,811 of the 79,855 people (59.87%) in Saraphi district, Chiang Mai Province.
The report generated from the system has been of major benefit directly to Saraphi District Hospital. Healthcare providers are able to use the basic health data to provide specific home health care services and to create health promotion activities according to the medical needs of people in the community.
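The coverage figures reported above can be reproduced directly from the household and population counts; a minimal sketch (the function name and rounding convention are illustrative, not from the study):

```python
def coverage_pct(collected, total):
    """Percentage of the district total captured by the system, rounded to 2 dp."""
    return round(100 * collected / total, 2)

# Figures reported for Saraphi district, Chiang Mai Province
households = coverage_pct(16085, 23701)  # households enrolled
people = coverage_pct(47811, 79855)      # individuals enrolled

print(households, people)  # 67.87 59.87
```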

Keywords: health, public health, GIS, geographic information system

Procedia PDF Downloads 320
339 Changing Behaviour in the Digital Era: A Concrete Use Case from the Domain of Health

Authors: Francesca Spagnoli, Shenja van der Graaf, Pieter Ballon

Abstract:

Humans do not behave rationally. We are emotional and easily influenced by others, as well as by our context. The study of human behaviour has become a central endeavour within many academic disciplines, including economics, sociology, and clinical and social psychology. Understanding what motivates humans and triggers them to perform certain activities, and what it takes to change their behaviour, is central for researchers and companies, as well as for policy makers seeking to implement efficient public policies. While numerous theoretical approaches have been developed for diverse domains such as health, retail, and the environment, the methodological models guiding the evaluation of such research have long since reached their limits. Within this context, digitisation, information and communication technologies (ICT), wearables, the Internet of Things (IoT) connecting networks of devices, and new possibilities to collect and analyse massive amounts of data have made it possible to study behaviour from a realistic perspective, as never before. Digital technologies make it possible to (1) capture data in real-life settings, (2) regain control over data by capturing the context of behaviour, and (3) analyse huge sets of information through continuous measurement. Within this complex context, this paper describes a new framework for initiating behavioural change, capitalising on digital developments in applied research projects and applicable to academia, enterprises, and policy makers alike. By applying this model, behavioural research can be conducted to address the issues of different domains, such as mobility, environment, health, or media. The Modular Behavioural Analysis Approach (MBAA) is described here and validated for the first time through a concrete use case within the domain of health.
The results gathered have shown that disclosing information about health in connection with the use of digital health apps can be a lever for changing behaviour, but it is only a first component requiring further follow-up actions. To this end, a clear definition of distinct 'behavioural profiles', each to be addressed by different typologies of intervention, is essential to effectively enable behavioural change. The refined version of the MBAA will focus strongly on defining a methodology for shaping these 'behavioural profiles' and related interventions, as well as on evaluating side-effects on the creation of new business models and sustainability plans.

Keywords: behavioural change, framework, health, nudging, sustainability

Procedia PDF Downloads 209
338 Development of Solar Poly House Tunnel Dryer (STD) for Medicinal Plants

Authors: N. C. Shahi, Anupama Singh, E. Kate

Abstract:

Drying is practiced to enhance storage life, minimize losses during storage, and reduce transportation costs of agricultural products. Drying processes range from open sun drying to industrial drying. In most developing countries, the use of fossil fuels for drying agricultural products has not been practically feasible due to costs unaffordable to the majority of farmers. On the other hand, traditional open sun drying, practiced on a large scale in the rural areas of developing countries, suffers from high product losses due to inadequate drying, fungal growth, encroachment of insects, birds and rodents, etc. To overcome these problems, a low-cost, intermediate-technology dryer needs to be developed for farmers. In mechanical dryers, heated air is the main driving force for the removal of moisture. The air is heated either electrically or by burning wood, coal, natural gas, etc. But all these common sources have finite supplies, with lifetimes estimated to range from about 15 years for natural gas to nearly 250 years for coal, and their use has undesirable side effects, so mankind must turn towards safe and reliable alternatives. Mechanical drying involves a higher cost of drying, while open sun drying deteriorates quality. The solar tunnel dryer is one of the promising options for drying various agricultural and agro-industrial products on a large scale. Its advantage is the relatively low cost of construction and operation. Although many solar dryers have been developed, there is still scope for modification. Therefore, an attempt was made to develop a solar tunnel dryer and test its performance using a highly perishable commodity, i.e., leafy vegetables (spinach). The effects of air velocity, loading density, and shade net on performance parameters, namely collector efficiency, drying efficiency, overall efficiency of the dryer, and specific heat energy consumption, were also studied.
Thus, the need for an intermediate-level technology was realized, and an effort was made to develop a small-scale Solar Tunnel Dryer. The dryer's main functional components were a base frame, a semi-cylindrical drying chamber, a solar collector and absorber, an air distribution system with chimney, an auxiliary heating system, and wheels for mobility. Drying of fenugreek was carried out to analyze the performance of the dryer. The Solar Tunnel Dryer temperature was maintained using the auxiliary heating system. The ambient temperature was in the range of 12-33°C. The relative humidity inside and outside the Solar Tunnel Dryer was in the range of 21-75% and 35-79%, respectively. The solar radiation was recorded in the range of 350-780 W/m² during the experimental period. Studies revealed that total drying time was in the range of 230 to 420 min. The drying time in the Solar Tunnel Dryer was reduced considerably, by 67%, as compared to sun drying. The collector efficiency, drying efficiency, overall efficiency, and specific heat consumption were found to be in the ranges of 38.71-50.06%, 15.53-24.72%, 4.25-13.34%, and 1897.54-3241.36 kJ/kg, respectively.
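As a rough cross-check on the reported 67% reduction, the open-sun drying times implied by the tunnel-dryer times can be back-calculated; this sketch is illustrative only and assumes the 67% figure applies uniformly across the 230-420 min range (it is not a formula from the paper):

```python
def implied_sun_drying_time(tunnel_minutes, reduction=0.67):
    """Back-calculate open-sun drying time from tunnel time and fractional reduction."""
    return tunnel_minutes / (1 - reduction)

# Tunnel drying times of 230-420 min imply open-sun times of roughly:
print(round(implied_sun_drying_time(230)), round(implied_sun_drying_time(420)))  # 697 1273
```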

Keywords: overall efficiency, solar tunnel dryer, specific heat consumption, sun drying

Procedia PDF Downloads 302
337 A Five-Year Experience of Intensity Modulated Radiotherapy in Nasopharyngeal Carcinomas in Tunisia

Authors: Omar Nouri, Wafa Mnejja, Fatma Dhouib, Syrine Zouari, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Leila Farhat, Nejla Fourati, Jamel Daoud

Abstract:

Purpose and Objective: The intensity-modulated radiotherapy (IMRT) technique, associated with induction chemotherapy (IC) and/or concomitant chemotherapy (CC), is currently the recommended treatment modality for nasopharyngeal carcinomas (NPC). The aim of this study was to evaluate the therapeutic results and the patterns of relapse with this treatment protocol. Material and methods: A retrospective monocentric study of 145 patients with NPC treated between June 2016 and July 2021. All patients received IMRT with a simultaneous integrated boost (SIB) of 33 daily fractions at a dose of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume, and 54 Gy for the low-risk volume. The high-risk volume dose was 66.5 Gy in children. Survival analysis was performed according to the Kaplan-Meier method, and the log-rank test was used to compare factors that may influence survival. Results: Median age was 48 years (11-80) with a sex ratio of 2.9. One hundred and twenty tumors (82.7%) were classified as stages III-IV according to the 2017 UICC TNM classification. Ten patients (6.9%) were metastatic at diagnosis. One hundred and thirty-five patients (93.1%) received IC, 104 of whom (77%) received a TPF-based regimen (taxanes, cisplatin, and 5-fluorouracil). One hundred and thirty-eight patients (95.2%) received CC, mostly cisplatin (134 cases, 97%). After a median follow-up of 50 months [22-82], 46 patients (31.7%) had a relapse: 12 (8.2%) experienced local and/or regional relapse after a median of 18 months [6-43], 29 (20%) experienced distant relapse after a median of 9 months [2-24], and 5 patients (3.4%) had both. Thirty-five patients (24.1%) died, including 5 (3.4%) from a cause other than their cancer. Three-year overall survival (OS), cancer-specific survival, disease-free survival, metastasis-free survival, and loco-regional relapse-free survival were 78.1%, 81.3%, 67.8%, 74.5%, and 88.1%, respectively. Anatomo-clinical factors predicting OS were age > 50 years (88.7 vs.
70.5%; p=0.004), diabetes history (81.2 vs. 66.7%; p=0.027), UICC N classification (100 vs. 95 vs. 77.5 vs. 68.8% for N0, N1, N2, and N3, respectively; p=0.008), the practice of a lymph node biopsy (84.2 vs. 57%; p=0.05), and UICC TNM stages III-IV (93.8 vs. 73.6% for stages I-II vs. III-IV, respectively; p=0.044). Therapeutic factors predicting OS were the number of CC courses (less than 4 courses: 65.8 vs. 86%; p=0.03; less than 5 courses: 71.5 vs. 89%; p=0.041), weight loss > 10% during treatment (84.1 vs. 60.9%; p=0.021), and a total cumulative cisplatin dose, including IC and CC, < 380 mg/m² (64.4 vs. 87.6%; p=0.003). Radiotherapy delay and total duration did not significantly affect OS. No grade 3-4 late side effects were noted in the 127 evaluable patients (87.6%). The most common toxicity was dry mouth, which was grade 2 in 47 cases (37%) and grade 1 in 55 cases (43.3%). Conclusion: IMRT for nasopharyngeal carcinoma achieved a high loco-regional control rate over the last five years. However, distant relapses remain frequent and condition the prognosis. We identified several anatomo-clinical and therapeutic prognostic factors. High-risk patients may therefore require a more aggressive therapeutic approach, such as radiotherapy dose escalation or the addition of adjuvant chemotherapy.
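The survival figures above come from Kaplan-Meier estimation; for readers unfamiliar with the method, a minimal product-limit estimator can be sketched as follows (the follow-up data here are hypothetical, for illustration only, and are not the study's patient data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times: follow-up times (e.g. months); events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival probability) at each distinct event time.
    """
    survival = 1.0
    curve = []
    for t in sorted(set(t for t, e in zip(times, events) if e == 1)):
        n_at_risk = sum(1 for ti in times if ti >= t)
        n_events = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        survival *= 1 - n_events / n_at_risk
        curve.append((t, survival))
    return curve

# Hypothetical follow-up of 6 patients: deaths at 12 and 30 months, rest censored
times = [12, 18, 24, 30, 36, 36]
events = [1, 0, 0, 1, 0, 0]
print(kaplan_meier(times, events))  # survival ≈ 0.83 after 12 months, ≈ 0.56 after 30
```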

Keywords: therapeutic results, prognostic factors, intensity-modulated radiotherapy, nasopharyngeal carcinoma

Procedia PDF Downloads 52
336 Monitoring Memories by Using Brain Imaging

Authors: Deniz Erçelen, Özlem Selcuk Bozkurt

Abstract:

The course of daily human life calls for memories and for remembering the time and place of certain events. Recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together. This process is called "neural synchronization." Sadly, the hippocampus often deteriorates with age. Proteins and hormones, which repair and protect cells in the brain, typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off mild but may evolve into serious medical conditions such as dementia and Alzheimer's disease. In their quest to fully comprehend how memories work, scientists have created many kinds of technology to examine the brain and its neural pathways. For instance, Magnetic Resonance Imaging (MRI) is used to collect detailed images of an individual's brain anatomy. To monitor and analyze brain function, a different version of this machine, Functional Magnetic Resonance Imaging (fMRI), is used. fMRI is a neuroimaging procedure conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity: neurons need more oxygen when they are active, and fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography (EEG) is also a significant way to monitor the human brain.
EEG is more versatile and cost-efficient than fMRI. An EEG measures the electrical activity generated by the cortical layers of the brain, allowing scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and track the contents of short-term memory almost precisely. Science has come a long way in monitoring memories using such devices, which have enabled increasingly detailed inspection of neurons and neural pathways.

Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons

Procedia PDF Downloads 71
335 Use of a Novel Intermittent Compression Shoe in Reducing Lower Limb Venous Stasis

Authors: Hansraj Riteesh Bookun, Cassandra Monique Hidajat

Abstract:

This pilot study investigated the efficacy of a newly designed shoe that acts as an intermittent pneumatic compression device to augment venous flow in the lower limb. The aim was to assess the degree to which a wearable intermittent compression device can increase venous flow in the popliteal vein. Background: Deep venous thrombosis and chronic venous insufficiency are relatively common problems with significant morbidity and mortality. While mechanical and chemical thromboprophylaxis measures are in place in hospital environments (in the form of TED stockings, intermittent pneumatic compression devices, analgesia, antiplatelet and anticoagulant agents), there are limited options in a community setting. Additionally, many individuals tolerate graduated compression stockings poorly due to the difficulty of putting them on, their constant tightness, and the increased discomfort associated with warm weather. These factors may hinder the management of their chronic venous insufficiency. Method: The device is lightweight, easy to wear, and comfortable, with a self-contained power source. It features a Bluetooth transmitter and can be controlled with a smartphone. Externally, it is almost indistinguishable from a normal shoe. During activation, two bladders are inflated: one overlying the metatarsal heads and the second at the pedal arch. The resulting cyclical increase in pressure squeezes blood into the deep venous system. This decreases periods of stasis and potentially reduces the risk of deep venous thrombosis. The shoe was fitted to 2 healthy participants, and the peak systolic velocity of flow in the popliteal vein was measured before and during intermittent compression phases. Assessments of total flow volume were also performed. All haemodynamic assessments were performed with ultrasound by a licensed sonographer. Results: A mean peak systolic velocity of 3.5 cm/s (standard deviation 1.3 cm/s) was obtained.
There was a threefold increase in mean peak systolic velocity and a fivefold increase in total flow volume. Conclusion: The device significantly augments venous flow in the leg. This may contribute to lowered thromboembolic risk during periods of prolonged travel or immobility. The device may also serve as an adjunct in the treatment of chronic venous insufficiency. The study will be replicated on a larger scale in a multi-centre trial.

Keywords: venous, intermittent compression, shoe, wearable device

Procedia PDF Downloads 179
334 Modulating Photoelectrochemical Water-Splitting Activity by Charge-Storage Capacity of Electrocatalysts

Authors: Yawen Dai, Ping Cheng, Jian Ru Gong

Abstract:

Photoelectrochemical (PEC) water splitting using semiconductors (SCs) provides a convenient way to convert sustainable but intermittent solar energy into clean hydrogen energy, and it has been regarded as one of the most promising technologies for addressing the energy crisis and environmental pollution of modern society. However, the record energy conversion efficiency of a PEC cell (~3%) is still far below the commercialization requirement (~10%). The sluggish kinetics of the oxygen evolution reaction (OER) half reaction on photoanodes is a significant limiting factor of PEC device efficiency, and electrocatalysts (ECs) are commonly deposited on SCs to accelerate hole injection for the OER. However, an active EC cannot guarantee enhanced PEC performance, since the newly created SC-EC interface complicates interfacial charge behavior. Herein, α-Fe2O3 photoanodes coated with Co3O4 and CoO ECs are taken as the model system to gain a fundamental understanding of the EC-dependent interfacial charge behavior. Intensity-modulated photocurrent spectroscopy and electrochemical impedance spectroscopy were used to investigate the competition between interfacial charge transfer and recombination, which was found to be dominated by the charge storage capacities of the ECs. The combined results indicate that both ECs can store holes and increase the hole density on the photoanode surface. This is a double-edged sword: it benefits the multi-hole OER but also aggravates SC-EC interfacial charge recombination due to Coulomb attraction, leading to a nonmonotonic variation of PEC performance with increasing surface hole density. Co3O4 has a low hole storage capacity, which brings limited interfacial charge recombination, so the increased surface holes can be efficiently utilized for the OER to generate an enhanced photocurrent.
In contrast, CoO has an excessively large hole storage capacity that causes severe interfacial charge recombination, which hinders hole transfer to the electrolyte for the OER. Therefore, the PEC performance of α-Fe2O3 is improved by Co3O4 but decreased by CoO, despite the similar electrocatalytic activity of the two ECs. First-principles calculations were conducted to further reveal how the charge storage capacity depends on the ECs' intrinsic properties, demonstrating that the larger hole storage capacity of CoO compared with Co3O4 is determined by their Co valence states and original Fermi levels. This study proposes a new strategy for manipulating interfacial charge behavior and the resulting PEC performance via the charge storage capacity of ECs, providing insightful guidance for interface design in PEC devices.

Keywords: charge storage capacity, electrocatalyst, interfacial charge behavior, photoelectrochemistry, water-splitting

Procedia PDF Downloads 127
333 Language in Court: Ideology, Power and Cognition

Authors: Mehdi Damaliamiri

Abstract:

Undoubtedly, the power of language is hardly a new topic; indeed, the persuasive power of language, accompanied by ideology, has long been recognized in different aspects of life. The two-and-a-half-thousand-year-old Bisitun inscriptions in Iran, proclaiming the victories of the Persian king Darius, are considered by some historians to be an early example of the use of propaganda. Added to this, the modern age is the true cradle of fully-fledged ideologies and the ongoing process of centrifugal ideologization. The most visible work on ideology today within the field of linguistics is Critical Discourse Analysis (CDA). The focus of CDA is on "uncovering injustice, inequality, taking sides with the powerless and suppressed" and making "mechanisms of manipulation, discrimination, demagogy, and propaganda explicit and transparent." One possible way of relating language to ideology is to propose that ideology and language are inextricably intertwined. From this perspective, language is always ideological, and ideology depends on language. All language use involves ideology, and so ideology is ubiquitous, in our everyday encounters as much as in the struggle for power within and between nation-states and social statuses. At the same time, ideology requires language. Its key characteristics, its power and pervasiveness, its mechanisms for continuity and for change, all come out of the inner organization of language. The two phenomena are homologous: they share the same evolutionary trajectory. To get a more robust portrait of power and ideology, we need to examine their potential place in linguistic structure and consider how such structures pattern in terms of the functional elements that organize meanings in the clause. This rests on the belief, now immensely popular, that all grammatical, including syntactic, knowledge is stored mentally as constructions.
When the structure of the clause is taken into account, the power and ideology have a preference for Complement over Subject and Adjunct. The subject is a central interpersonal element in discourse: it is one of two elements that form the central interactive nub of a proposition. Conceptually, there are countless ways of construing a given event and linguistically, a variety of grammatical devices that are usually available as alternate means of coding a given conception, such as political crime and corruption. In the theory of construal, then, which, like transitivity in Halliday, makes options available, Cognitive Linguistics can offer a cognitive account of ideology in language, where ideology is made possible by the choices a language allows for representing the same material situation in different ways. The possibility of promoting alternative construals of the same reality means that any particular choice in representation is always ideologically constrained or motivated and indicates the perspective and interests of the text-producer.

Keywords: power, ideology, court, discourse

Procedia PDF Downloads 150
332 Combustion Characteristics of Ionized Fuels for Battery System Safety

Authors: Hyeuk Ju Ko, Eui Ju Lee

Abstract:

Many electronic devices today are powered by rechargeable batteries such as lithium-ion cells, but occasionally the batteries undergo thermal runaway and cause fire, explosion, and other hazards. If a battery fire occurs in an electronic device in a vehicle or aircraft cabin, it is important to quickly extinguish the fire and cool the batteries to minimize safety risks. Many researchers have attempted to minimize these risks, but studies on successful extinguishment are limited, because most rechargeable batteries operate in an ionic state during the charge and discharge of electricity, and the reactions of their electrolytes differ greatly from normal combustion. Here, we focus on the effect of ions on reaction stability and pollutant emissions during the combustion process. Understanding ionized-fuel combustion is also important for high-efficiency, environment-friendly combustion technologies, which tend to operate at extreme conditions and hence suffer unintended flame instabilities such as extinction and oscillation. The use of electromagnetic energy and non-equilibrium plasma is one way to address these problems, but its application has been limited by the lack of understanding of excited-ion effects in the combustion process. Therefore, understanding the role of ions during combustion promises benefits for energy safety, including battery safety. In this study, the effects of an ionized fuel on flame stability and pollutant emissions were experimentally investigated in hydrocarbon jet diffusion flames. The burner used in this experiment consisted of a 7.5 mm diameter tube for fuel, and the gaseous fuels were ionized with an ionizer (SUNJE, SPN-11). Methane (99.9% purity) and propane (commercial grade) were used as fuels, and open ambient air was used as the oxidizer.
The performance of the ionizer was evaluated first: the ion densities of both propane and methane increased linearly with volume flow rate, with the ion density of propane slightly higher than that of methane. The results show that overall flame stability and shape, such as flame length, exhibit no significant difference even at higher ion concentrations. However, fuel ionization affects pollutant emissions such as NOx and soot. NOx and CO emissions measured in the post-flame region decreased with increasing fuel ionization, especially at high fuel velocity, i.e., high ion density. TGA analysis and soot morphology by TEM indicate that fuel ionization promotes soot maturation.

Keywords: battery fires, ionization, jet flames, stability, NOx and soot

Procedia PDF Downloads 172
331 Population Diversity of Dalmatian Pyrethrum Based on Pyrethrin Content and Composition

Authors: Filip Varga, Nina Jeran, Martina Biosic, Zlatko Satovic, Martina Grdisa

Abstract:

Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.), a species endemic to the eastern Adriatic coastline, is the source of the natural insecticide pyrethrin. Pyrethrin is a mixture of six compounds (pyrethrins I and II, cinerins I and II, jasmolins I and II) that exhibits high insecticidal activity with no detrimental effects on the environment. A recently optimized matrix solid-phase dispersion (MSPD) method, using florisil as the sorbent, acetone-ethyl acetate (1:1, v/v) as the elution solvent, and anhydrous sodium sulfate as the drying agent, was used to extract the pyrethrins from 10 wild populations (20 individuals per population) distributed along the Croatian coast. All six components of the extracts were determined qualitatively and quantitatively by high-performance liquid chromatography with a diode array detector (HPLC-DAD). Pearson's correlation coefficients were calculated between pyrethrin compounds, and differences between populations were tested using analysis of variance. Additionally, the correlation of each pyrethrin component with spatio-ecological variables (bioclimate, soil properties, elevation, solar radiation, and distance from the coastline) was calculated. Total pyrethrin content ranged from 0.10% to 1.35% of dry flower weight, averaging 0.58% across all individuals. Analysis of variance revealed significant differences between populations in all six pyrethrin compounds and in total pyrethrin content. On average, the lowest total pyrethrin content was found in the population from the Pelješac peninsula (0.22% of dry flower weight), in which a total pyrethrin content lower than 0.18% was detected in 55% of individuals. The highest average total pyrethrin content was observed in the population from the island of Zlarin (0.87% of dry flower weight), in which a total pyrethrin content higher than 1.00% was recorded in only 30% of individuals.
The pyrethrin I/pyrethrin II ratio, a measure of extract quality, ranged from 0.21 (population from the island of Čiovo) to 5.88 (population from the island of Mali Lošinj), with an average of 1.77 across all individuals. By far the lowest quality of extracts was found in the population from Mt. Biokovo (pyrethrin I/II ratio lower than 0.72 in 40% of individuals) due to the high pyrethrin II content typical of this population. Pearson’s correlation analysis revealed a highly significant positive correlation between pyrethrin I content and total pyrethrin content and a strong negative correlation between pyrethrin I and pyrethrin II. The results of this research clearly indicate high intra- and interpopulation diversity of Dalmatian pyrethrum with regard to pyrethrin content and composition. The information obtained has potential use in plant genetic resources conservation and biodiversity monitoring. Possibly the largest potential lies in designing breeding programs aimed at increasing pyrethrin content in commercial breeding lines and in the reintroduction of the crop into Croatian agriculture. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project ‘Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential’ (PyrDiv) (IP-06-2016-9034).
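The quality metrics above reduce to simple arithmetic on per-individual measurements. As a minimal sketch (the sample values below are invented for illustration, not the study's data), the pyrethrin I/II quality ratio and Pearson correlation can be computed as:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def quality_ratio(pyrethrin_1, pyrethrin_2):
    """Pyrethrin I / pyrethrin II ratio used as the extract-quality measure."""
    return pyrethrin_1 / pyrethrin_2

# Hypothetical per-individual contents (% of dry flower weight)
p1 = [0.30, 0.45, 0.60, 0.20]
p2 = [0.25, 0.20, 0.15, 0.30]
total = [a + b for a, b in zip(p1, p2)]
print(pearson_r(p1, total))  # pyrethrin I vs. total content
print(pearson_r(p1, p2))     # pyrethrin I vs. pyrethrin II
```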

Keywords: Dalmatian pyrethrum, HPLC, MSPD, pyrethrin

Procedia PDF Downloads 126
330 Corpora in Secondary Schools Training Courses for English as a Foreign Language Teachers

Authors: Francesca Perri

Abstract:

This paper describes a proposal for a teacher training course focused on the introduction of corpora into the EFL (English as a foreign language) didactics of some Italian secondary schools. The training course is conceived as part of a TEDD participant’s five-month internship. TEDD (Technologies for Education: diversity and devices) is an advanced course held by the Department of Engineering and Information Technology at the University of Trento, Italy. Its main aim is to train a selected, heterogeneous group of graduates to engage with the complex interdependence between education and technology in modern society. The educational approach draws on the coexistence of various theories, such as socio-constructivism, constructionism, project-based learning, and connectivism. The TEDD educational model is the main reference for the design of a training course for EFL teachers, drawing on the digitalization of didactics and the creation of interactive learning materials for L2 intermediate students. The training course lasts ten hours, organized into five sessions. In the first part (first and second sessions), a series of guided and semi-guided activities leads participants to familiarize themselves with corpora through the use of a digital tools kit. Then, during the second part, participants are specifically involved in the realization of a ML (Mistakes Laboratory), where they create, develop, and share digital activities based on corpora according to their teaching goals, supported by the digital facilitator. The training course takes place in an ICT laboratory where the teachers work either individually or in pairs at a computer with a Wi-Fi connection, while the digital facilitator shares inputs, materials, and digital assistance simultaneously on a whiteboard and on a digital platform where participants interact and work together both synchronously and asynchronously.
The adoption of good ICT practices is a fundamental step in promoting the introduction and use of corpus linguistics in EFL teaching and learning. Dealing with corpora not only promotes L2 learners’ critical thinking and purposeful searching, as opposed to aimless browsing for ready-made translations or language usage samples, but also helps them become confident with digital tools and activities. The paper will explain the reasons, limits, and resources of the pedagogical approach adopted to engage EFL teachers with the use of corpora in their didactics through the promotion of digital practices.
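As an illustration of the kind of hands-on corpus activity such a course might include (the function name and toy corpus below are invented for this sketch, not taken from the paper), a minimal keyword-in-context (KWIC) concordancer fits in a few lines:

```python
def kwic(corpus_tokens, keyword, window=3):
    """Return keyword-in-context lines, the basic corpus-consultation view."""
    lines = []
    for i, tok in enumerate(corpus_tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(corpus_tokens[max(0, i - window):i])
            right = " ".join(corpus_tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}".strip())
    return lines

tokens = "The cat sat on the mat near the door".split()
for line in kwic(tokens, "the", window=2):
    print(line)
```

Learners reading such concordance lines see a word in many authentic contexts at once, which is exactly the habit the course aims to build in place of one-off lookups.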

Keywords: digital didactics, education, language learning, teacher training

Procedia PDF Downloads 141
329 Jurisdictional Federalism and Formal Federalism: Levels of Political Centralization on American and Brazilian Models

Authors: Henrique Rangel, Alexandre Fadel, Igor De Lazari, Bianca Neri, Carlos Bolonha

Abstract:

This paper promotes a comparative analysis of the American and Brazilian models of federalism, taking their levels of political centralization as the main criterion. The central problem faced herein is the Brazilian tendency toward a unitarian regime. Despite the hegemony of the federative form after 1989, Brazil has a historical pattern of political centralization that persists under the 1988 constitutional regime. Meanwhile, the United States framed a federalism in which the states retain significant authority. The hypothesis holds that the number of alternative criteria of federalization – which can generate political centralization – and the way they are upheld on judicial review are crucial to understanding the levels of political centralization achieved in each model. To test this hypothesis, the research follows a methodology temporally delimited to the 1994-2014 period. Three paradigmatic precedents of the U.S. Supreme Court were selected: United States vs. Morrison (2000), on gender-motivated violence; Gonzales vs. Raich (2005), on the medical use of marijuana; and United States vs. Lopez (1995), on firearm possession in school zones. These cases, among the most relevant on federalism in the Supreme Court's recent activity, indicate a determinant parameter of deliberation: the commerce clause. After observing the criteria used to permit or prohibit political centralization in America, the Brazilian normative context is presented. In this sense, it is possible to identify the legal treatment these controversies could receive in that country. The decision-making reveals deliberative parameters that characterize each federative model. In the end, the precedents of the Rehnquist Court promoted a broad revival of the federalism debate, establishing the commerce clause as a secure criterion for upholding or rejecting the necessity of centralization – even with decisions considered conservative.
By contrast, Brazilian federalism resolves such controversies in a formalist fashion, through numerous and comprehensive – sometimes casuistic – normative devices oriented toward intense centralization. The aim of this work is to show how the jurisdictional federalism found in the United States can preserve a consistent model with robustly autonomous states, while Brazil favors normative mechanisms that start from centralization.

Keywords: constitutional design, federalism, U.S. Supreme Court, legislative authority

Procedia PDF Downloads 505
328 A Method and System for Secure Authentication Using One Time QR Code

Authors: Divyans Mahansaria

Abstract:

User authentication is an important security measure for protecting confidential data and systems. However, vulnerabilities in the process of authenticating into a system have significantly increased. Thus, necessary mechanisms must be deployed during the authentication process to safeguard the user from attack. The proposed solution implements a novel authentication mechanism to counter various forms of security breach, including phishing, Trojan horses, replay, key logging, asterisk logging, shoulder surfing, brute force search, and others. A QR code (Quick Response code) is a type of matrix barcode, or two-dimensional barcode, that can store URLs, text, images, and other information. In the proposed solution, for each new authentication request, a QR code is dynamically generated and presented to the user. A piece of generic information is mapped to a plurality of elements, and this mapping is stored within the QR code. The mapping between the generic information and the plurality of elements is randomized at each new login; thus, the QR code generated for each authentication request is for one-time use only. In order to authenticate into the system, the user needs to decode the QR code using any QR code decoding software, which must be installed on a handheld mobile device such as a smartphone or personal digital assistant (PDA). On decoding the QR code, the user is presented with the mapping between the generic piece of information and the plurality of elements, from which the user derives cipher secret information corresponding to his/her actual password. In place of the actual password, the user then uses this cipher secret information to authenticate into the system. The authentication terminal receives the cipher secret information and uses a validation engine to decipher it. If the entered secret information is correct, the user is granted access to the system.
A usability study was carried out on the proposed solution, and the new authentication mechanism was found to be easy to learn and adopt. A mathematical analysis of the time needed to carry out a brute force attack on the proposed solution showed that it is almost completely resistant to such attacks. Today’s standard methods of authentication are subject to a wide variety of software, hardware, and human attacks. The proposed scheme can be very useful in countering the various types of authentication-related attacks, especially in a networked computer environment where the use of a username and password for authentication is common.
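The core of the scheme is the per-login randomized mapping. A minimal sketch of the idea follows; the function names, alphabet, and seeding are illustrative assumptions rather than the paper's implementation, and a real system would also need cryptographically secure randomness and a secure channel:

```python
import random
import string

ALPHABET = string.ascii_lowercase + string.digits

def new_challenge(seed=None):
    """Generate a fresh one-time substitution mapping; in the proposed
    scheme this mapping would be encoded into the QR code."""
    rng = random.Random(seed)
    shuffled = list(ALPHABET)
    rng.shuffle(shuffled)
    return dict(zip(ALPHABET, shuffled))

def encipher(password, mapping):
    """What the user does after decoding the QR code: derive the cipher
    secret from the actual password via the one-time mapping."""
    return "".join(mapping[c] for c in password)

def verify(cipher_secret, mapping, stored_password):
    """Validation engine: invert the mapping and compare to the stored password."""
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse.get(c, "?") for c in cipher_secret) == stored_password
```

Because a fresh mapping is issued per login, a cipher secret observed by a shoulder surfer or key logger is useless for replay against the next challenge.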

Keywords: authentication, QR code, cipher / decipher text, one time password, secret information

Procedia PDF Downloads 260
327 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection systems such as the AFCI (arc fault circuit interrupter) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices do not allow series arc faults to be located on the line while it is in operation. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on an analysis of the V-I characteristics of the arc and consists essentially of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In a second step, a fault map trace is created using signature coefficients obtained from the Kirchhoff equations, which allow a virtual decoupling of the line’s mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated from the discrete Fourier transform of the currents and voltages and from the fault distance value; these parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure is employed to calculate signature coefficients, but this time considering hypothetical distances at which the fault might appear; in this step, the fault distance is unknown.
Iterating the Kirchhoff equations over stepped variations of the hypothetical fault distance yields a curve with a linear trend. Finally, the fault location is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents obtained from simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. Based on the complete simulation, the performance of the method and the perspectives for future work are presented.
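The final localization step amounts to fitting the two fault-map traces and intersecting their linear trends. A small self-contained sketch of that geometry (the helper names and sample points are illustrative, not values from the paper):

```python
def linear_fit(xs, ys):
    """Least-squares slope and intercept of a fault-map trace."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def intersect(line_a, line_b):
    """Intersection point of two lines given as (slope, intercept) pairs."""
    (m1, b1), (m2, b2) = line_a, line_b
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Illustrative traces: signature coefficient vs. distance along a 49 m line
trace_known = linear_fit([5, 15, 25, 35], [0.9, 0.7, 0.5, 0.3])
trace_hypo = linear_fit([5, 15, 25, 35], [0.1, 0.3, 0.5, 0.7])
distance, _ = intersect(trace_known, trace_hypo)
print(f"estimated fault distance: {distance:.1f} m")
```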

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 126
326 Modification of Carbon-Based Gas Sensors for Boosting Selectivity

Authors: D. Zhao, Y. Wang, G. Chen

Abstract:

Gas sensors that utilize carbonaceous materials as the sensing medium offer numerous advantages, making them the preferred choice for constructing chemical sensors over those using other sensing materials. Carbonaceous materials, particularly nano-sized ones like carbon nanotubes (CNTs), provide these sensors with high sensitivity. Additionally, carbon-based sensors possess other advantageous properties that enhance their performance, including high stability, low operating power consumption, and cost-effective construction. These properties make carbon-based sensors ideal for a wide range of applications, especially in miniaturized devices created through MEMS or NEMS technologies. To capitalize on these properties, a group of chemoresistance-type carbon-based gas sensors was developed and tested against various volatile organic compounds (VOCs) and volatile inorganic compounds (VICs). The results demonstrated exceptional sensitivity to both VOCs and VICs, along with long-term stability. However, this broad sensitivity also led to poor selectivity towards specific gases. This project aims to address the selectivity issue by modifying the carbon-based sensing materials to enhance each sensor's specificity to individual gases. Multiple groups of sensors were manufactured and modified using proprietary techniques. To assess their performance, we conducted experiments on representative sensors from each group to detect a range of VOCs and VICs. The VOCs tested included acetone, dimethyl ether, ethanol, formaldehyde, methane, and propane. The VICs comprised carbon monoxide (CO), carbon dioxide (CO2), hydrogen (H2), nitric oxide (NO), and nitrogen dioxide (NO2). The concentrations of the sample gases were all set at 50 parts per million (ppm), and nitrogen (N2) was used as the carrier gas throughout the experiments. The results of the gas sensing experiments are as follows.
In Group 1, the sensors exhibited selectivity toward CO2, acetone, NO, and NO2, with NO2 showing the highest response. Group 2 primarily responded to NO2. Group 3 displayed responses to the nitrogen oxides, i.e., both NO and NO2, with the sensitivity to NO2 slightly surpassing that to NO. Group 4 demonstrated the highest sensitivity among all the groups toward NO and NO2, again with NO2 more sensitive than NO. In conclusion, by incorporating several modifications using carbon nanotubes (CNTs), sensors can be designed to respond to NOx gases with high selectivity and without interference from other gases. Because the response levels to NO and NO2 differ from group to group, the individual concentrations of NO and NO2 can be deduced.
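Deducing the two individual concentrations from two sensor groups with different NO/NO2 sensitivities is, in the simplest linear-response case, a 2x2 system. A sketch under that assumption (the sensitivity numbers are invented for illustration):

```python
def deduce_no_no2(r3, r4, s3_no, s3_no2, s4_no, s4_no2):
    """Solve r3 = s3_no*c_no + s3_no2*c_no2 and
             r4 = s4_no*c_no + s4_no2*c_no2 by Cramer's rule."""
    det = s3_no * s4_no2 - s3_no2 * s4_no
    c_no = (r3 * s4_no2 - s3_no2 * r4) / det
    c_no2 = (s3_no * r4 - r3 * s4_no) / det
    return c_no, c_no2

# Hypothetical sensitivities (response per ppm) for Groups 3 and 4
s3_no, s3_no2 = 0.8, 1.2   # Group 3: NO2 slightly above NO
s4_no, s4_no2 = 1.5, 2.0   # Group 4: highest overall sensitivity
r3 = s3_no * 50 + s3_no2 * 30  # simulated responses for 50 ppm NO, 30 ppm NO2
r4 = s4_no * 50 + s4_no2 * 30
print(deduce_no_no2(r3, r4, s3_no, s3_no2, s4_no, s4_no2))
```

The approach only works when the two groups' sensitivity vectors are not proportional, i.e., the determinant is well away from zero.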

Keywords: gas sensors, carbon, CNT, MEMS/NEMS, VOC, VIC, high selectivity, modification of sensing materials

Procedia PDF Downloads 109
325 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland

Authors: A. Sgobba, C. Meskell

Abstract:

The feasibility of on-site electricity production from solar and wind, and the resulting load management, is assessed for a specific manufacturing plant in Ireland. The industry sector accounts, directly and indirectly, for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas since they require open spaces for production machinery, parking facilities for employees, appropriate routes for supply and delivery, and special connections to the national grid, and they have other environmental impacts. Since they have larger spaces compared to commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of energy system integration with low power density technologies, such as solar and wind, for on-site electricity generation. The available open space surrounding the analysed manufacturing plant can be efficiently used to produce a discrete quantity of energy, instantaneously and locally consumed; therefore, transmission and distribution losses can be reduced. The use of storage is not required due to the high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported in the bills. These data are often not recorded and available to third parties, since manufacturing companies usually keep track only of the overall energy expenditure. The solar potential is modelled for a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period with a one-hour time step at a height of 10 m.
Since the hub of a typical wind turbine reaches a higher altitude, complementary data at 50 m for a different location have been compared, and a model for estimating the wind speed at the required height in the right location is defined. The Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered every hour by on-site generation (Level of Autonomy, LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied, EENS) are calculated. The economic viability of the project is assessed through the Net Present Value (NPV), and the influence of the main technical and economic parameters on the NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity on their own, integration with a cogeneration technology is studied. Finally, the benefit to energy system integration of combining wind, solar, and a cogeneration technology is evaluated and discussed.
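The hub-height correction and the Weibull treatment can be sketched in a few lines. This is a generic textbook approach under a power-law shear assumption, not necessarily the authors' exact model, and the numbers are illustrative:

```python
import math

def shear_exponent(v1, h1, v2, h2):
    """Power-law shear exponent fitted from speeds measured at two heights."""
    return math.log(v2 / v1) / math.log(h2 / h1)

def speed_at_height(v_ref, h_ref, h_hub, alpha):
    """Extrapolate a reference wind speed to hub height with the power law."""
    return v_ref * (h_hub / h_ref) ** alpha

def weibull_mean_speed(k, c):
    """Mean wind speed of a Weibull(k, c) distribution: c * Gamma(1 + 1/k)."""
    return c * math.gamma(1 + 1 / k)

# Illustrative: 5 m/s at 10 m and 6 m/s at 50 m fix the shear exponent,
# which then extrapolates the 10 m measurement to an 80 m hub.
alpha = shear_exponent(5.0, 10.0, 6.0, 50.0)
print(speed_at_height(5.0, 10.0, 80.0, alpha))  # estimated hub-height speed
print(weibull_mean_speed(2.0, 8.0))             # Rayleigh-like case, c = 8 m/s
```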

Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources

Procedia PDF Downloads 120
324 Ultra-Wideband Antennas for Ultra-Wideband Communication and Sensing Systems

Authors: Meng Miao, Jeongwoo Han, Cam Nguyen

Abstract:

Ultra-wideband (UWB) time-domain impulse communication and radar systems use ultra-short pulses in the sub-nanosecond regime, instead of continuous sinusoidal waves, to transmit information. The pulse directly generates a very wide-band instantaneous signal, with various duty cycles depending on the specific usage. In UWB systems, the total transmitted power is spread over an extremely wide range of frequencies, so the power spectral density is extremely low. This results in extremely small interference to other radio signals while maintaining excellent immunity to interference from those signals. UWB devices can therefore work within frequencies already allocated to other radio services, helping to maximize this dwindling resource. The impulse UWB technique is thus attractive for realizing high-data-rate short-range communications, ground penetrating radar (GPR), and military radar with relatively low emission power levels. UWB antennas are the key element dictating the transmitted and received pulse shape and amplitude in both the time and frequency domains, and they should have a good impulse response with minimal distortion. To facilitate integration with transmitters and receivers employing microwave integrated circuits, UWB antennas enabling direct integration are preferred. We present the development of two UWB antennas, operating from 3.1 to 10.6 GHz and from 0.3 to 6 GHz, for UWB systems that provide direct integration with microwave integrated circuits. The operation of these antennas is based on the principle of wave propagation along a non-uniform transmission line. Time-domain EM simulation is conducted to optimize the antenna structures to minimize the reflections occurring at the open-end transition. Calculated and measured results for these UWB antennas are presented in both the frequency and time domains. The antennas have good time-domain responses.
They can transmit and receive pulses effectively with minimal distortion, little ringing, and small reflections, clearly demonstrating the fidelity of the antennas in reproducing the waveform of UWB signals, which is critical for UWB sensors and communication systems. Good performance, together with seamless microwave integrated-circuit integration, makes these antennas good candidates not only for UWB applications but also for integration with printed-circuit UWB transmitters and receivers.
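A common time-domain model for such sub-nanosecond pulses (not necessarily the waveform used by the authors) is the Gaussian monocycle, the first derivative of a Gaussian; a short sketch:

```python
import math

def gaussian_monocycle(t, tau):
    """Normalized Gaussian monocycle, -(t/tau)*exp(-t^2/(2*tau^2)):
    first derivative of a Gaussian with width parameter tau (seconds),
    a standard UWB impulse model with zero DC content."""
    return -(t / tau) * math.exp(-0.5 * (t / tau) ** 2)

# Sample the pulse over +/- 2 ns for tau = 0.25 ns (sub-ns regime)
tau = 0.25e-9
samples = [gaussian_monocycle(n * 0.05e-9, tau) for n in range(-40, 41)]
peak = max(abs(s) for s in samples)
print(f"peak amplitude {peak:.3f} at |t| = tau")  # extrema lie at t = +/- tau
```

Because the waveform is odd and has zero mean, its spectrum has no DC component, which is one reason derivative-of-Gaussian pulses are popular for antennas that cannot radiate DC.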

Keywords: antennas, ultra-wideband, UWB, UWB communication systems, UWB radar systems

Procedia PDF Downloads 226
323 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to identify optimal adult age metrics, and the rib is considered a potential age marker. The traditional approach is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Such results still do not meet the high-level requirements of practice, and a limitation of feature design and manual extraction methods is loss of information, since the features are not necessarily designed to capture age-relevant information. Deep learning (DL) has recently garnered much interest in image learning and computer vision. It enables learning important features without a prior bias or hypothesis and could therefore support AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to a manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before being fed into the networks, all images were augmented with random rotations and vertical flips, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary performance parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method followed a prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies; both CT data and VR images were used.
The radiodensity of the first costal cartilage was recorded from the CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored on the VR images using an eight-stage staging technique. Following the prior study, the optimal models were a decision tree regression model in males and a stepwise multiple linear regression equation in females; predicted ages for the test set were calculated separately by sex using the respective models. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the manual method's MAEs of 8.90 and 6.42, respectively. These results show that DL with the ResNeXt model outperformed the manual method in AAE based on CT reconstructions of the costal cartilage, and the developed system may be a supportive tool for AAE.
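The primary metric is simple to state precisely. A minimal sketch of MAE as used to compare predicted against chronological ages (the values are toy numbers, not the study's data):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE: average absolute difference between true and predicted ages."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example: chronological vs. predicted ages (years)
true_ages = [25.0, 42.0, 58.0, 67.0]
pred_ages = [28.0, 40.0, 61.0, 63.0]
print(mean_absolute_error(true_ages, pred_ages))  # → 3.0
```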

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 60
322 TiO₂ Nanotube Array Based Selective Vapor Sensors for Breath Analysis

Authors: Arnab Hazra

Abstract:

Breath analysis is a quick, noninvasive, and inexpensive technique for disease diagnosis that can be used on people of all ages without any risk. Only a limited number of volatile organic compounds (VOCs) can be associated with the occurrence of specific diseases. These VOCs can be considered disease markers or breath markers. Selective detection of a breath marker at a specific concentration in exhaled human breath is required to detect a particular disease. For example, acetone (C₃H₆O), ethanol (C₂H₅OH), and ethane (C₂H₆) are breath markers, and abnormal concentrations of these VOCs in exhaled human breath indicate diabetes mellitus, renal failure, and breast cancer, respectively. Nanomaterial-based vapor sensors are inexpensive, small, and potential candidates for the detection of breath markers. In practical measurement, selectivity is the most crucial issue, where a trace of the breath marker must be identified accurately in the presence of several interfering vapors and gases. The current article concerns a novel technique for selective, low-ppb-level detection of breath markers at very low temperature based on TiO₂ nanotube array vapor sensor devices. A highly ordered and oriented TiO₂ nanotube array was synthesized by electrochemical anodization of high-purity titanium (Ti) foil. 0.5 wt% NH₄F in ethylene glycol with 10 vol% H₂O was used as the electrolyte, and anodization was carried out for 90 min at a 40 V DC potential. An Au/TiO₂ nanotube/Ti sandwich-type sensor device was fabricated for the selective detection of VOCs in the low concentration range. Initially, the sensor was characterized by recording its resistive and capacitive changes over the valid concentration range for each individual breath marker (or organic vapor). The sensor resistance decreased and the sensor capacitance increased with increasing vapor concentration.
The ratio of the resistive slope (mR) to the capacitive slope (mC) then provides a concentration-independent constant (M) for a particular vapor. For an unknown vapor, the ratio of the resistive change to the capacitive change at any concentration is matched to the previously calculated constant (M). After successful identification of the target vapor, its concentration is calculated from the linear behavior of resistance as a function of concentration. The current technique is suitable for detecting a particular vapor in a mixture of other interfering vapors.
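The identification logic described above can be sketched directly. Both the calibration library and the tolerance below are invented for illustration; a real device would use measured slopes:

```python
def identify_vapor(delta_r, delta_c, library, tol=0.05):
    """Identify a vapor by matching the measured slope ratio delta_r/delta_c
    against each vapor's concentration-independent constant M = mR/mC,
    then invert the linear resistance calibration for the concentration."""
    measured_m = delta_r / delta_c
    for vapor, (m_const, slope_r) in library.items():
        if abs(measured_m - m_const) / m_const < tol:
            concentration = delta_r / slope_r  # linear R-vs-concentration fit
            return vapor, concentration
    return None, None

# Hypothetical calibration: vapor -> (M constant, resistive slope per ppb)
library = {"acetone": (2.0, 0.5), "ethanol": (3.0, 0.4)}
print(identify_vapor(5.0, 2.5, library))  # → ('acetone', 10.0)
```

The crucial property is that M is independent of concentration, so identification works before the concentration itself is known.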

Keywords: breath marker, vapor sensors, selective detection, TiO₂ nanotube array

Procedia PDF Downloads 148
321 Establishing the Legality of Terraforming under the Outer Space Treaty

Authors: Bholenath

Abstract:

Ever since Elon Musk revealed his plan to terraform Mars on national television in 2015, the debate regarding the legality of such an activity under the current Outer Space Treaty regime has been gaining momentum. To terraform means to alter or transform the atmosphere of another planet so that it has the characteristics of landscapes on Earth. Musk’s plan is to alter the entire environment of Mars so as to make it habitable for humans. He has long been an advocate of colonizing Mars and, in order to make humans an interplanetary species, he wants to detonate thermonuclear devices over the poles of Mars. To a layperson, this seems a fascinating endeavor, but for space lawyers, it poses new and fascinating legal questions. Some of the questions which arise are: Is the use of nuclear weapons on celestial bodies permitted under the Outer Space Treaty? Would such an alteration of the celestial environment fall within the scope of the term 'harmful contamination' under Article IX of the treaty? Can an activity which would put an entire planet under the control of a private company be permitted under the treaty? Would the terraforming of Mars amount to its appropriation? Would such an activity be in the 'benefit and interests of all countries'? This paper attempts to examine and elucidate these legal questions. Space is one domain where the law should precede man. The paper follows the approach that the de lege lata is not capable of prohibiting the terraforming of Mars. The Outer Space Treaty provides the freedoms of space and prescribes certain restrictions on those freedoms as well. The author shall examine provisions such as Articles I, II, IV, and IX of the Outer Space Treaty in order to establish the legality of the terraforming activity.
The author shall establish that such activity is a peaceful use of the celestial body, is in the benefit and interest of all countries, and qualifies neither as national appropriation of the celestial body nor as its harmful contamination. The paper is divided into three chapters. The first chapter provides a general introduction to the problem, an analysis of Elon Musk’s plan to terraform Mars, and the need to study terraforming through the lens of the Outer Space Treaty. In the second chapter, the author attempts to establish the legality of the terraforming activity under the provisions of the Outer Space Treaty. In this vein, the author puts forth the counter-interpretations and arguments that may be formulated against the lawfulness of terraforming, shows why the counter-interpretations establishing the unlawfulness of terraforming should not be accepted, and, in doing so, provides the interpretations that should prevail, ultimately establishing the legality of the terraforming activity under the treaty. In the third chapter, the author draws relevant conclusions and offers suggestions.

Keywords: appropriation, harmful contamination, peaceful, terraforming

Procedia PDF Downloads 140
320 Modeling of an Insulin Micropump

Authors: Ahmed Slami, Med El Amine Brixi Nigassa, Nassima Labdelli, Sofiane Soulimane, Arnaud Pothier

Abstract:

Many people suffer from diabetes, a disease marked by abnormal levels of sugar in the blood; 285 million people, 6.6% of the world's adult population, had diabetes in 2010, according to the International Diabetes Federation. Insulin is the medication injected into the body to treat the disease. Generally, the injection must be performed manually by the patient; however, in many cases he or she will be unable to inject the drug, given that among the side effects of hyperglycemia is weakness of the whole body. Researchers have therefore designed medical devices that inject insulin autonomously by using micro-pumps. Many micro-pump concepts have been investigated during the last two decades for injecting molecules into the blood or the body. However, all these micro-pumps are intended for slow drug infusion (injection of a few microliters per minute). Now, the challenge is to develop micro-pumps for fast injections (1 microliter in 10 seconds) with an accuracy on the order of a microliter. Recent studies have shown that only piezoelectric actuators can achieve this performance, and few systems at the microscopic level have been presented. These reasons lead us to design new smart drug-injection microsystems. Many technological advances therefore remain to be made, from improving materials for these uses, through their characterization, to modeling the actuation mechanisms themselves. Moreover, the integration of the piezoelectric micro-pump into the microfluidic platform remains to be studied in order to explore and evaluate the performance of these new micro-devices. In this work, we propose a new micro-pump model based on piezoelectric actuation with a new design. We use a finite element model built in the COMSOL software. Our device is composed of two pumping chambers, two diaphragms, and two actuators (piezoelectric disks). The actuators apply a mechanical force on the membranes in a periodic manner.
The membrane deformation pumps the fluid, producing the suction and discharge of the liquid. In this study, we present the modeling results as a function of device geometry, film thickness and material properties, and we demonstrate that fast injection can be achieved. The results of these simulations provide a quantitative assessment of the micro-pump's performance, in particular the actuation displacement and fluid flow rate, and allow the fabrication process to be optimized in terms of materials and integration steps.
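The fast-injection target above (1 microliter in 10 seconds) fixes the flow rate the actuators must sustain. As a rough back-of-the-envelope check, the sketch below relates stroke volume, chamber count and actuation frequency; the 10 nL stroke volume is an illustrative assumption, not a value from the paper's COMSOL model.

```python
# Sketch: actuation frequency needed to meet the fast-injection target
# stated above (1 microlitre in 10 s). Stroke volume and chamber count
# are illustrative assumptions, not parameters of the COMSOL model.

TARGET_VOLUME_UL = 1.0   # microlitres to deliver
TARGET_TIME_S = 10.0     # delivery time, seconds

def required_frequency(stroke_volume_nl: float, n_chambers: int = 2) -> float:
    """Pump frequency (Hz) so that n_chambers * stroke_volume * f
    equals the target flow rate."""
    flow_rate_nl_s = TARGET_VOLUME_UL * 1000.0 / TARGET_TIME_S  # 100 nL/s
    return flow_rate_nl_s / (n_chambers * stroke_volume_nl)

# With two chambers and an assumed 10 nL stroke per cycle:
print(f"{required_frequency(10.0):.0f} Hz")  # -> 5 Hz
```

A larger stroke volume lowers the required frequency proportionally, which is why diaphragm displacement is a key output of the simulations.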

Keywords: COMSOL software, piezoelectric, micro-pump, microfluidic

Procedia PDF Downloads 334
319 Sustainable Pavements with Reflective and Photoluminescent Properties

Authors: A.H. Martínez, T. López-Montero, R. Miró, R. Puig, R. Villar

Abstract:

An alternative to mitigate the heat island effect is to pave streets and sidewalks with pavements that reflect incident solar energy, keeping their surface temperature lower than conventional pavements. The “Heat island mitigation to prevent global warming by designing sustainable pavements with reflective and photoluminescent properties (RELUM) Project” has been carried out with this intention in mind. Its objective has been to develop bituminous mixtures for urban pavements that help in the fight against global warming and climate change, while improving the quality of life of citizens. The technology employed has focused on the use of reflective pavements, using bituminous mixes made with synthetic bitumens and light pigments that provide high solar reflectance. In addition to this advantage, the light surface colour achieved with these mixes can improve visibility, especially at night. In parallel and following the latter approach, an appropriate type of treatment has also been developed on bituminous mixtures to make them capable of illuminating at night, giving rise to photoluminescent applications, which can reduce energy consumption and increase road safety due to improved night-time visibility. The work carried out consisted of designing different bituminous mixtures in which the nature of the aggregate was varied (porphyry, granite and limestone) and also the colour of the mixture, which was lightened by adding pigments (titanium dioxide and iron oxide). The reflectance of each of these mixtures was measured, as well as the temperatures recorded throughout the day, at different times of the year. The results obtained make it possible to propose bituminous mixtures whose characteristics can contribute to the reduction of urban heat islands. 
Among the most outstanding results is the mixture made with synthetic bitumen, white limestone aggregate and a small percentage of titanium dioxide, which would be the most suitable for urban surfaces without road traffic, given its high reflectance and the greater temperature reduction it offers. With this solution, a surface temperature reduction of 9.7°C is achieved at the beginning of the night in the summer season with the highest radiation. As for luminescent pavements, paints with different contents of strontium aluminate and glass microspheres have been applied to asphalt mixtures, and the luminance of all the applications designed has been measured by exciting them with electric bulbs that simulate the effect of sunlight. The results obtained at this stage confirm the ability of all the designed dosages to emit light for a certain time, varying according to the proportions used. Not only the effect of the strontium aluminate and microsphere content has been observed, but also the influence of the colour of the base on which the paint is applied; the lighter the base, the higher the luminance. Ongoing studies are focusing on the evaluation of the durability of the designed solutions in order to determine their lifetime.

Keywords: heat island, luminescent paints, reflective pavement, temperature reduction

Procedia PDF Downloads 4
318 Functionalizing Gold Nanostars with Ninhydrin as Vehicle Molecule for Biomedical Applications

Authors: Swati Mishra

Abstract:

In recent years, there has been an explosion in gold nanoparticle (GNP) research, with a rapid increase in publications in diverse fields, including imaging, bioengineering, and molecular biology. GNPs exhibit unique physicochemical properties, including surface plasmon resonance (SPR), and bind amine and thiol groups, allowing surface modification and use in biomedical applications. Nanoparticle functionalization is the subject of intense research at present, with rapid progress being made towards developing biocompatible, multi-functional particles. In the present study, a photochemical method has been used to functionalize variously shaped GNPs, such as nanostars, with molecules like ninhydrin. Ninhydrin is bactericidal, virucidal, fungicidal and antigen-antibody reactive, and is used in fingerprint technology in forensics. GNPs efficiently functionalized with ninhydrin will bind to the amino acids of a target protein, which is of particular importance during the pandemic, especially where long-term COVID-19 treatments bring many drug side effects. The photochemical method is adopted because it provides a low thermal load, selective reactivity, selective activation, and radiation controlled in time, space, and energy. The GNPs exhibit their characteristic spectrum, but a distinct blue- or red-shift of the peak is observed after UV irradiation, indicating efficient ninhydrin binding. The bound ninhydrin in the GNP carrier, upon chemically reacting with any amino acid, leads to the formation of Ruhemann's purple. A common method of GNP production is the citrate reduction of Au[III] derivatives such as chloroauric acid (HAuCl₄) in water to Au[0], a one-step synthesis of size-tunable GNPs. The following reagents were prepared to validate the approach.
Reagent A (solution 1): 0.0175 g ninhydrin in 5 ml Millipore water
Reagent B: 30 µl of HAuCl₄·3H₂O in 3 ml of solution 1
Reagent C: 1 µl of gold nanostars in 3 ml of solution 1
Reagent D: 6 µl of cetrimonium bromide (CTAB) in 3 ml of solution 1
Reagent E: 1 µl of gold nanostars in 3 ml of ethanol
Reagent F: 30 µl of HAuCl₄·3H₂O in 3 ml of ethanol
Reagent G: 30 µl of HAuCl₄·3H₂O in 3 ml of solution 2
Reagent H (solution 2): 0.0087 g ninhydrin in 5 ml Millipore water
Reagent I: 30 µl of HAuCl₄·3H₂O in 3 ml of water
The reagents were irradiated at 254 nm for 15 minutes and then analysed by UV-visible spectroscopy. This wavelength was selected based on the one reported for the excitation of a similar molecule, phthalimide. It was observed that solutions B and G deviate around 600 nm, while C peaks distinctly at 567.25 nm and 983.9 nm. Although it is difficult to identify the exact chemical reaction taking place, ATR-FTIR measurements of the reagents will confirm that ninhydrin does not form Ruhemann's purple in the absence of amino acids. Through these experiments, we achieved the functionalization of gold nanostars with ninhydrin, corroborated by the deviation of the spectrum obtained from a mixture of GNPs and ninhydrin irradiated with UV light. This prepares them as carrier molecules to take up amino acids for targeted delivery or germicidal action.
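For reference, the molar concentrations implied by the two ninhydrin stock solutions can be computed directly. The molar mass used below is the standard value for ninhydrin (C9H6O4), not a figure given in the abstract.

```python
# Sketch: molar concentrations of the two ninhydrin stock solutions
# listed above. Molar mass of ninhydrin (C9H6O4): ~178.14 g/mol
# (standard value, assumed here; not stated in the abstract).

NINHYDRIN_MW = 178.14  # g/mol

def molarity_mM(mass_g: float, volume_ml: float) -> float:
    """Concentration in millimolar from dissolved mass (g) and volume (mL)."""
    moles = mass_g / NINHYDRIN_MW
    litres = volume_ml / 1000.0
    return moles / litres * 1000.0

print(f"solution 1: {molarity_mM(0.0175, 5.0):.1f} mM")  # ~19.6 mM
print(f"solution 2: {molarity_mM(0.0087, 5.0):.1f} mM")  # ~9.8 mM
```

Solution 2 is thus roughly half the concentration of solution 1, consistent with the halved ninhydrin mass.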

Keywords: gold nanostars, ninhydrin, photochemical method, UV visible spectroscopy

Procedia PDF Downloads 137
317 Nanoparticles Modification by Grafting Strategies for the Development of Hybrid Nanocomposites

Authors: Irati Barandiaran, Xabier Velasco-Iza, Galder Kortaberria

Abstract:

Hybrid inorganic/organic nanostructured materials based on block copolymers are of considerable interest in the field of nanotechnology, since these nanocomposites combine the properties of the polymer matrix with the unique properties of the added nanoparticles. The use of block copolymers as templates offers the opportunity to control the size and distribution of inorganic nanoparticles. This research is focused on the surface modification of inorganic nanoparticles to reach a good interface between nanoparticles and polymer matrices, which hinders nanoparticle aggregation. The aim of this work is to obtain a good and selective dispersion of Fe3O4 magnetic nanoparticles into different types of block copolymers, such as poly(styrene-b-methyl methacrylate) (PS-b-PMMA), poly(styrene-b-ε-caprolactone) (PS-b-PCL), poly(isoprene-b-methyl methacrylate) (PI-b-PMMA) or poly(styrene-b-butadiene-b-methyl methacrylate) (SBM), by using different grafting strategies. Fe3O4 magnetic nanoparticles have been surface-modified with polymer or block copolymer brushes following different grafting methods (grafting to, grafting from and grafting through) to achieve a selective location of the nanoparticles in the desired domains of the block copolymers. The morphology of the fabricated hybrid nanocomposites was studied by means of atomic force microscopy (AFM), and different annealing methods were used with the aim of reaching well-ordered nanostructured composites. Additionally, the nanoparticle amount was varied in order to investigate the effect of nanoparticle content on the morphology of the block copolymer. Different characterization methods are nowadays used to investigate the magnetic properties of nanometer-scale electronic devices; in particular, two techniques have been used to characterize the synthesized nanocomposites.
First, magnetic force microscopy (MFM) was used to investigate the magnetic properties qualitatively, since this technique allows magnetic domains on the sample surface to be distinguished. Second, magnetic characterization was performed with a vibrating sample magnetometer and a superconducting quantum interference device. The latter demonstrated that the magnetic properties of the nanoparticles have been transferred to the nanocomposites, which exhibit superparamagnetic behavior similar to that of the maghemite nanoparticles at room temperature. The obtained advanced nanostructured materials could find possible applications in the field of dye-sensitized solar cells and electronic nanodevices.

Keywords: atomic force microscopy, block copolymers, grafting techniques, iron oxide nanoparticles

Procedia PDF Downloads 249
316 Finite Element Modelling of Mechanical Connector in Steel Helical Piles

Authors: Ramon Omar Rosales-Espinoza

Abstract:

Pile-to-pile mechanical connections are used if the depth of the soil layers with sufficient bearing strength exceeds the original (“leading”) pile length, with the additional pile segment being termed “extension” pile. Mechanical connectors permit a safe transmission of forces from leading to extension pile while meeting strength and serviceability requirements. Common types of connectors consist of an assembly of sleeve-type external couplers, bolts, pins, and other mechanical interlock devices that ensure the transmission of compressive, tensile, torsional and bending stresses between leading and extension pile segments. While welded connections allow for a relatively simple structural design, mechanical connections are advantageous over welded connections because they lead to shorter installation times and significant cost reductions since specialized workmanship and inspection activities are not required. However, common practices followed to design mechanical connectors neglect important aspects of the assembly response, such as stress concentration around pin/bolt holes, torsional stresses from the installation process, and interaction between the forces at the installation (torsion), service (compression/tension-bending), and removal stages (torsion). This translates into potentially unsatisfactory designs in terms of the ultimate and service limit states, exhibiting either reduced strength or excessive deformations. In this study, the experimental response under compressive forces of a type of mechanical connector is presented, in terms of strength, deformation and failure modes. The tests revealed that the type of connector used can safely transmit forces from pile to pile. Using the results from the compressive tests, an analysis model was developed using the finite element (FE) method to study the interaction of forces under installation and service stages of a typical mechanical connector. 
The response of the analysis model is used to identify potential areas for design optimization, including the size, the gap between leading and extension piles, the number of pins/bolts, hole sizes, and material properties. The results show that the design of mechanical connectors should take into account the interaction of the forces present at every stage of their life cycle, and that the torsional stresses occurring during installation are critical for the safety of the assembly.

Keywords: piles, FEA, steel, mechanical connector

Procedia PDF Downloads 253
315 Preparation of IPNs and Effect of Swift Heavy Ions Irradiation on their Physico-Chemical Properties

Authors: B. S Kaith, K. Sharma, V. Kumar, S. Kalia

Abstract:

Superabsorbents are three-dimensional networks of linear or branched polymeric chains which can take up large volumes of biological fluids. This ability is due to the presence of functional groups like -NH2, -COOH and -OH. Cross-linked products based on natural materials such as cellulose, starch, dextran, gum and chitosan have attracted the attention of scientists and technologists all over the world because of their easy availability, low production cost, non-toxicity and biodegradability. Since natural polymers have better biocompatibility and are less toxic than most synthetic ones, such materials can be applied in the preparation of controlled drug-delivery devices, biosensors, tissue engineering, contact lenses, soil conditioning, and the removal of heavy metal ions and dyes. Gums are natural potential antioxidants and are used as food additives. They have excellent properties like high solubility, pH stability, non-toxicity and gelling characteristics. To date, many methods have been applied for the synthesis and modification of cross-linked materials with improved properties suitable for different applications. It is well known that ion-beam irradiation can play a crucial role in synthesizing, modifying, cross-linking or degrading polymeric materials. High-energy heavy-ion irradiation of a polymer film induces significant changes such as chain scission, cross-linking, structural changes, amorphization and bulk degradation. Various researchers have reported the effects of low- and heavy-ion irradiation on the properties of polymeric materials and observed significant improvements in optical, electrical, chemical, thermal and dielectric properties. Moreover, the modifications induced in the materials mainly depend on their structure, the ion-beam parameters (energy, linear energy transfer, fluence, mass and charge) and the nature of the target material.
Ion-beam irradiation is a useful technique for improving the surface properties of biodegradable polymers without compromising their bulk properties. Therefore, considerable interest has grown in studying the effects of swift heavy ion (SHI) irradiation on the properties of the synthesized semi-IPNs and IPNs. The present work deals with the preparation of semi-IPNs and IPNs and the impact of SHIs such as O7+ and Ni9+ on their optical, chemical, structural, morphological and thermal properties, along with the impact on different applications. The results have been discussed on the basis of the linear energy transfer (LET) of the ions.

Keywords: adsorbent, gel, IPNs, semi-IPNs

Procedia PDF Downloads 360
314 Assessment of the Change in Strength Properties of Biocomposites Based on PLA and PHA after 4 Years of Storage in a Highly Cooled Condition

Authors: Karolina Mazur, Stanislaw Kuciel

Abstract:

Polylactides (PLA) and polyhydroxyalkanoates (PHA) are the two groups of biodegradable and biocompatible thermoplastic polymers most commonly utilised in medicine and rehabilitation. The aim of this work is to determine the changes in strength properties and microstructure taking place in biodegradable polymer composites during long-term storage in a highly cooled environment (a freezer at -24°C) and to make an initial assessment of the durability of such biocomposites when used as single-use elements of rehabilitation or medical equipment. It is difficult to find any information on the feasibility of long-term storage of technical products made of PLA or PHA; nonetheless, when these materials are used to make products such as casings of hair dryers, laptops or mobile phones, it is safe to assume that, without storage in optimal conditions, their degradation might take even several years. SEM imaging and strength testing (tensile, bending and impact) were carried out, and the density and water sorption were determined for two polymers, PLA and PHA (NaturePlast PLE 001 and PHE 001), filled with cellulose fibres (corncob grain, Rehofix MK100, Rettenmaier & Söhne) up to 10 and 20% by mass. The biocomposites had been stored at -24°C for 4 years. To determine the changes in strength properties and microstructure taking place after such long storage, the results were compared with those of the same tests carried out 4 years earlier. The results show a significant change in the fracture mode of the PHA composite with corncob grain: from a ductile fracture with a developed surface, when tensile testing was performed directly after injection moulding, to a more brittle state after 4 years of storage. This is confirmed by the strength tests, where a decrease of deformation at the point of fracture is observed.
The research showed that it is possible to store medical devices made of PLA or PHA for a reasonably long time, as long as the required storage temperature is maintained. The decrease in mechanical properties found during tensile and bending tests for PLA was less than 10% of the tensile strength, while the modulus of elasticity and the deformation at fracture rose slightly, which may indicate the beginning of degradation processes. The strength properties of PHA are even higher after 4 years of storage, although in that case the decrease of deformation at fracture is significant, reaching up to 40%, which suggests that its degradation rate is higher than that of PLA. In both cases, the addition of natural particles only slightly increases the biodegradation.

Keywords: biocomposites, PLA, PHA, storage

Procedia PDF Downloads 255
313 Influence of High-Resolution Satellites Attitude Parameters on Image Quality

Authors: Walid Wahballah, Taher Bazan, Fawzy Eltohamy

Abstract:

One of the important functions of the satellite attitude control system is to provide the required pointing accuracy and attitude stability for optical remote sensing satellites to achieve good image quality. Although offering noise reduction and increased sensitivity, the time delay and integration (TDI) charge coupled devices (CCDs) utilized in high-resolution satellites (HRS) are prone to introduce large amounts of pixel smear due to instability of the line of sight. During on-orbit imaging, as a result of the Earth's rotation and satellite platform instability, the moving direction of the TDI-CCD linear array and the imaging direction of the camera become different. The speed of the image moving on the image plane (focal plane) is the image motion velocity, whereas the angle between the two directions is known as the drift angle (β). The drift angle occurs due to the rotation of the Earth around its axis during satellite imaging, affecting the geometric accuracy and, consequently, causing image quality degradation. Therefore, the image motion velocity vector and the drift angle are two important factors used in the assessment of the image quality of TDI-CCD based optical remote sensing satellites. A model for estimating the image motion velocity and the drift angle in HRS is derived. The six satellite attitude control parameters represented in the derived model are the roll angle φ, pitch angle θ, yaw angle ψ, roll angular velocity φ̇, pitch angular velocity θ̇ and yaw angular velocity ψ̇. The influence of these attitude parameters on the image quality is analyzed by establishing a relationship between the image motion velocity vector, the drift angle and the six satellite attitude parameters. The influence of the satellite attitude parameters on the image quality is assessed by the presented model in terms of the modulation transfer function (MTF) in both the cross- and along-track directions.
Three different cases representing the effect of pointing accuracy (φ, θ, ψ) bias are considered, using four different sets of typical pointing accuracy values while the attitude stability parameters are kept ideal. In the same manner, the influence of attitude stability (φ̇, θ̇, ψ̇) on image quality is also analyzed for ideal pointing accuracy parameters. The results reveal that cross-track image quality is seriously influenced by the yaw angle bias and the roll angular velocity bias, while along-track image quality is influenced only by the pitch angular velocity bias.
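The drift angle discussed above can be estimated to first order from the Earth's rotation alone. The sketch below is a simplified textbook approximation (ignoring orbit inclination, altitude geometry and attitude errors), not the six-parameter model derived in the paper; the ground-track speed is an assumed typical LEO value.

```python
import math

# Sketch: first-order estimate of the drift angle beta caused by Earth
# rotation for a near-polar LEO imaging satellite. This is a simplified
# approximation, not the six-parameter model derived in the paper.

OMEGA_EARTH = 7.2921e-5   # rad/s, Earth rotation rate
R_EARTH = 6_371_000.0     # m, mean Earth radius

def drift_angle_deg(ground_speed_m_s: float, latitude_deg: float) -> float:
    """Drift angle (deg): arctangent of the ratio between the eastward
    surface velocity of the rotating Earth and the sub-satellite
    ground-track speed."""
    v_east = OMEGA_EARTH * R_EARTH * math.cos(math.radians(latitude_deg))
    return math.degrees(math.atan2(v_east, ground_speed_m_s))

# With an assumed ground-track speed of ~6.9 km/s at the equator:
print(f"{drift_angle_deg(6900.0, 0.0):.2f} deg")  # roughly 3.9 deg
```

The estimate shrinks toward zero at the poles, which is why the drift angle varies along the orbit and must be tracked by the attitude control system.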

Keywords: high-resolution satellites, pointing accuracy, attitude stability, TDI-CCD, smear, MTF

Procedia PDF Downloads 393
312 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 compared with 1.3 million a decade earlier; overall, road traffic injuries rank as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice that reported for drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk; it is typically used to measure acceleration, turning, braking and speed, as well as to provide locational information. With the creation of the National Telematics Framework, the Australian government has increased its focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes as measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to insert into their vehicles for the duration of the study. The participants' driving behaviour over the first month will be compared with their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated on a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning of the study, after the first month and after the second month. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with the changes in the DBQ using regression models in SAS.
Results: Based on an initial sample of 23 participants, the DBQ has provided a reliable measure of driving behaviour (alpha = .823), with an average of 50.5, a standard deviation of 11.36 and a range of 29 to 76; higher scores indicate worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that within the next six weeks a larger sample of around 40 participants will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: It is expected that there will be a significant relationship between improvements in the DBQ and trends of reduced telematics-measured aggressive driving behaviour, supporting the hypothesis.
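The reliability figure quoted above (alpha = .823) is Cronbach's alpha, which can be computed for any respondent-by-item score matrix in a few lines of NumPy. The data below are synthetic, generated only to illustrate the computation; the study's actual DBQ responses are not reproduced here.

```python
import numpy as np

# Sketch: Cronbach's alpha, the internal-consistency measure quoted
# for the DBQ above. Synthetic Likert data (0-5 scale) only.

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(0, 6, size=(23, 1))            # shared underlying trait
noise = rng.integers(-1, 2, size=(23, 10))         # per-item variation
scores = np.clip(base + noise, 0, 5).astype(float) # 23 respondents, 10 items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because the synthetic items share a common component, the resulting alpha is high; fully identical items would give exactly 1.0.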

Keywords: telematics, driving behaviour, young drivers, driving behaviour questionnaire

Procedia PDF Downloads 95