Search results for: mobile technological devices
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5208

258 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger

Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez

Abstract:

One of the most common heat exchanger technologies in engineering processes, particularly in the food industry, is the double-pipe heat exchanger (DPHx). To improve heat transfer performance, several passive geometrical devices can be used, such as wall corrugation of the tubes, which increases the wetted perimeter while maintaining a constant cross-sectional area and consequently increases the convective surface area. This enhances heat transfer in forced convection by promoting secondary recirculating flows. One of the most widely used tools to analyse heat exchanger efficiency is computational fluid dynamics (CFD), which complements experimental studies and serves as a preliminary step in heat exchanger design. In this study, the behaviour of a double-pipe heat exchanger with two different inner tubes, a smooth tube and a spirally corrugated tube, has been analysed. Experimental analysis and steady 3-D numerical simulations using the commercial code ANSYS Workbench v. 17.0 were carried out to analyse the influence of the geometrical parameters of spirally corrugated tubes in turbulent flow. To validate the numerical results, an experimental setup was used. To heat up or cool down the cold fluid as it passes through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests were carried out for the smooth tube and for the corrugated one. In all the tests, the hot fluid had a constant flow rate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flow rate was 25 l/min (Test 1) and 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes are made of stainless steel with an external diameter of 24 mm, a wall thickness of 1 mm and a length of 2.8 m. The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized using three non-dimensional parameters: the ratio of the corrugation height to the diameter (H/D), the non-dimensional helical pitch (P/D) and the severity index (SI = H²/(P·D)). The results showed good agreement between the numerical and the experimental results. The smallest differences were found for the fluid temperatures: in all the analysed tests and for both tubes, the temperature obtained numerically was slightly higher than the experimental result, with deviations ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the numerical and the experimental values were close to 16%. Based on the experimental and numerical results, it can be highlighted that for the corrugated tube the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than for the smooth tube.
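For illustration, here is a minimal sketch (ours, not part of the abstract) computing the three non-dimensional corrugation parameters from the geometry quoted above; taking D as the inner tube's external diameter of 24 mm is our assumption about the reference diameter.

```python
# Minimal sketch: non-dimensional corrugation parameters of the spirally
# corrugated tube, using the geometry quoted in the abstract.
H = 1.1   # corrugation height, mm
P = 25.0  # helical pitch, mm
D = 24.0  # external diameter of the inner tube, mm (assumed reference diameter)

h_over_d = H / D                 # corrugation-height ratio
p_over_d = P / D                 # pitch ratio
severity_index = H**2 / (P * D)  # SI = H^2 / (P * D)

print(f"H/D = {h_over_d:.4f}, P/D = {p_over_d:.3f}, SI = {severity_index:.5f}")
# -> H/D ≈ 0.046, P/D ≈ 1.042, SI ≈ 0.00202
```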

Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation

Procedia PDF Downloads 120
257 Garnet-based Bilayer Hybrid Solid Electrolyte for High-Voltage Cathode Material Modified with Composite Interface Enabler on Lithium-Metal Batteries

Authors: Kumlachew Zelalem Walle, Chun-Chen Yang

Abstract:

Solid-state lithium metal batteries (SSLMBs) are considered promising candidates for next-generation energy storage devices due to their superior energy density and excellent safety. However, recent findings have shown that lithium (Li) dendrites still grow readily in SSLMBs, so their development must confront the challenges posed by dendrite formation. In this work, an inorganic/organic composite coating material (g-C₃N₄/ZIF-8/PVDF) was used to modify the surface of the lithium metal anode (LMA). The modified LMA (denoted g-C₃N₄@Li) was then assembled with lithium Nafion (LiNf)-coated commercial NCM811 (LiNf@NCM811) using a bilayer hybrid solid electrolyte (Bi-HSE) in which the layer containing 20 wt.% (vs. polymer) LiNf-coated Li6.05Ga0.25La3Zr2O11.8F0.2 filler faced the positive electrode and the layer with 80 wt.% (vs. polymer) filler content faced the g-C₃N₄@Li. The garnet-type Li6.05Ga0.25La3Zr2O11.8F0.2 (LG0.25LZOF) solid electrolyte was prepared via a co-precipitation reaction in a Taylor flow reactor and modified with lithium Nafion (LiNf), a Li-ion conducting polymer. The Bi-HSE exhibited a high ionic conductivity of 6.8 × 10⁻⁴ S cm⁻¹ at room temperature and a wide electrochemical window (0–5.0 V vs. Li/Li⁺). The coin cell was cycled between 2.8 and 4.5 V at 0.2C, delivered an initial specific discharge capacity of 194.3 mAh g⁻¹ and maintained 81.8% of its initial capacity after 100 cycles at room temperature. The presence of the nano-sheet g-C₃N₄/ZIF-8/PVDF composite coating on the LMA surface suppresses dendrite growth and enhances the compatibility and interfacial contact between the anode and the electrolyte membrane. The g-C₃N₄@Li symmetrical cells incorporating this hybrid electrolyte showed excellent interfacial stability over 1000 h at 0.1 mA cm⁻² and a high critical current density (1 mA cm⁻²). Moreover, the in-situ formation of Li₃N in the solid electrolyte interphase (SEI) layer, as indicated by the XPS results, also improves the ionic conductivity and interface contact during the charge/discharge process. Therefore, these multi-layered fabrication strategies for hybrid/composite solid electrolyte membranes, combined with modification of the LMA surface using mixed coating materials, have potential applications in the preparation of highly safe, high-voltage cathodes for SSLMBs.
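As a quick arithmetic check (ours, not from the abstract), the capacity retained after 100 cycles follows directly from the reported initial capacity and retention:

```python
# Retained discharge capacity after 100 cycles, from the figures in the abstract.
initial_capacity = 194.3  # mAh g-1, initial discharge capacity at 0.2C
retention = 0.818         # 81.8 % capacity retention after 100 cycles

retained = initial_capacity * retention
print(f"~{retained:.0f} mAh g-1 after 100 cycles")  # ≈ 159 mAh g-1
```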

Keywords: high-voltage cathodes, hybrid solid electrolytes, garnet, graphitic-carbon nitride (g-C3N4), ZIF-8 MOF

Procedia PDF Downloads 38
256 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory

Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker

Abstract:

In view of the ageing of vital infrastructure facilities, reliable condition assessment of concrete structures is of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, the structural performance. Quantitative determination of chloride ingress is required not only because it provides valuable information on the present condition of a structure, but also because the data obtained can be used to predict its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining the chloride content. As the chloride content is expressed relative to the mass of binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes. Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis of wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.

Keywords: chemical analysis, concrete, LIBS, spectroscopy

Procedia PDF Downloads 88
255 Optimizing the Pair Carbon Xerogels-Electrolyte for High Performance Supercapacitors

Authors: Boriana Karamanova, Svetlana Veleva, Luybomir Soserov, Ana Arenillas, Francesco Lufrano, Antonia Stoyanova

Abstract:

Supercapacitors have received a lot of research attention and are promising energy storage devices due to their high power and long cycle life. To develop an advanced device with a significant charge-storage capacity based on cheap carbon materials, efforts must focus not only on improving synthesis by controlling morphology and pore size but also on improving the electrode-electrolyte compatibility of the resulting systems. The present study examines the relationship between the surface chemistry of two activated carbon xerogels, the electrolyte type, and the electrochemical properties of supercapacitors. Activated carbon xerogels were prepared by varying the initial pH of the resorcinol-formaldehyde aqueous solution. The materials produced were physicochemically characterized by DTA/TGA, porosity characterization, and SEM analysis. The carbon xerogel based electrodes were prepared by spreading over a glass plate a slurry containing the carbon gel, graphite, and poly(vinylidene difluoride) (PVDF) binder. The layer formed was dried consecutively at different temperatures and then detached with water. Afterwards, the layer was dried again to improve its mechanical stability. The developed electrode materials and the Aquivion® E87-05S membrane (Solvay Specialty Polymers), soaked in Na2SO4 as a polymer electrolyte, were used to assemble the solid-state supercapacitor. Symmetric supercapacitor cells composed of the same electrodes and 1 M KOH electrolyte were also assembled and tested for comparison. The supercapacitor performance was verified by different electrochemical methods - cyclic voltammetry, galvanostatic charge/discharge measurements, electrochemical impedance spectroscopy, and long-term durability tests in neutral and alkaline electrolytes. Specific capacitance, energy and power density, energy efficiency, and durability were compared for the studied supercapacitors. Ex-situ physicochemical analyses of the synthesized materials were also performed, providing information about chemical and structural changes in the electrode morphology during the charge/discharge durability tests; these are discussed on the basis of the electrode-electrolyte interaction. The correlations obtained could be significant for the design of sustainable solid-state supercapacitors with high power and energy density. Acknowledgement: This research is funded by the Ministry of Education and Science of Bulgaria under the National Program "European Scientific Networks" (Agreement D01-286/07.10.2020, D01-78/30.03.2021), which the authors gratefully acknowledge.
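For context on the galvanostatic charge/discharge evaluation mentioned above, the following sketch shows how specific capacitance and energy/power density are commonly extracted for a symmetric two-electrode cell; the relations are standard textbook formulas rather than methods taken from this abstract, and all numerical inputs are placeholders.

```python
# Sketch: common relations for a symmetric two-electrode supercapacitor cell.
#   C_cell = I * dt / (m * dV)      specific capacitance, F/g
#   E      = 0.5 * C_cell * V**2    specific energy (converted to Wh/kg)
#   P      = E / t_discharge        specific power (W/kg)
# All input values below are illustrative placeholders.

def cell_metrics(current_a, discharge_time_s, mass_g, voltage_window_v):
    """mass_g is assumed to be the total active mass of both electrodes."""
    c_cell = current_a * discharge_time_s / (mass_g * voltage_window_v)   # F/g
    energy_wh_per_kg = 0.5 * c_cell * voltage_window_v**2 * 1000 / 3600   # Wh/kg
    power_w_per_kg = energy_wh_per_kg * 3600 / discharge_time_s           # W/kg
    return c_cell, energy_wh_per_kg, power_w_per_kg

c, e, p = cell_metrics(current_a=0.01, discharge_time_s=120,
                       mass_g=0.01, voltage_window_v=1.0)
print(f"C = {c:.1f} F/g, E = {e:.2f} Wh/kg, P = {p:.1f} W/kg")
```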

Keywords: carbon xerogel, electrochemical tests, neutral and alkaline electrolytes, supercapacitors

Procedia PDF Downloads 108
254 The Impact of the Virtual Learning Environment on Teacher's Pedagogy and Student's Learning in Primary School Setting

Authors: Noor Ashikin Omar

Abstract:

The rapid growth and advancement of information and communication technology (ICT) on a global scale has greatly influenced and revolutionised interaction within society. The use of ICT has become second nature in managing everyday life, particularly in the education environment. Traditional learning methods using blackboards and chalk have been largely improved by the use of ICT devices such as interactive whiteboards and computers in school. This paper aims to explore the impact of virtual learning environments (VLE) on teachers' pedagogy and students' learning in primary school settings. The research was conducted in two phases. Phase one comprised a short interview with the schools' senior assistants to examine issues and challenges faced during the planning and implementation of FrogVLE in their respective schools. Phase two involved a questionnaire survey directed to three major stakeholder groups: teachers, students and parents. The survey explored teachers' and students' perspectives and attitudes towards the use of the VLE as a teaching and learning medium and as a learning experience as a whole. In addition, the survey of parents provided insights into how they feel about the use of the VLE for their children's learning. Collectively, the two phases enabled an improved understanding of, and provided observations on, the factors that affected the implementation of the VLE in primary schools. This study offers the voices of students, who are frequently omitted when addressing innovations, as well as teachers, who may not always be heard. It is also significant in addressing the importance of teachers' pedagogy for students' learning and its effects, in order to enable more effective ICT integration with a student-centred approach. Finally, parental perceptions of the implementation of the VLE in supporting their children's learning have been implicated as having a bearing on educational achievement. The results indicate that all three stakeholder groups were positive and highly supportive of the use of the VLE in schools. They were able to understand the benefits of moving towards modern methods of teaching using ICT and to accept the change in the education system. However, factors such as the condition of ICT facilities at schools and homes, as well as inadequate professional development for teachers in both ICT and management skills, hindered full exploitation of the VLE system's benefits. Social influences within different communities and cultures and the costs of using the technology also have a significant impact. The findings of this study are important to the Malaysian Ministry of Education because they inform policy makers about the impact of the virtual learning environment (VLE) on teachers' pedagogy and the learning of Malaysian primary school children. The information provided allows policy makers to make sound judgements and enables informed decision-making.

Keywords: attitudes towards virtual learning environment (VLE), parental perception, student's learning, teacher's pedagogy

Procedia PDF Downloads 184
253 The Effects of a Hippotherapy Simulator in Children with Cerebral Palsy: A Pilot Study

Authors: Canan Gunay Yazici, Zubeyir Sarı, Devrim Tarakci

Abstract:

Background: Hippotherapy is considered a global technique in the rehabilitation of children with cerebral palsy, as it improves gait pattern, balance, postural control and gross motor skill development, but it encounters some practical problems (such as the high cost of horse care, nutrition and housing). Hippotherapy simulators have been developed in recent years to overcome these problems. These devices aim to reproduce the effects of hippotherapy with a real horse by simulating the horse's movements. Objectives: To evaluate the efficacy of a hippotherapy simulator on gross motor function, sitting postural control and dynamic balance in children with cerebral palsy (CP). Methods: Fourteen children with CP, aged 6–15 years, participated: seven with a diagnosis of spastic hemiplegia, five with diplegia and two with triplegia, Gross Motor Function Classification System levels I–III. The Horse Riding Simulator (HRS), including a four-speed program (warm-up, levels 1–3), was used as the hippotherapy simulator. Firstly, each child received Neurodevelopmental Therapy (NDT; 45 min twice weekly for eight weeks). Subsequently, the same children completed HRS+NDT (30 min and 15 min respectively, twice weekly for eight weeks). Children were assessed pre-treatment and at the end of the 8th and 16th weeks. Gross motor function, sitting postural control, and dynamic sitting and standing balance were evaluated by the Gross Motor Function Measure-88 (GMFM-88, Dimensions B, D, E and Total Score), the Trunk Impairment Scale (TIS), the Pedalo® Sensamove Balance Test and the Pediatric Balance Scale (PBS), respectively. The Scientific Research Projects Unit of Marmara University supported this study. Results: All measured variables showed a significant increase compared to baseline values after both interventions (NDT and HRS+NDT), except for dynamic sitting balance evaluated by Pedalo®. In particular, for HRS+NDT the increase in the measured variables was considerably higher than for NDT. After NDT, the total score of the GMFM-88 (mean baseline 62.2 ± 23.5; mean NDT: 66.6 ± 22.2; p < 0.05), TIS (10.4 ± 3.4; 12.1 ± 3; p < 0.05), PBS (37.4 ± 14.6; 39.6 ± 12.9; p < 0.05), Pedalo® sitting (91.2 ± 6.7; 92.3 ± 5.2; p > 0.05) and Pedalo® standing balance points (80.2 ± 10.8; 82.5 ± 11.5; p < 0.05) increased by 7.1%, 2%, 3.9%, 5.2% and 6% respectively. After HRS+NDT treatment, the total score of the GMFM-88 (mean baseline: 62.2 ± 23.5; mean HRS+NDT: 71.6 ± 21.4; p < 0.05), TIS (10.4 ± 3.4; 15.6 ± 2.9; p < 0.05), PBS (37.4 ± 14.6; 42.5 ± 12; p < 0.05), Pedalo® sitting (91.2 ± 6.7; 93.8 ± 3.7; p > 0.05) and standing balance points (80.2 ± 10.8; 86.2 ± 5.6; p < 0.05) increased by 15.2%, 6%, 7.3%, 6.4% and 11.9%, respectively, compared to the initial values. Conclusion: Neurodevelopmental therapy provided significant improvements in the gross motor function, sitting postural control, and sitting and standing balance of children with CP. When the hippotherapy simulator was added to the treatment program, these functions were further improved (especially gross motor function and dynamic balance). As a result, this pilot study showed that the hippotherapy simulator could be a useful alternative to neurodevelopmental therapy for improving gross motor function, sitting postural control and dynamic balance in children with CP.

Keywords: balance, cerebral palsy, hippotherapy, rehabilitation

Procedia PDF Downloads 118
252 Isoflavonoid Dynamic Variation in Red Clover Genotypes

Authors: Andrés Quiroz, Emilio Hormazábal, Ana Mutis, Fernando Ortega, Loreto Méndez, Leonardo Parra

Abstract:

The red clover root borer, Hylastinus obscurus Marsham (Coleoptera: Curculionidae), is the main insect pest associated with red clover, Trifolium pratense L. An average of 1.5 H. obscurus per plant can cause a 5.5% reduction in forage yield in pastures two to three years old. Moreover, insect attack can reach 70% to 100% of the plants. To our knowledge, there is no chemical strategy for controlling this pest; therefore, alternative strategies for controlling H. obscurus are a high priority for red clover producers. One such alternative is related to the study of secondary metabolites involved in the intrinsic chemical defenses developed by plants, such as isoflavonoids. The isoflavonoids formononetin and daidzein have elicited antifeedant and phagostimulant effects, respectively, on H. obscurus. However, the dynamic variation of these isoflavonoids under field conditions is not known. The main objective of this work was to evaluate the variation of the antifeedant isoflavonoid formononetin, the phagostimulant isoflavonoid daidzein, and their respective glycosides over time in different ecotypes of red clover. Fourteen red clover ecotypes (8 cultivars and 6 experimental lines) were collected at INIA-Carillanca (La Araucanía, Chile). These plants were established in October 2015 under irrigated conditions. The cultivars were distributed in a randomized complete block design with three replicates. Whole plants were sampled at four times, 15 October 2016, 12 December 2016, 27 January 2017 and 16 March 2017, with a sufficient amount of soil to avoid root damage. A polar isoflavonoid fraction was obtained from 20 mg of lyophilized root tissue extracted with 2 mL of 80% MeOH for 16 h using an orbital shaker in the dark at room temperature. Afterwards, an aliquot of 1.4 mL of the supernatant was evaporated, and the residue was resuspended in 300 µL of 45% MeOH. The identification and quantification of the isoflavonoids in the root extracts were performed by injecting 20 µL into a Shimadzu HPLC equipped with a C-18 column. The sample was eluted with a mobile phase composed of AcOH:H₂O (1:9 v/v) as solvent A and CH₃CN as solvent B. Detection was performed at 260 nm. The results showed that the amount of aglycones was higher than that of the respective glycosides, which is consistent with the flavonoid biosynthetic pathway, in which the glycosides are formed downstream of the aglycones. The amount of formononetin was higher than that of daidzein. In the roots, where H. obscurus spends most of its life cycle, the highest contents of formononetin were found in G 27, Pawera, Sabtoron High, Redqueli-INIA and Superqueli-INIA cvs. (2.1, 1.8, 1.8, 1.6 and 1.0 mg g⁻¹, respectively), and the lowest amounts of daidzein were found in Superqueli-INIA (0.32 mg g⁻¹) and in the experimental line Sel Syn Int4 (0.24 mg g⁻¹). This line also showed a high content of formononetin (0.9 mg g⁻¹). This information, associated with cultural practices, could help farmers and breeders to reduce H. obscurus in grassland by selecting ecotypes with a high content of formononetin and a low amount of daidzein in the roots of red clover plants. Acknowledgements: FONDECYT 1141245 and 11130715.

Keywords: daidzein, formononetin, isoflavonoid glycosides, Trifolium pratense

Procedia PDF Downloads 188
251 Analyzing Temperature and Pressure Performance of a Natural Air-Circulation System

Authors: Emma S. Bowers

Abstract:

Perturbations in global environments and temperatures have heightened the urgency of creating cost-efficient, energy-neutral building techniques. Structural responses to this thermal crisis have included designs (including those of the building standard PassivHaus) with airtightness, window placement, insulation, solar orientation, shading, and heat-exchange ventilators as potential solutions or interventions. Limited predictability of the circulation of cooled air through the ambient temperature gradients of a structure is one of the major obstacles facing these enhanced building methods. A diverse range of air-cooling devices utilizing varying technologies is implemented around the world, and many of them worsen the problem of climate change by consuming energy. Using the natural ventilation principles of air buoyancy and density to circulate fresh air throughout a building with no energy input can overcome these obstacles. A unique prototype of an energy-neutral air-circulation system was constructed in order to investigate potential temperature and pressure gradients related to the stack effect (the updraft of air through a building due to differences in air pressure). The stack-effect principle maintains that since warmer air rises, it leaves an area of low pressure that cooler air rushes in to fill. The result is that warmer air is expelled from the top of the building as cooler air is drawn through the bottom, creating an updraft. The stack effect can be amplified by cooling the air near the bottom of a building and heating the air near the top. Using readily available, mostly recyclable or biodegradable materials, an insulated building module was constructed. A three-part construction model was utilized: a subterranean earth-tube heat exchanger made of PVC pipe and placed in a horizontally oriented trench, an insulated, airtight cube above ground to represent a building, and a solar chimney (painted black to increase the heat of the outgoing air). Pressure and temperature sensors were placed at four different heights within the module as well as outside, and data were collected for a period of 21 days. The air pressures and temperatures over the course of the experiment were compared and averaged. The promise of this design is that it represents a novel approach which directly addresses the obstacles of air flow and expense, using the physical principle of the stack effect to draw a continuous supply of fresh air through the structure with low-cost, readily available materials and zero manufactured energy. The design serves as a model for novel approaches to creating temperature-controlled buildings using zero energy and opens the door for future research into the effects of increasing the module scale, increasing the length and depth of the earth tube, and shading the building. (A model can be provided.)
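To make the stack-effect principle concrete, here is a small sketch (ours; the temperatures and height are illustrative placeholders, not measurements from this study) estimating the buoyancy pressure difference that drives the updraft, using ideal-gas air densities:

```python
# Sketch: stack-effect driving pressure, dP = g * h * (rho_out - rho_in),
# treating air as an ideal gas. Input values are illustrative only.
G = 9.81          # gravitational acceleration, m/s^2
R_AIR = 287.05    # specific gas constant of dry air, J/(kg*K)
P_ATM = 101325.0  # atmospheric pressure, Pa

def air_density(temp_c):
    """Dry-air density from the ideal gas law, kg/m^3."""
    return P_ATM / (R_AIR * (temp_c + 273.15))

def stack_pressure(height_m, t_inside_c, t_outside_c):
    """Buoyancy pressure difference over a column of the given height, Pa."""
    return G * height_m * (air_density(t_outside_c) - air_density(t_inside_c))

# Example: a 3 m column with 40 degC air in the solar chimney and 25 degC ambient air.
dp = stack_pressure(height_m=3.0, t_inside_c=40.0, t_outside_c=25.0)
print(f"Stack driving pressure ~ {dp:.2f} Pa")  # about 1.7 Pa for these values
```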

Keywords: air circulation, PassivHaus, stack effect, thermal gradient

Procedia PDF Downloads 133
250 Mycophenolate-Induced Disseminated TB in a PPD-Negative Patient

Authors: Megan L. Srinivas

Abstract:

Individuals with underlying rheumatologic diseases such as dermatomyositis may not respond adequately to tuberculin (PPD) skin tests, creating false negative results. These illnesses are frequently treated with immunosuppressive therapy, making proper identification of TB infection imperative. A 59-year-old Filipino man was diagnosed with dermatomyositis on the basis of rash, electromyography, and muscle biopsy. He was initially treated with IVIG infusions and transitioned to oral prednisone and mycophenolate. The patient's symptoms improved on this regimen. Six months after starting mycophenolate, the patient began having fevers, night sweats, and a productive cough without hemoptysis. He had moved from the Philippines 5 years prior to the dermatomyositis diagnosis, denied sick contacts, and was PPD negative both at immigration and immediately prior to starting mycophenolate treatment. A third PPD was negative following the onset of these new symptoms. He was treated for community-acquired pneumonia, but symptoms worsened over 10 days and he developed watery diarrhea and a growing non-tender, non-mobile mass on the left side of his neck. A chest x-ray demonstrated a cavitary lesion in the right upper lobe suspicious for TB that had not been present one month earlier. Chest CT corroborated this finding, also exhibiting necrotic hilar and paratracheal lymphadenopathy. Neck CT showed the left-sided mass to be cervical chain lymphadenopathy. Expectorated sputum and stool samples contained acid-fast bacilli (AFB), with cultures growing TB bacteria. Fine-needle biopsy of the neck mass (scrofula) also exhibited AFB. A brain MRI showed nodular enhancement suspected to be a tuberculoma. Mycophenolate was discontinued and dermatomyositis treatment was switched to oral prednisone with a 3-day course of IVIG. The patient's infection showed sensitivity to standard RIPE (rifampin, isoniazid, pyrazinamide, and ethambutol) treatment. Within a week of starting RIPE, the patient's diarrhea subsided, the scrofula diminished, and his symptoms significantly improved. By the end of treatment week 3, the patient's sputum no longer contained AFB; he was removed from isolation and was discharged to continue RIPE at home. He was discharged on oral prednisone, which effectively addressed his dermatomyositis. This case illustrates the unreliability of PPD tests in patients with long-term inflammatory diseases such as dermatomyositis. Other immunosuppressive therapies (adalimumab, etanercept, and infliximab) have been associated with conversion of latent TB to disseminated TB. Mycophenolate is another immunosuppressive agent with similar mechanistic properties. Thus, it is imperative that patients with long-term inflammatory diseases and high-risk TB factors who are initiating immunosuppressive therapy receive a TB blood test (such as a QuantiFERON Gold assay) prior to the initiation of therapy to ensure that latent TB is unmasked before it can evolve into a disseminated form of the disease.

Keywords: dermatomyositis, immunosuppressant medications, mycophenolate, disseminated tuberculosis

Procedia PDF Downloads 181
249 An eHealth Intervention Using Accelerometer- Smart Phone-App Technology to Promote Physical Activity and Health among Employees in a Military Setting

Authors: Emilia Pietiläinen, Heikki Kyröläinen, Tommi Vasankari, Matti Santtila, Tiina Luukkaala, Kai Parkkola

Abstract:

Working in the military places special demands on physical fitness. However, reduced physical activity levels among employees in the Finnish Defence Forces (FDF), a trend also seen among the working-age population in Finland, are leading to reduced physical fitness and an increased risk of cardiovascular and metabolic diseases, which also increases human resource costs. Therefore, the aim of the present study was to develop an eHealth intervention using an accelerometer-smartphone-app feedback technique, telephone counseling and physical activity recordings to increase the physical activity of the personnel and thereby improve their health. Specific aims were to reduce stress, improve quality of sleep, mental and physical performance and ability to work, and reduce sick leave absences. Employees from six military brigades around Finland were invited to participate in the study, and finally 260 voluntary participants were included (66 women, 194 men). The participants were randomized into intervention (156) and control (104) groups. The eHealth intervention group used accelerometers measuring daily physical activity and the duration and quality of sleep for six months. The accelerometers transmitted the data to smartphone apps, giving feedback about daily physical activity and sleep. The intervention group participants were also encouraged to exercise for two hours a week during working hours, a benefit already offered to employees under existing FDF guidelines. To separate the exercise done during working hours from the accelerometer data, the intervention group recorded this exercise in an exercise diary. The intervention group also participated in telephone counseling about their physical activity. The control group participants, on the other hand, continued with their normal exercise routine without the accelerometer and feedback. They could utilize the benefit of being able to exercise during working hours, but they were not separately encouraged to do so, nor was the exercise diary used. The participants were measured at baseline, after the entire intervention period, and six months after the end of the intervention. The measurements included accelerometer recordings, biochemical laboratory tests, body composition measurements, physical fitness tests, and a wide-ranging questionnaire focusing on sociodemographic factors, physical activity and health. The primary indicators of effectiveness are increased physical activity and fitness, improved health status, and reduced sick leave absences. The evaluation of the present scientific reach is based on the data collected during the baseline measurements. Maintenance of the studied outcomes is assessed by comparing the results of the control group measured at baseline and at the one-year follow-up. Results of the study are not yet available but will be presented at the conference. The present findings will help to develop an easy and cost-effective model to support the health and working capability of employees in the military and other workplaces.

Keywords: accelerometer, health, mobile applications, physical activity, physical performance

Procedia PDF Downloads 170
248 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach

Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta

Abstract:

Santiago de Compostela became a stable object of literary representation during the period between 1840 and 1915, approximately. This study offers a partial cartographical look at this process, suggesting that a cultural space like Compostela’s becoming an object of literary representation paralleled the first stages of its becoming a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often, the novels constitute ways to present a city to the outside, marking it for the gaze of others, as guidebooks do. That leads us to examine the ways of constructing and rendering communicable the local in other contexts. For that matter, we should also acknowledge the fact that a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student or a professor. The guidebooks coincide in this with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms that these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used the program ArcGIS for the mapping, defining the databases starting from an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni’s team at the University of Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective using elements found in the texts of our corpus (novels and tourist guides). Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (sketching itineraries and the selection of zones and indexicalization), like a foreigner’s visit guided by someone who knows the city or the description of one’s first entrance into the city’s premises. Last, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín’s 1915 novel La casa de la Troya) and the first attempt to create package tourist tours with Galicia as a destination, in a joint venture of Galician and British business owners, in the years immediately preceding the Great War. Literary cartography becomes a crucial instrument for digging deeply into the methods of cultural production of places. Through maps, the interaction between discursive forms seemingly so far removed from each other as novels and tourist guides becomes obvious and suggests the need to go deeper into a complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.

Keywords: compostela, literary geography, literary cartography, tourism

Procedia PDF Downloads 369
247 The Role of Virtual Reality in Mediating the Vulnerability of Distant Suffering: Distance, Agency, and the Hierarchies of Human Life

Authors: Z. Xu

Abstract:

Immersive virtual reality (VR) has gained momentum in humanitarian communication due to its utopian promises of co-presence, immediacy, and transcendence. These potential benefits have led the United Nations (UN) to tirelessly produce and distribute VR series to evoke global empathy and encourage policymakers, philanthropic business tycoons and citizens around the world to actually do something (e.g., give a donation). However, it is unclear whether or not VR can cultivate cosmopolitans with a sense of social responsibility towards the geographically, socially/culturally and morally mediated misfortune of faraway others. Drawing upon existing work on the mediation of distant suffering, this article constructs an analytical framework to articulate the issue. Applying this framework to a case study of five of the UN’s VR pieces, the article identifies three paradoxes that exist between cyber-utopian and cyber-dystopian narratives. In the “paradox of distance”, VR relies on the notions of “presence” and “storyliving” to implicitly link audiences spatially and temporally to distant suffering, creating global connectivity and reducing the perceived distance between audiences and others; yet it also enables audiences to fully occupy the point of view of distant sufferers (creating too close, even absolute, proximity), which may cause them to feel naive self-righteousness or narcissism in their pleasures and desires, thereby destroying the “proper distance”. In the “paradox of agency”, VR simulates a superficially “real” encounter for visual intimacy, thereby establishing an “audience–beneficiary” relationship in humanitarian communication; yet in this case the mediated hyperreality is not an authentic reality, and its simulation does not fill the gap between reality and the virtual world. In the “paradox of the hierarchies of human life”, VR enables an audience to experience a fundamental “freedom” virtually, epitomizing an attitude of cultural relativism that informs a great deal of contemporary multiculturalism and providing vast possibilities for a more egalitarian representation of distant sufferers; yet it also takes the spectator’s personal empathic feelings as the focus of intervention, rather than structural inequality and political exclusion (the economic and political power relations of viewing). Thus, the audience can potentially remain trapped within the minefield of hegemonic humanitarianism. This study is significant in two respects. First, it advances the digitalization turn in studies of media and morality in the polymedia milieu; it is motivated by the necessary call to move beyond traditional technological environments to arrive at a more novel understanding of the asymmetry of power between the safety of spectators and the vulnerability of mediated sufferers. Second, it not only reminds humanitarian journalists and NGOs that they should not rely entirely on the richer news experience or powerful response-ability enabled by VR to gain a “moral bond” with distant sufferers, but also argues that when fully-fledged VR technology is developed, it can serve as a kind of alchemy and should not be underestimated merely as a “bugaboo” of an alarmist philosophical and fictional dystopia.

Keywords: audience, cosmopolitan, distant suffering, virtual reality, humanitarian communication

Procedia PDF Downloads 110
246 Evolution of Microstructure through Phase Separation via Spinodal Decomposition in Spinel Ferrite Thin Films

Authors: Nipa Debnath, Harinarayan Das, Takahiko Kawaguchi, Naonori Sakamoto, Kazuo Shinozaki, Hisao Suzuki, Naoki Wakiya

Abstract:

Nowadays, spinel ferrite magnetic thin films have drawn considerable attention due to their interesting magnetic and electrical properties combined with enhanced chemical and thermal stability. Spinel ferrite magnetic films can be implemented in magnetic data storage, sensors, and spin filters or microwave devices. It is well established that the structural, magnetic and transport properties of magnetic thin films depend on their microstructure. Spinodal decomposition (SD) is a phase separation process whereby a material system spontaneously separates into two phases with distinct compositions. The periodic microstructure is the characteristic feature of SD; thus, SD can be exploited to control the microstructure at the nanoscale. In bulk spinel ferrites with the general formula MₓFe₃₋ₓO₄ (M = Co, Mn, Ni, Zn), phase separation via SD has been reported only for cobalt ferrite (CFO); however, long post-annealing times are required for the spinodal decomposition to occur. We have found that SD occurs in CFO thin films without any post-deposition annealing if a magnetic field is applied during thin film growth. Dynamic Aurora pulsed laser deposition (PLD) is a specially designed PLD system in which an in-situ magnetic field (up to 2000 G) can be applied during thin film growth. The in-situ magnetic field suppresses the recombination of ions in the plume. In addition, the peak intensities of the ions in the plume spectra increase when a magnetic field is applied to the plume. As a result, ions with high kinetic energy strike the substrate; thus, ion impingement occurs under the magnetic field during thin film growth. The driving force for SD is the ion impingement on the substrate induced by the in-situ magnetic field. In this study, we report on the occurrence of phase separation through SD and the evolution of the microstructure after phase separation in spinel ferrite thin films. The surface morphology of the phase-separated films shows a checkerboard-like domain structure, and their cross-sectional microstructure reveals columnar-type phase separation. Here, the decomposition wave propagates in the lateral direction, as confirmed by the lateral composition modulations in the spinodally decomposed films. Large magnetic anisotropy has been found in spinodally decomposed nickel ferrite (NFO) thin films. This approach confirms that the magnetic field is also an important thermodynamic parameter for inducing phase separation through the enhancement of uphill diffusion in thin films. This thin film deposition technique could be an efficient alternative for the fabrication of self-organized phase-separated thin films and could be employed to control the microstructure at the nanoscale.
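As standard background (not stated in the abstract), spinodal decomposition is usually described by the Cahn-Hilliard formalism, in which uphill diffusion occurs wherever the free-energy curvature is negative:

```latex
% Cahn-Hilliard description of spinodal decomposition (standard background)
\frac{\partial c}{\partial t}
  = \nabla \cdot \left[ M \, \nabla\!\left( \frac{\partial f}{\partial c}
  - \kappa \nabla^{2} c \right) \right],
\qquad \text{spinodal region: } \frac{\partial^{2} f}{\partial c^{2}} < 0
```

Here c is the local composition, M the mobility, f(c) the bulk free-energy density and κ the gradient-energy coefficient; inside the spinodal region the effective diffusion coefficient is negative, so composition modulations with a characteristic wavelength are amplified rather than smoothed out.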

Keywords: Dynamic Aurora PLD, magnetic anisotropy, spinodal decomposition, spinel ferrite thin film

Procedia PDF Downloads 341
245 Nonlinear Optics of Dirac Fermion Systems

Authors: Vipin Kumar, Girish S. Setlur

Abstract:

Graphene has been recognized as a promising 2D material with many new properties. However, pristine graphene is gapless, which hinders its direct application in graphene-based semiconducting devices. Graphene is a zero-gap, linearly dispersing semiconductor. Massless charge carriers (quasi-particles) in graphene obey the relativistic Dirac equation, and these Dirac fermions show very unusual electronic, optical and transport properties. Graphene is analogous to two-level atomic systems and conventional semiconductors, so we may expect graphene-based systems to also exhibit phenomena that are well known in two-level atomic systems and in conventional semiconductors. Rabi oscillation is a nonlinear optical phenomenon well known in the context of two-level atomic systems and also in conventional semiconductors: it is the periodic exchange of energy between the system of interest and the electromagnetic field. The present work describes the phenomenon of Rabi oscillations in graphene-based systems. Rabi oscillations have already been described theoretically and experimentally in the extensive literature on this topic, which relies on an approximation known as the rotating wave approximation (RWA), well known in studies of two-level systems. The RWA is valid only near conventional resonance (small detuning), when the frequency of the external field is nearly equal to the particle-hole excitation frequency. The Rabi frequency goes through a minimum close to conventional resonance as a function of detuning. Far from conventional resonance, the RWA becomes much less useful, and some other technique is needed to describe the phenomenon of Rabi oscillation. In conventional systems there is no second minimum: the only minimum is at conventional resonance. In graphene, however, we find anomalous Rabi oscillations far from conventional resonance, where the Rabi frequency goes through a minimum that is much smaller than the conventional Rabi frequency. This is known as the anomalous Rabi frequency and is unique to graphene systems. We have shown that it is attributable to the pseudo-spin degree of freedom in graphene systems. A new technique, an alternative to the RWA called the asymptotic RWA (ARWA), has been invoked by our group to describe this phenomenon. The experimentally accessible current density shows different types of threshold behaviour in the frequency domain close to the anomalous Rabi frequency, depending on the system chosen. For single-layer graphene, the exponent at threshold is equal to 1/2, while for bilayer graphene it is computed to be equal to 1. Bilayer graphene shows harmonic (anomalous) resonances absent in single-layer graphene. The effect of asymmetry and trigonal warping (a weak direct inter-layer hopping in bilayer graphene) on these oscillations is also studied. Asymmetry has a remarkable effect only on the anomalous Rabi oscillations, whereas the Rabi frequency near conventional resonance is not significantly affected by the asymmetry parameter. In the presence of asymmetry, these graphene systems show Rabi-like oscillations (offset oscillations) even for vanishingly small applied field strengths (less than the gap parameter). The frequency of the offset oscillations may be identified with the asymmetry parameter.
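For reference, the conventional two-level result that the abstract contrasts with the anomalous behaviour is the textbook generalized Rabi frequency obtained within the RWA (standard background, not a result of this work):

```latex
% Conventional two-level Rabi oscillation within the rotating wave approximation
\Omega_R = \frac{d\,E_0}{\hbar}, \qquad
\Omega = \sqrt{\Delta^{2} + \Omega_R^{2}}, \qquad
P_e(t) = \frac{\Omega_R^{2}}{\Omega^{2}} \sin^{2}\!\left(\frac{\Omega t}{2}\right)
```

Here Δ = ω − ω₀ is the detuning of the drive from the transition frequency, d the dipole matrix element and E₀ the field amplitude; the oscillation frequency Ω reaches its minimum value Ω_R at resonance (Δ = 0), which corresponds to the minimum near conventional resonance mentioned above.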

Keywords: graphene, bilayer graphene, Rabi oscillations, Dirac fermion systems

Procedia PDF Downloads 268
244 Effect of Fertilization and Combined Inoculation with Azospirillum brasilense and Pseudomonas fluorescens on Rhizosphere Microbial Communities of Avena sativa (Oats) and Secale Cereale (Rye) Grown as Cover Crops

Authors: Jhovana Silvia Escobar Ortega, Ines Eugenia Garcia De Salamone

Abstract:

Cover crops are an agri-technological alternative for improving soil properties. Cover crops such as oats and rye can be used to reduce erosion and favor system sustainability when they are grown in the same agricultural cycle as the soybean crop. Soybean is very profitable, but its low contribution of easily decomposable residues, due to their low C/N ratio, leaves the soil exposed to erosion and raises the need to reduce its monoculture. Furthermore, inoculation with plant growth-promoting rhizobacteria contributes to the establishment, development and production of several cereal crops. However, there is little information on its effects on forage crops, which are often used as cover crops to improve soil quality. In order to evaluate the effect of combined inoculation with Azospirillum brasilense and Pseudomonas fluorescens on rhizosphere microbial communities, field experiments were conducted in the west of Buenos Aires province, Argentina, using a split-split-plot randomized complete block factorial design with three replicates. The factors were type of cover crop, inoculation and fertilization. In the main plot, two levels of fertilization, 0 and 7 40-0-5 (NPKS), were established at sowing. Rye (Secale cereale cv. Quehué) and oats (Avena sativa var. Aurora) were sown in the subplots. In the sub-subplots, two inoculation treatments were applied: without and with a combined inoculant of A. brasilense and P. fluorescens. Because the growth of cover crops usually has to be stopped with the herbicide glyphosate, rhizosphere soil from the 0-20 and 20-40 cm layers was sampled at three times: before glyphosate application (BG), a month after glyphosate application (AG) and at soybean harvest (SH). Community-level physiological profiles (CLPP) and the Shannon index of microbial diversity (H) were obtained by multivariate principal component analysis. In addition, the most probable number (MPN) of nitrifiers and cellulolytics was determined using selective liquid media for each functional group. The CLPP of the rhizosphere microbial communities showed significant differences between sampling times. There was no interaction between sampling times and either the type of cover crop or inoculation. The rhizosphere microbial communities of samples obtained at BG had different CLPP from those of the samples obtained at AG and SH. Fertilizer and sampling depth also caused changes in the CLPP. The H diversity index of the rhizosphere microbial communities of rye at the BG sampling time was higher than that associated with oats. The MPN of both microbial functional groups was lower in the deeper layer, since these microorganisms are mostly aerobic. The MPN of nitrifiers decreased in the rhizosphere of both cover crops only at AG. At the BG sampling time, the MPN of both microbial groups was larger than the values obtained at AG and SH. This may mean that the glyphosate application could cause fairly permanent changes in these microbial communities, which can be considered bio-indicators of soil quality. Inoculation and fertilizer inputs could be included to improve the management of these cover crops because they can have a significant positive effect on the sustainability of the agro-ecosystem.
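For clarity, here is a minimal sketch (ours) of the Shannon diversity index H referred to above, as it is typically computed from CLPP substrate-utilization responses; the well responses below are placeholder values, not data from this study.

```python
import math

def shannon_index(responses):
    """Shannon diversity index H = -sum(p_i * ln p_i) over substrate responses."""
    total = sum(responses)
    proportions = [r / total for r in responses if r > 0]
    return -sum(p * math.log(p) for p in proportions)

# Placeholder CLPP data: colour-development responses for a set of carbon substrates.
well_responses = [0.42, 0.31, 0.05, 0.22, 0.57, 0.11]
print(f"H = {shannon_index(well_responses):.2f}")  # ≈ 1.57 for these placeholder values
```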

Keywords: community level of physiological profiles, microbial diversity, plant growth promoting rhizobacteria, rhizosphere microbial communities, soil quality, system sustainability

Procedia PDF Downloads 374
243 Prostheticly Oriented Approach for Determination of Fixture Position for Facial Prostheses Retention in Cases with Atypical and Combined Facial Defects

Authors: K. A. Veselova, N. V. Gromova, I. N. Antonova, I. N. Kalakutskii

Abstract:

There are many diseases and incidents that may result in facial defects and deformities: cancer, trauma, burns, congenital anomalies, and autoimmune diseases. In some cases, a patient may acquire an atypically extensive facial defect involving more than one anatomical region or, by contrast, an atypically small defect (e.g. a partial auricular defect). Anaplastology gives us the opportunity to help patients with facial disfigurement in cases when plastic surgery is contraindicated. Implant retention of facial prostheses is strongly recommended because it improves both aesthetic and functional results and makes wearing the prosthesis more comfortable. A prosthetically oriented fixture position is extremely important for long-term aesthetic and functional results; however, the optimal site for fixture placement is not clear in cases with an atypical defect configuration. The objective of this report is to demonstrate the challenges we have faced in determining fixture position and to offer a solution. In this report, four cases of implant-supported facial prostheses are described. Extra-oral implants four millimetres in length were used in all cases. The decision regarding the number of surgical stages was based on the history of the disease. Facial prostheses were manufactured according to the conventional technique. Clinical and technological difficulties and mistakes are described, and a prosthetically oriented approach for determining fixture position is demonstrated. A case with an atypically large combined orbital and nasal defect resulting from an arteriovenous malformation is described: correct positioning of the artificial eye was impossible due to the wrong position of the fixture (with suprastructure) located in the medial aspect of the supraorbital rim. The suprastructure was removed and this fixture was not used for retention, in order to achieve appropriate artificial eye placement and a better aesthetic result. In another case, with a small partial auricular defect (only the helix and antihelix were absent) caused by squamous cell carcinoma (T1N0M0), a surgical template was used to avoid such difficulties. To achieve a prosthetically oriented fixture position for this extremely small defect, the template was made on a preliminary cast using the vacuum thermoforming method. Two radiopaque markers were incorporated into the template at the positions preferred for fixture placement, taking into account the future prosthesis configuration. The template was placed on the remaining ear and cone-beam CT was performed to ensure that the amount of bone was sufficient for implant insertion in the preferred position. Before surgery, the radiopaque markers were removed and the template was perforated for the guide drill. Fabrication of implant-retained facial prostheses gives us the opportunity to improve aesthetics, retention and patients' quality of life. However, every inaccuracy in planning leads to challenges at the surgical and prosthetic stages. Moreover, in cases with atypically small or extended facial defects, a prosthetically oriented approach for determining fixture position is strongly required. The approach, including surgical template fabrication, is an effective, easy and cheap way to avoid mistakes and unpredictable results.

Keywords: anaplastology, facial prosthesis, implant-retained facial prosthesis, maxillofacial prosthesis

Procedia PDF Downloads 76
242 Explosive Clad Metals for Geothermal Energy Recovery

Authors: Heather Mroz

Abstract:

Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3) and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers and pressure vessels, towards higher alloys of stainless steel, nickel-based alloys and titanium. The use of these alloys is cost-prohibitive, and they do not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the clad surface metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining their mechanical, electrical and corrosion properties. The process is commonly used to clad steel with a thin layer of corrosion-resistant alloy, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Additionally, explosion welding can join a wide array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs, and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and the tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefits through reduced maintenance and lower capital costs compared with solid construction. The focus of this paper is the many benefits of using explosion clad in process equipment instead of more expensive solid alloy construction. It describes the method of clad-plate production by explosion welding as well as the methods employed to ensure sound bonding of the metals, and it also covers the origins of explosion cladding and recent technological developments. Traditionally, explosion-clad plate was formed into vessels, tube sheets and heads, but recent advances include explosion-welded piping. The final portion of the paper gives examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine are discussed, including stainless steels, nickel alloys and titanium. The examples include heat exchangers (tube sheets), high-pressure and horizontal separators, standard-pressure crystallizers, piping and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high temperature, high pressure and corrosive environments are typical. Applications for explosion-clad metals include vessel and heat exchanger components as well as piping.

Keywords: clad metal, explosion welding, separator material, well casing material, piping material

Procedia PDF Downloads 137
241 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers

Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang

Abstract:

In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding the influence of information flow and knowledge sharing within the network on patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
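
As a rough illustration of the workflow described above (coupling-network construction, centrality computation, and a negative binomial test of the inverted-U relation), the following Python sketch uses a tiny hypothetical set of patents and citation counts; networkx and statsmodels here stand in for whatever tooling the authors actually used, and the data are invented.

```python
# Illustrative sketch (not the study's code or data): build a small patent coupling
# network, compute centralities, and fit a negative binomial model with a squared
# term to probe an inverted-U relation between centrality and patent citations.
import networkx as nx
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: patent -> set of co-cited scientific papers (SNPRs)
patent_refs = {
    "P1": {"S1", "S2"}, "P2": {"S2", "S3"}, "P3": {"S1", "S3"},
    "P4": {"S4"},       "P5": {"S2", "S4"}, "P6": {"S3", "S4"},
}
citations = {"P1": 12, "P2": 30, "P3": 25, "P4": 3, "P5": 18, "P6": 22}

# Edge between two patents if they cite at least one scientific paper in common
G = nx.Graph()
G.add_nodes_from(patent_refs)
patents = list(patent_refs)
for i, a in enumerate(patents):
    for b in patents[i + 1:]:
        if patent_refs[a] & patent_refs[b]:
            G.add_edge(a, b)

df = pd.DataFrame({
    "degree": pd.Series(nx.degree_centrality(G)),
    "betweenness": pd.Series(nx.betweenness_centrality(G)),
    "closeness": pd.Series(nx.closeness_centrality(G)),
    "citations": pd.Series(citations),
})
df["degree_sq"] = df["degree"] ** 2  # squared term captures a possible inverted U

X = sm.add_constant(df[["degree", "degree_sq"]])
model = sm.GLM(df["citations"], X, family=sm.families.NegativeBinomial()).fit()
print(model.summary())  # a negative coefficient on degree_sq suggests an inverted U
```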

Keywords: centrality, patent coupling network, patent influence, social network analysis

Procedia PDF Downloads 28
240 Geodynamic Evolution of the Tunisian Dorsal Backland (Central Mediterranean) from the Cenozoic to Present

Authors: Aymen Arfaoui, Abdelkader Soumaya, Noureddine Ben Ayed

Abstract:

The study region is located in the Tunisian Dorsal Backland (Central Mediterranean), which is the easternmost part of the Saharan Atlas mountain range, trending southwest-northeast. Based on our fieldwork, seismic tomography images, seismicity, and previous studies, we propose an interpretation of the relationship between the surface deformation and fault kinematics in the study area and the internal dynamic processes acting in the Central Mediterranean from the Cenozoic to the present. The subduction and dynamics of internal forces beneath the complicated Maghrebides mobile belt have an impact on the Tertiary and Quaternary tectonic regimes in the Pelagian and Atlassic foreland that is part of our study region. The left-lateral reactivation of the major "Tunisian N-S Axis fault" and the development of a compressional relay between the Hammamet-Korbous and Messella-Ressas faults are possibly a result of tectonic stresses due to the slab roll-back following the Africa/Eurasia convergence. After the slab segmentation and its eastward migration (5–4 Ma) and the formation of the Strait of Sicily "rift zone" further east, a transtensional tectonic regime was established in this area. According to seismic tomography images, the STEP fault of the "North-South Axis" at Hammamet-Korbous coincides with the western edge of the "slab windows" of the Sicilian Channel and the eastern boundary of the positive anomalies attributed to the residual slab of Tunisia. On the other hand, significant E-W Plio-Quaternary tectonic activity may be observed along the eastern portion of this STEP fault system in the Grombalia zone as a result of recent vertical lithospheric motion in response to the lateral slab migration eastward towards the Sicily Channel. According to SKS fast splitting directions, the upper mantle flow pattern beneath the Tunisian Dorsal is parallel to the NE-SW to E-W orientation of the Shmin identified in the study area, similar to the Plio-Quaternary extensional orientation in the Central Mediterranean. Additionally, the removal of the lithosphere and the subsequent uplift of the sub-lithospheric mantle beneath the topographic highs of the Dorsal and its surroundings may be the cause of the dominant extensional to transtensional Quaternary regime. The occurrence of strike-slip and extensional seismic events in the Pelagian block reveals that the regional transtensional tectonic regime persists today. Finally, we believe that the geodynamic history of the study area since the Cenozoic is primarily influenced by the preexisting weak zones, the African slab detachment, and the upper mantle flow pattern in the central Mediterranean.

Keywords: Tunisia, lithospheric discontinuity (STEP fault), geodynamic evolution, Tunisian dorsal backland, strike-slip fault, seismic tomography, seismicity, central Mediterranean

Procedia PDF Downloads 45
239 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context

Authors: Nicole Merkle, Stefan Zander

Abstract:

Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning and machine learning) for capturing different aspects of the behavior, activities and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g. physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the users’ context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between a set of states. However, the AAL domain currently lacks sufficient approaches that address the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g. user activities) help cognitive, rule-based agents to reason and make decisions in order to help users in appropriate tasks and situations. Furthermore, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A certain situation can require different (re-)actions in order to achieve the best results with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user’s context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device-, service- and user profile representations. In this paper, we present how this semantic domain model can be used in order to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side-effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
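
The following Python sketch is an assumption-level illustration (not the authors' implementation) of one way the probability of a context rule could be updated online from observed sensor evidence and user actions, using a simple Bayesian update; the rule, its prior and the likelihood values are hypothetical.

```python
# Illustrative sketch: maintain a probability for each context rule and update it
# with Bayes' rule whenever sensor evidence / the user's actual action is observed.
from dataclasses import dataclass

@dataclass
class ContextRule:
    name: str
    prior: float                 # expert-defined prior probability that the rule applies
    p_evidence_if_true: float    # P(observed evidence | rule applies)
    p_evidence_if_false: float   # P(observed evidence | rule does not apply)

def update_rule(rule: ContextRule, evidence_observed: bool) -> float:
    """Return the posterior probability that the rule applies given the evidence."""
    p_e_true = rule.p_evidence_if_true if evidence_observed else 1 - rule.p_evidence_if_true
    p_e_false = rule.p_evidence_if_false if evidence_observed else 1 - rule.p_evidence_if_false
    numerator = p_e_true * rule.prior
    posterior = numerator / (numerator + p_e_false * (1 - rule.prior))
    rule.prior = posterior       # the posterior becomes the new prior (online learning)
    return posterior

# Hypothetical rule: "the user wants the lights dimmed when watching TV in the evening"
rule = ContextRule("dim_lights_tv_evening", prior=0.6,
                   p_evidence_if_true=0.9, p_evidence_if_false=0.2)
for observed in [True, True, False, True]:   # sensor / user-action observations over time
    print(rule.name, round(update_rule(rule, observed), 3))
```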

Keywords: ambient intelligence, machine learning, semantic web, software agents

Procedia PDF Downloads 259
238 Financial Policies in the Process of Global Crisis: Case Study Kosovo

Authors: Shpetim Rezniqi

Abstract:

The current global crisis has swept the world, affecting above all the most developed countries, those with the largest share of world gross product and the highest standards of living. Even non-experts can describe the visible consequences of the crisis, but how far it will go is impossible to predict. Even the foremost experts offer only conjecture, with wide divergence, yet they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free market laws, in the belief that the market regulates all economic problems. Like river water, the market would flow to find the best path and supply the necessary solution. Hence, fewer market barriers and less state intervention were advocated, with the market itself seen as economically self-regulating. The free market economy became the model of global economic development and progress; it transcended national barriers and became the law of development of the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. International organizations such as the World Bank, together with the economically powerful states, laid down the free market economy and the elimination of state intervention as principles of development and cooperation. The less state intervention and the more freedom of action for the market, the better: this was the leading international principle. We now live in an era of financial tragedy. Financial markets, and banking in particular, are in a dire state: US stock markets fell by about 40%, making this one of the darkest moments since 1920. It is rivalled only by the Wall Street collapse of 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur War, when the price of oil quadrupled, and the famous collapse of 1937/38, when Europe was entering World War II. In 2000, even though it seemed as if the end of the world was around the corner, the world economy survived almost intact; of course, there were small recessions in the United States, Europe and Japan. The situation was much more difficult in the crises of the 1930s and 1970s, yet the world pulled through. The recent financial crisis, however, shows every sign of being much sharper and having greater consequences. The decline in stock prices is more a byproduct of what is really happening. Financial markets began their dance of death with the credit crisis, which came as a result of the large increase in real estate prices and household debt. These last two phenomena can be matched very well with the gains of the 1920s, a period during which people spent as if there were no tomorrow. The word recession is now on everyone's lips, and it no longer comes as a sudden surprise. The more the financial markets melt down, the greater the risk of a troubled economy for years to come. The banking crisis in Japan, for example, proved to be much more severe than initially expected, partly because the assets on which most loans were based, especially land, kept falling in value; land prices in Japan have continued to fall for about 15 years (Adri Nurellari, published in the newspaper "Classifieds"). At this moment, it is still difficult to assess to what extent the crisis has affected the economy and what its consequences will be.
What we do know is that many banks will need more time to reduce the granting of credit; since lending is their primary function, this means huge losses.

Keywords: globalisation, finance, crisis, recommendation, bank, credits

Procedia PDF Downloads 359
237 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches

Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys

Abstract:

Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performance, faster time to market and longer lifetime, newer materials and mixed build-ups. From the very beginning of the PCB industry up to recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers and scientists are working together closely in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize precisely the base materials (laminates, electrolytic copper, …), in order to understand failure mechanisms and simulate PCB aging under environmental constraints, by means of the finite element method for example. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated due to the thickness of the laminate (a few hundred microns). It has to be noted that the knowledge of the out-of-plane properties is fundamental to investigate the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. The methodology has been applied to one laminate used in hyperfrequency spatial applications in order to get its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through-hole in a double-sided PCB are performed. Results show the major influence of the out-of-plane properties, and of their temperature dependency, on the lifetime of a printed circuit board. Acknowledgements—The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, support of CNES, Thales Alenia Space and Cimulec is acknowledged.
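
As a hedged illustration of the inverse step described above, the following Python sketch tunes an unknown resin modulus until a stand-in homogenization (a simple Voigt/Reuss mixture, not the paper's analytical/numerical scheme) reproduces a hypothetical measured in-plane modulus; all numerical values are assumptions.

```python
# Minimal sketch of the inverse idea: adjust the resin modulus so that a simple
# homogenized in-plane modulus matches the experimentally measured value.
from scipy.optimize import brentq

E_fiber = 73.0e9      # glass fibre Young's modulus [Pa] (typical literature value)
V_fiber = 0.5         # fibre volume fraction (hypothetical)
E_measured = 24.0e9   # measured in-plane laminate modulus [Pa] (hypothetical)

def homogenized_E_inplane(E_resin: float) -> float:
    """Stand-in homogenization: average of Voigt (along fibres) and Reuss (across
    fibres) bounds, roughly mimicking a balanced woven ply."""
    E_voigt = V_fiber * E_fiber + (1 - V_fiber) * E_resin
    E_reuss = 1.0 / (V_fiber / E_fiber + (1 - V_fiber) / E_resin)
    return 0.5 * (E_voigt + E_reuss)

# Inverse step: find the resin modulus that reproduces the measured in-plane modulus
E_resin_est = brentq(lambda E: homogenized_E_inplane(E) - E_measured, 0.1e9, 20e9)
print(f"Estimated resin modulus: {E_resin_est / 1e9:.2f} GPa")
```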

Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites

Procedia PDF Downloads 174
236 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation

Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang

Abstract:

The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, and uniform or pulsatile flow at the OG. We have hypothesized that an optimal implantation of an LVAD is patient-specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of cardiac output at the aortic arch together with other pertinent hemodynamic quantities for each patient under various implantation scenarios, aiming to obtain an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aorta from the patient’s CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from an echo ultrasound image of the same patient, into the computational model to quantify 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes. Thus, it has great potential for massive numerical simulation and analysis. The systematic evaluation for one patient includes three OG anastomosis sites (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, five random patient cases will be evaluated.
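
The fifty-four scenarios follow directly from the combinations listed above (3 outflow-graft sites x 3 pumping ratios x 3 anastomosis angles x 2 inflow conditions). The following Python sketch simply enumerates that grid and ranks the scenarios by a placeholder cardiac-output function; it is not the InVascular code, and simulate_cardiac_output is a hypothetical stand-in for the solver.

```python
# Sketch of the systematic evaluation grid (hemodynamic results would be supplied
# externally by the patient-specific solver; a placeholder function is used here).
from itertools import product

og_sites   = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
pump_ratio = ["1:1", "1:2", "1:3"]          # LVAD : native heart pumping
og_angles  = ["inclined upward", "perpendicular", "inclined downward"]
inflow     = ["uniform", "pulsatile"]

def simulate_cardiac_output(site, ratio, angle, flow):
    """Placeholder for the patient-specific hemodynamic simulation."""
    return 0.0  # would be replaced by the solver's aortic-arch cardiac output [L/min]

scenarios = list(product(og_sites, pump_ratio, og_angles, inflow))
assert len(scenarios) == 54  # 3 x 3 x 3 x 2 scenarios, as in the abstract

results = {s: simulate_cardiac_output(*s) for s in scenarios}
best = max(results, key=results.get)
print("Suggested implantation:", best)
```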

Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics

Procedia PDF Downloads 111
235 Healthcare Associated Infections in an Intensive Care Unit in Tunisia: Incidence and Risk Factors

Authors: Nabiha Bouafia, Asma Ben Cheikh, Asma Ammar, Olfa Ezzi, Mohamed Mahjoub, Khaoula Meddeb, Imed Chouchene, Hamadi Boussarsar, Mansour Njah

Abstract:

Background: Hospital-acquired infections (HAIs) cause significant morbidity, mortality, increased length of stay and hospital costs, especially in the intensive care unit (ICU), because of the debilitated immune systems of its patients and their exposure to invasive devices. The aims of this study were to determine the rate and the risk factors of HAIs in an ICU of a university hospital in Tunisia. Materials/Methods: A prospective study was conducted in the 8-bed adult medical ICU of a University Hospital (Sousse, Tunisia) over 14 months, from September 15th, 2015 to November 15th, 2016. Patients admitted for more than 48h were included. Their surveillance was stopped after discharge from the ICU or death. HAIs were defined according to standard Centers for Disease Control and Prevention criteria. Risk factors were analyzed by conditional stepwise logistic regression. A p-value of < 0.05 was considered significant. Results: During the study, 192 patients were admitted for more than 48 hours. Their mean age was 59.3 ± 18.20 years and 57.1% were male. Acute respiratory failure was the main reason for admission (72%). The mean SAPS II score calculated at admission was 32.5 ± 14 (range: 6 - 78). Exposure to mechanical ventilation (MV) and to a central venous catheter (CVC) was observed in 169 (88%) and 144 (75%) patients, respectively. Seventy-three patients (38.02%) developed 94 HAIs. The incidence density of HAIs was 41.53 per 1,000 patient-days. The mortality rate in patients with HAIs was 65.8% (n = 48). Regarding the type of infection, Ventilator-Associated Pneumonia (VAP) and central venous catheter-associated infections (CVC-AI) were the most frequent, with incidence densities of 14.88/1,000 MV-days for VAP and 20.02/1,000 CVC-days for CVC-AI. There were 5 peripheral venous catheter-associated infections, 2 urinary tract infections, and 21 other HAIs. Gram-negative bacteria were the most common germs identified in HAIs: multidrug-resistant Acinetobacter baumannii (45%) and Klebsiella pneumoniae (10.96%) were the most frequently isolated. Univariate analysis showed that transfer from another hospital department (p = 0.001), intubation (p < 10-4), tracheostomy (p < 10-4), age (p = 0.028), grade of acute respiratory failure (p = 0.01), duration of sedation (p < 10-4), number of CVCs (p < 10-4), length of mechanical ventilation (p < 10-4) and length of stay (p < 10-4) were associated with a high risk of HAIs in the ICU. Multivariate analysis revealed that the independent risk factors for HAIs were: transfer from another hospital department (OR = 13.44, 95% CI [3.9, 44.2], p < 10-4), duration of sedation (OR = 1.18, 95% CI [1.049, 1.325], p = 0.006), high number of CVCs (OR = 2.78, 95% CI [1.73, 4.487], p < 10-4), and length of stay in the ICU (OR = 1.14, 95% CI [1.066, 1.22], p < 10-4). Conclusion: Prevention of nosocomial infections in ICUs is a priority for health care systems all around the world. Yet, their control requires an understanding of the epidemiological data collected in these units.
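
As a hedged illustration of the epidemiological calculations described above, the following Python sketch computes an incidence density per 1,000 patient-days and fits a logistic regression on a small synthetic cohort; the cohort and coefficients are invented for illustration and do not reproduce the study's data.

```python
# Illustrative sketch (synthetic data, not the study's dataset): incidence density
# and a logistic regression of HAI occurrence on candidate risk factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

n_hais = 94            # HAIs observed (from the abstract)
patient_days = 2263    # approximate patient-days implied by 41.53 per 1,000 patient-days
print(f"Incidence density: {n_hais / patient_days * 1000:.1f} per 1,000 patient-days")

# Synthetic cohort purely to illustrate the multivariate step
rng = np.random.default_rng(0)
n = 192
df = pd.DataFrame({
    "transfer": rng.integers(0, 2, n),        # transferred from another department (0/1)
    "sedation_days": rng.integers(0, 15, n),  # duration of sedation [days]
    "n_cvc": rng.integers(0, 4, n),           # number of central venous catheters
    "los_icu": rng.integers(2, 40, n),        # length of stay in the ICU [days]
})
logit = (0.8 * df["transfer"] + 0.1 * df["sedation_days"]
         + 0.4 * df["n_cvc"] + 0.05 * df["los_icu"] - 3.0)
df["hai"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["transfer", "sedation_days", "n_cvc", "los_icu"]])
fit = sm.Logit(df["hai"], X).fit(disp=False)
print(np.exp(fit.params))  # exponentiated coefficients approximate odds ratios (OR)
```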

Keywords: healthcare associated infections, incidence, intensive care unit, risk factors

Procedia PDF Downloads 350
234 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. It can be done by numerical models or experimental measurements, but the numerical approach can be useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness in predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m3 chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two different air change rates per hour (ACH). The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0, and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) ×10-2 m/s and (8.88 ± 0.38) ×10-2 m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, there is still a difference between the actual value and the predicted value. In the emission period, the modified WMR results closely follow the experimental data. However, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and uncertainty related to measurement devices and particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the deposition rate given by the mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant and will affect the airborne concentration in occupational settings, and it should be considered in the airborne exposure prediction model. The role of other removal mechanisms should be investigated.
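
A minimal sketch of the modified mass balance, assuming a first-order deposition loss term added to the standard well-mixed room equation, dC/dt = G/V - (ACH + k_dep) C; the generation rate, deposition velocity and surface area below are hypothetical, while the chamber volume and lowest ACH are taken from the abstract.

```python
# Well-mixed room balance extended with a particle deposition loss term
# (assumption-level illustration, not the authors' exact implementation).
import numpy as np
from scipy.integrate import solve_ivp

V = 0.512            # chamber volume [m^3] (from the abstract)
ACH = 1.4            # air change rate [1/h] (lowest value from the abstract)
G = 5.0e5            # particle generation rate [particles/h] (hypothetical)
v_dep = 0.02         # effective deposition velocity [m/h] (hypothetical)
A = 3.84             # interior surface area [m^2] (hypothetical)
k_dep = v_dep * A / V   # first-order deposition loss rate [1/h]

def wmr(t, C, emitting):
    gen = G / V if emitting else 0.0
    return gen - (ACH + k_dep) * C

t_eval = np.linspace(0, 2, 100)                              # 2 h per period
emission = solve_ivp(wmr, (0, 2), [0.0], args=(True,), t_eval=t_eval)
decay = solve_ivp(wmr, (0, 2), [emission.y[0, -1]], args=(False,), t_eval=t_eval)

print("Quasi steady-state concentration:", emission.y[0, -1])
print("Concentration after 2 h of decay:", decay.y[0, -1])
```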

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 80
233 Robotics Education Continuity from Diaper Age to Doctorate

Authors: Vesa Salminen, Esa Santakallio, Heikki Ruohomaa

Abstract:

Introduction: The city of Riihimäki has chosen robotics in well-being, services and industry as the main focus area of its ecosystem strategy. Robotics is going to be an important part of the everyday life of citizens and present in the working day of the average citizen and employee in the future. For that reason, the education system and education programs at all levels, from diaper age to doctorate, have been directed to fulfil this ecosystem strategy. Goal: The objective of this activity has been to develop education continuity from diaper age to doctorate. The main target of the development activity is to create a unique robotics study entity that enables ongoing robotics studies from preprimary education to university. The aim is also to attract students internationally and supply a skilled workforce to the private sector, capable of meeting the challenges of the future. Methodology: Educational institutions (high schools, secondary level, universities at all levels) in a large area of Tavastia Province have gradually directed their education programs to support this goal. In addition, applied research projects have been created to run proof-of-concept phases in regional real-environment field labs, testing technology opportunities and using digitalization to change business processes by applying robotic solutions. Customer-oriented applied research projects offer students in robotics education learning environments in which to acquire new knowledge and content. They are also learning environments through which education programs can adapt and co-evolve. New content and problem-based learning are used in future education modules. Major findings: A joint robotics education entity is being developed in cooperation with the city of Riihimäki (primary education), Syria Education (secondary education) and HAMK (bachelor and master education). The education modules have been developed to enable smooth transitioning from one institute to another. This article introduces a case study of how wellbeing education is changing because of digitalization and robotics. Riihimäki's elderly citizens' service house, Riihikoti, has been working as a field lab for proof-of-concept phases testing technology opportunities. Following successful case studies, education programs at various levels have also been changing. Riihikoti has been developed as a physical learning environment for home care and robotics, for investigating and developing a variety of digital devices and service opportunities, and for experimenting with and learning the use of the equipment. The environment enables the co-development of digital service capabilities in an authentic environment for all interested groups in transdisciplinary cooperation.

Keywords: ecosystem strategy, digitalization and robotics, education continuity, learning environment, transdisciplinary co-operation

Procedia PDF Downloads 149
232 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria

Authors: Tomola Obamuyi

Abstract:

The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. Some of the innovative services include: Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (Web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customers' satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance contribution of each random innovation to the variables in the VAR, to examine the effect of a standard deviation shock to one of the innovations on current and future values of the impulse response, and to determine the causal relationship between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations and the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector. On the other hand, the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis was done for the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response function derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT and PoS transactions was positive and significant in the periods of analysis. The paper also confirmed, using the entropy and range of values methods, that in the long run both cheques and MMO performed best, while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage more usage/patronage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country.
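
As a hedged illustration of the MCDM step described above, the following Python sketch applies entropy weighting to a hypothetical innovation-by-criterion matrix (ROA, ROE) and then produces a range-of-values-style ranking; the scores and the list of innovations are invented for illustration, and the exact ROV formulation used in the study may differ.

```python
# Illustrative sketch of entropy weighting plus a simple min-max (range-of-values
# style) ranking. Rows are innovations, columns are the performance criteria.
import numpy as np

innovations = ["ATM", "NEFT", "PoS", "Web", "MMO", "RTGS", "Cheque"]
X = np.array([          # hypothetical contribution scores per criterion [ROA, ROE]
    [0.62, 0.55],
    [0.80, 0.72],
    [0.70, 0.66],
    [0.50, 0.58],
    [0.85, 0.78],
    [0.45, 0.40],
    [0.88, 0.81],
])

P = X / X.sum(axis=0)                                  # normalise each criterion column
m = X.shape[0]
entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)     # Shannon entropy per criterion
weights = (1 - entropy) / (1 - entropy).sum()          # higher weight = more informative criterion
print(dict(zip(["ROA", "ROE"], weights.round(3))))

norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # min-max normalisation
scores = norm @ weights                                        # weighted aggregate score
for name, s in sorted(zip(innovations, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```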

Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression

Procedia PDF Downloads 91
231 Academia as Creator of Emerging, Innovative Communities of Practice and Learning

Authors: Francisco Julio Batle Lorente

Abstract:

The present paper aims at presenting a new category of role for academia: proactive creator/promoter of communities of practice in emerging areas of innovation. It is based on research among practitioners in three different areas: social entrepreneurship, alumni engaged in entrepreneurship and innovation, and digital nomads. The concept of a CoP is related to an intentionally created space to share experiences and collectively reflect on the cases arising from practice. Such an endeavour is not explicitly contemplated in the literature on academic roles. The goal of the paper is to provide a framework for this function and to shed some light on the perceptions and priorities of members of emerging communities (78 alumni, 154 social entrepreneurs, and 231 digital nomads) regarding community, learning, engagement, and networking, areas in which the university can help and, by doing so, contribute to signalling the emerging area and creating new opportunities for academia. The research methodology was based on survey research, a specific type of field study that involves the collection of data from a sample of elements drawn from a well-defined population through the use of a questionnaire. It was considered that survey research might be valuable to the present project and help outline the utility of various study designs and future projects with the emerging communities that are the object of the investigation. Open questions were used for different topics, as well as the critical incident technique. A standard technique was used for survey sampling and questionnaire design. Finally, a procedure was defined for pretesting questionnaires and for data collection. The questionnaire was distributed by means of Google Forms. The results indicate that the members of emerging, innovative communities of practice and learning, such as the ones selected for this investigation, lack cohesion, inspiration, networking, opportunities for the creation of social capital, and opportunities for collaboration beyond their existing, close network. The opportunity that arises for academia from proactively helping articulate CoPs (and communities of learning) is related to key elements of any CoP/CoL: community construction approaches, technological infrastructure, benefits, participation issues and urgent challenges, trust, networking, technical ability/training/development and collaboration. Beyond training, three other areas (networking, collaboration and urgent challenges) were the ones in which the contribution of universities to the communities was considered most interesting and workable by practitioners. The analysis of the responses to the open questions related to the perception of universities offers options for terra incognita to be explored by universities (signalling new areas, establishing broader collaborations with research, government, media and corporations, and attracting investment). Based on the findings from this research, there is some evidence that CoPs can offer a formal and informal method of professional and interprofessional development for members of any emerging and innovative community and can decrease social and professional isolation. The opportunity that it offers to academia can strengthen the entrepreneurial and engaged university identity. It also moves academia into a realm of civic confrontation of present and future challenges in a more proactive way.

Keywords: social innovation, new roles of academia, community of learning, community of practice

Procedia PDF Downloads 52
230 Different Types of Bismuth Selenide Nanostructures for Targeted Applications: Synthesis and Properties

Authors: Jana Andzane, Gunta Kunakova, Margarita Baitimirova, Mikelis Marnauza, Floriana Lombardi, Donats Erts

Abstract:

Bismuth selenide (Bi₂Se₃) is known as a narrow band gap semiconductor with pronounced thermoelectric (TE) and topological insulator (TI) properties. Unique TI properties offer exciting possibilities for fundamental research, such as observing the exciton condensate and Majorana fermions, as well as practical applications in spintronics and quantum information. In turn, the TE properties of this material can be applied to a wide range of thermoelectric applications, as well as to broadband photodetectors and near-infrared sensors. Nanostructuring of this material results in improvement of the TI properties due to suppression of the bulk conductivity, and enhancement of the TE properties because of increased phonon scattering at the nanoscale grains and interfaces. Regarding TE properties, the crystallographic growth direction, as well as the orientation of the nanostructures relative to the growth substrate, play a significant role in improving the TE performance of the nanostructured material. For instance, Bi₂Se₃ layers consisting of randomly oriented nanostructures, and/or of a combination of them with planar nanostructures, show significantly enhanced TE properties in comparison with bulk material and purely planar Bi₂Se₃ nanostructures. In this work, a catalyst-free vapour-solid deposition technique was applied for the controlled growth of different types of Bi₂Se₃ nanostructures and continuous nanostructured layers for targeted applications. For example, separated Bi₂Se₃ nanoplates, nanobelts and nanowires can be used for investigations of TI properties, while Bi₂Se₃ layers consisting of merged planar and/or randomly oriented nanostructures are useful for applications in heat-to-power conversion devices and infrared detectors. The vapour-solid deposition was carried out using a quartz tube furnace (MTI Corp), equipped with an inert gas supply and a pressure/temperature control system. Bi₂Se₃ nanostructures/nanostructured layers of the desired type were obtained by adjustment of the synthesis parameters (process temperature, deposition time, pressure, carrier gas flow) and selection of the deposition substrate (glass, quartz, mica, indium-tin-oxide, graphene and carbon nanotubes). The morphology, structure and composition of the obtained Bi₂Se₃ nanostructures and nanostructured layers were inspected using SEM, AFM, EDX and HRTEM techniques, as well as a home-built experimental setup for thermoelectric measurements. It was found that introducing a temporary carrier gas flow into the process tube during the synthesis, as well as the choice of deposition substrate, significantly influences the nanostructure formation mechanism. The electrical, thermoelectric, and topological insulator properties of the different types of deposited Bi₂Se₃ nanostructures and nanostructured coatings are characterized as a function of thickness and discussed.

Keywords: bismuth selenide, nanostructures, topological insulator, vapour-solid deposition

Procedia PDF Downloads 207
229 Tailoring Quantum Oscillations of Excitonic Schrodinger’s Cats as Qubits

Authors: Amit Bhunia, Mohit Kumar Singh, Maryam Al Huwayz, Mohamed Henini, Shouvik Datta

Abstract:

We report [https://arxiv.org/abs/2107.13518] the experimental detection and control of a Schrodinger's-Cat-like, macroscopically large, quantum coherent state of a two-component Bose-Einstein condensate of spatially indirect electron-hole pairs, or excitons, using a resonant tunneling diode of III-V semiconductors. This provides access to millions of excitons as qubits to allow efficient, fault-tolerant quantum computation. In this work, we measure phase-coherent periodic oscillations in photo-generated capacitance as a function of an applied voltage bias and light intensity over a macroscopically large area. Periodic presence and absence of splitting of excitonic peaks in the optical spectra measured by photocapacitance point towards tunneling-induced variations in capacitive coupling between the quantum well and quantum dots. Observation of negative ‘quantum capacitance’ due to a screening of charge carriers by the quantum well indicates Coulomb correlations of interacting excitons in the plane of the sample. We also establish that coherent resonant tunneling in this well-dot heterostructure restricts the available momentum space of the charge carriers within this quantum well. Consequently, the electric polarization vector of the associated indirect excitons collectively orients along the direction of the applied bias, and these excitons undergo Bose-Einstein condensation below ~100 K. Generation of interference beats in the photocapacitance oscillation even with incoherent white light further confirms the presence of stable, long-range spatial correlation among these indirect excitons. We finally demonstrate collective Rabi oscillations of these macroscopically large, ‘multipartite’, two-level, coupled and uncoupled quantum states of the excitonic condensate as qubits. Therefore, our study not only brings the physics and technology of Bose-Einstein condensation within the reach of semiconductor chips but also opens up experimental investigations of the fundamentals of quantum physics using similar techniques. Operational temperatures of such a two-component excitonic BEC can be raised further with a more densely packed, ordered array of QDs and/or using materials having larger excitonic binding energies. However, fabrication of single crystals of 0D-2D heterostructures using 2D materials (e.g., transition metal dichalcogenides, oxides, perovskites, etc.) having larger excitonic binding energies is still an open challenge for semiconductor optoelectronics. As of now, these 0D-2D heterostructures can already be scaled up for mass production of miniaturized, portable quantum optoelectronic devices using the existing III-V and/or nitride-based semiconductor fabrication technologies.
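
As an idealized illustration of the two-level Rabi dynamics invoked above (not the analysis of the photocapacitance data reported in the abstract), the following Python sketch evaluates the textbook excited-state population under resonant drive with a phenomenological decoherence envelope; the Rabi frequency and T2 values are hypothetical.

```python
# Textbook two-level (qubit) Rabi oscillation with a phenomenological decoherence
# envelope that damps the oscillation towards the mixed-state value of 1/2.
import numpy as np

def rabi_population(t, rabi_freq_hz, t2=np.inf):
    """P_excited(t) = 0.5 * (1 - cos(2*pi*f_R*t) * exp(-t/T2)) for resonant drive."""
    omega = 2 * np.pi * rabi_freq_hz
    return 0.5 * (1.0 - np.cos(omega * t) * np.exp(-t / t2))

t = np.linspace(0.0, 5e-6, 11)     # 0-5 microseconds (hypothetical time scale)
print(rabi_population(t, rabi_freq_hz=1e6, t2=3e-6).round(3))
```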

Keywords: exciton, Bose-Einstein condensation, quantum computation, heterostructures, semiconductor physics, quantum fluids, Schrodinger's Cat

Procedia PDF Downloads 159