Search results for: urban consumption space
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9603

393 Nephrotoxicity and Hepatotoxicity Induced by Chronic Aluminium Exposure in Rats: Impact of Nutrients Combination versus Social Isolation and Protein Malnutrition

Authors: Azza A. Ali, Doaa M. Abd El-Latif, Amany M. Gad, Yasser M. A. Elnahas, Karema Abu-Elfotuh

Abstract:

Background: Exposure to aluminium (Al) has increased recently. It is found in food products, food additives, drinking water, cosmetics and medicines. Chronic consumption of Al causes oxidative stress and has been implicated in several chronic disorders. The liver is considered the major site of detoxification, while the kidney is involved in the elimination of toxic substances and is a target organ of metal toxicity. Social isolation (SI) or protein malnutrition (PM) also causes oxidative stress and has a negative impact on Al-induced nephrotoxicity as well as hepatotoxicity. Coenzyme Q10 (CoQ10) is a powerful intracellular antioxidant with mitochondrial membrane stabilizing ability, wheatgrass is a natural product with antioxidant, anti-inflammatory and other protective activities, and cocoa is also a potent antioxidant that can protect against many diseases. They provide different degrees of protection from the impact of oxidative stress. Objective: To study the impact of social isolation together with protein malnutrition on nephro- and hepato-toxicity induced by chronic Al exposure in rats, and to investigate the postulated protection using a combination of CoQ10, wheatgrass and cocoa. Methods: Eight groups of rats were used; four served as protected groups and four as unprotected. All groups except the control received AlCl3 (70 mg/kg, IP) daily for five weeks to induce the Al-toxicity model. The Al-toxicity model groups were divided into Al-toxicity alone, SI-associated PM (10% casein diet) and Al-associated SI&PM groups. Protection was induced by oral co-administration of a CoQ10 (200 mg/kg), wheatgrass (100 mg/kg) and cocoa powder (24 mg/kg) combination together with Al. Biochemical changes in total bilirubin, lipids, cholesterol, triglycerides, glucose, proteins, creatinine and urea, as well as alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP) and lactate dehydrogenase (LDH), were measured in the serum of all groups. Specimens of kidney and liver were used for assessment of oxidative parameters (MDA, SOD, TAC, NO), inflammatory mediators (TNF-α, IL-6β, nuclear factor kappa B (NF-κB), Caspase-3) and DNA fragmentation, in addition to evaluation of histopathological changes. Results: SI together with PM severely enhanced the nephro- and hepato-toxicity induced by chronic Al exposure. The CoQ10, wheatgrass and cocoa combination showed clear protection against the hazards of Al exposure, either alone or when associated with SI&PM. Their protection was indicated by the significant decrease in Al-induced elevations in total bilirubin, lipids, cholesterol, triglycerides, glucose, creatinine and urea levels, as well as ALT, AST, ALP and LDH. The liver and kidney of the treated groups also showed a significant decrease in MDA, NO, TNF-α, IL-6β, NF-κB, caspase-3 and DNA fragmentation, together with a significant increase in total proteins, SOD and TAC. The biochemical results were confirmed by the histopathological examinations. Conclusion: SI together with PM represents a risk factor in enhancing the nephro- and hepato-toxicity induced by Al in rats. The CoQ10, wheatgrass and cocoa combination provides clear protection against nephro- and hepatotoxicity as well as the consequent degenerations induced by chronic Al exposure, even when associated with the risk of SI together with PM.

Keywords: aluminum, nephrotoxicity, hepatotoxicity, isolation and protein malnutrition, coenzyme Q10, wheatgrass, cocoa, nutrients combinations

Procedia PDF Downloads 235
392 Multisensory Science, Technology, Engineering and Mathematics Learning: Combined Hands-on and Virtual Science for Distance Learners of Food Chemistry

Authors: Paulomi Polly Burey, Mark Lynch

Abstract:

It has been shown that laboratory activities can help cement understanding of theoretical concepts, but it is difficult to deliver such an activity to an online cohort, and issues such as occupational health and safety in the students’ learning environment need to be considered. Chemistry, in particular, is one of the sciences where practical experience is beneficial for learning; however, typical university experiments may not be suitable for the learning environment of a distance learner. Food provides an ideal medium for demonstrating chemical concepts, and along with a few simple physical and virtual tools provided by educators, analytical chemistry can be experienced by distance learners. Food chemistry experiments were designed to be carried out in a home-based environment that 1) had sufficient scientific rigour and skill-building to reinforce theoretical concepts; 2) were safe for use at home by university students; and 3) had the potential to enhance student learning by linking simple hands-on laboratory activities with high-level virtual science. Two main components of the resources were developed: a home laboratory experiment component and a virtual laboratory component. For the home laboratory component, students were provided with laboratory kits, as well as a list of supplementary inexpensive chemical items that they could purchase from hardware stores and supermarkets. The experiments used were typical proximate analyses of food, as well as experiments focused on techniques such as spectrophotometry and chromatography. Written instructions for each experiment, coupled with video laboratory demonstrations, were used to train students in appropriate laboratory technique. Data that students collected in their home laboratory environment were collated across the class through shared documents, so that the group could carry out statistical analysis and experience a full laboratory experience from their own homes. For the virtual laboratory component, students were able to view a laboratory safety induction and were advised on good characteristics of a home laboratory space prior to carrying out their experiments. Following on from this activity, students observed laboratory demonstrations of the experimental series they would carry out in their learning environment. Finally, students were embedded in a virtual laboratory environment to experience complex chemical analyses with equipment that would be too costly and sensitive to be housed in their learning environment. To investigate the impact of the intervention, students were surveyed before and after the laboratory series to evaluate engagement and satisfaction with the course. Students were also assessed on their understanding of theoretical chemical concepts before and after the laboratory series to determine the impact on their learning. At the end of the intervention, focus groups were run to determine which aspects helped and hindered learning. It was found that the physical experiments helped students to understand laboratory technique, as well as methodology interpretation, particularly if they had not been in such a laboratory environment before. The virtual learning environment aided learning as it could be utilized for longer than a typical physical laboratory class, thus allowing further time for understanding techniques.

Keywords: chemistry, food science, future pedagogy, STEM education

Procedia PDF Downloads 149
391 EEG and DC-Potential Level Changes in the Elderly

Authors: Irina Deputat, Anatoly Gribanov, Yuliya Dzhos, Alexandra Nekhoroshkova, Tatyana Yemelianova, Irina Bolshevidtseva, Irina Deryabina, Yana Kereush, Larisa Startseva, Tatyana Bagretsova, Irina Ikonnikova

Abstract:

In the modern world, the number of elderly people is increasing, and preserving the functionality of the organism in old age has become very important. During aging, higher cortical functions such as sensation, perception, attention, memory, and ideation gradually decline. This is expressed in a reduced rate of information processing, a loss of operative (working) memory capacity, and a decreased ability to learn and store new information. Promising directions in studying the neurophysiological parameters of aging are brain imaging methods: computer electroencephalography, neuroenergy brain mapping, and methods for studying neurodynamic brain processes. Research aim: to study features of brain aging in elderly people using the electroencephalogram (EEG) and the DC-potential level. We examined 130 people aged 55-74 years who did not have psychiatric disorders or chronic conditions in the decompensation stage. EEG was recorded with a 128-channel GES-300 system (USA). EEG recordings were collected while the participant sat at rest with their eyes closed for 3 minutes. For quantitative assessment of the EEG we used spectral analysis. The spectrum was analyzed in the delta (0.5–3.5 Hz), theta (3.5–7.0 Hz), alpha-1 (7.0–11.0 Hz), alpha-2 (11.0–13.0 Hz), beta-1 (13.0–16.5 Hz) and beta-2 (16.5–20.0 Hz) ranges, and spectral power was estimated in each frequency range. The 12-channel hardware-software diagnostic complex ‘Neuroenergometr-KM’ was used for registration, processing and analysis of the brain DC-potential level. The DC-potential level was registered in monopolar leads. The EEG of elderly people showed higher spectral power in the delta (p < 0.01) and theta (p < 0.05) ranges with aging, especially in frontal areas. The comparative analysis showed that elderly people aged 60-64 had higher alpha-2 spectral power in the left frontal and central areas (p < 0.05), as well as higher beta-1 power in frontal and parieto-occipital areas (p < 0.05). Study of the DC-potential level distribution revealed an increase in total energy consumption over the main brain areas. The lowest DC-potential values were registered in frontal leads, possibly indicating decreased energy metabolism in this area and difficulties with executive functions. Comparative analysis of the potential difference across the main leads indicates uneven lateralization of brain functions in elderly people, and the potential difference between the right and left hemispheres indicates a predominance of left-hemisphere activity. Thus, higher functional activity of the cerebral cortex is characteristic of people in early old age (60-64 years), pointing to greater reserve capacity of the central nervous system. By the age of 70, age-related changes in cerebral energy exchange and in the level of brain electrogenesis appear, reflecting deterioration of homeostatic self-regulation mechanisms and of the processing of incoming perceptual data.
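
As a rough illustration of the band-power computation mentioned above, the sketch below estimates spectral power in the quoted frequency bands for a single EEG channel using Welch's method; the sampling rate, window length and synthetic signal are assumptions for illustration only, not parameters of the GES-300 recordings.

```python
# Sketch: band-limited spectral power for one EEG channel, using the band edges
# quoted above (delta 0.5-3.5 Hz, ..., beta-2 16.5-20 Hz). The sampling rate,
# window length and synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 250.0  # assumed sampling rate, Hz
BANDS = {
    "delta":  (0.5, 3.5),
    "theta":  (3.5, 7.0),
    "alpha1": (7.0, 11.0),
    "alpha2": (11.0, 13.0),
    "beta1":  (13.0, 16.5),
    "beta2":  (16.5, 20.0),
}

def band_powers(eeg, fs=FS):
    """Estimate spectral power in each frequency band for a 1-D EEG trace."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # 4-second Welch windows
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

# Example on 3 minutes of synthetic 'resting-state' data
rng = np.random.default_rng(0)
print(band_powers(rng.normal(size=int(180 * FS))))
```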

Keywords: brain, DC-potential level, EEG, elderly people

Procedia PDF Downloads 466
390 Preparedness Level of Disaster Management Institutions in Context of Floods in Delhi

Authors: Aditi Madan, Jayant Kumar Routray

Abstract:

Purpose: Over the years, flood-related risks have compounded due to increasing vulnerability caused by rapid urbanisation and growing population. This increase indicates the need to enhance the preparedness of institutions to respond to floods. The study describes the disaster management structure and its linkages with institutions involved in managing disasters. It addresses issues and challenges associated with the readiness of disaster management institutions to respond to floods. It suggests policy options for enhancing the current state of institutional readiness to respond, considering factors such as institutional capacity, manpower, finance, technical capability, leadership and networking, training and awareness programs, and monitoring and evaluation. Methodology: The study is based on qualitative data, with statements and outputs from primary and secondary sources, to understand the institutional framework for disaster management in India. Primary data included field visits and interviews with officials from institutions managing disasters and with the affected community, to identify the challenges faced in engaging national, state, district and local level institutions in managing disasters. For focus group discussions, meetings were held with district project officers and coordinators, local officials, community-based organisations, civil defence volunteers and community heads. These discussions were held to identify the challenges associated with institutions' preparedness to respond to floods. Findings: Results show that disasters are handled by the district authority and that the role of local institutions is limited to a reactive one during a disaster. Data also indicate that although the existing institutional setup is well coordinated at the district level, it needs improvement at the local level. Wide variations exist in awareness and perception among the officials engaged in managing disasters. Additionally, their roles and responsibilities need to be clearly defined, with adequate budget and dedicated permanent staff for managing disasters. Institutions need to utilise the existing manpower through proper delegation of work. Originality: The study suggests that disaster risk reduction needs to focus more on the inclusion of local urban bodies. In order to ensure community participation, it is important to address the community's social and economic problems, since such issues can overshadow attempts made to reduce risks. Thus, this paper suggests developing direct linkages among institutions and the community to enhance preparedness to respond to floods.

Keywords: preparedness, response, disaster, flood, community, institution

Procedia PDF Downloads 216
389 Structure Domains Tuning Magnetic Anisotropy and Motivating Novel Electric Behaviors in LaCoO₃ Films

Authors: Dechao Meng, Yongqi Dong, Qiyuan Feng, Zhangzhang Cui, Xiang Hu, Haoliang Huang, Genhao Liang, Huanhua Wang, Hua Zhou, Hawoong Hong, Jinghua Guo, Qingyou Lu, Xiaofang Zhai, Yalin Lu

Abstract:

Great efforts have been made to reveal the intrinsic origins of the emerging ferromagnetism (FM) in strained LaCoO₃ (LCO) films. However, some macroscopic magnetic properties of LCO are still not well understood and even controversial, such as the magnetic anisotropy. Determining and understanding the magnetic anisotropy might in turn help to find the true causes of the FM. Perpendicular magnetic anisotropy (PMA) was directly observed for the first time in high-quality LCO films of different thicknesses. The in-plane (IP) to out-of-plane (OOP) remanent magnetic moment ratio of 30 unit cell (u.c.) films is as large as 20. The easy axis lies in the OOP direction, with an IP/OOP coercive field ratio of 10. Moreover, the PMA can be tuned simply by changing the thickness: as the thickness increases, the IP/OOP magnetic moment ratio decreases markedly, with the magnetic easy axis changing from OOP to IP. Such a large and tunable PMA shows strong potential for fundamental research and applications. What causes the PMA is the first concern. Greater OOP orbital occupation may be one of the microscopic origins of the PMA. A cluster-like magnetic domain pattern with no obvious color contrast was found in the 30 u.c. films, similar to that of LaAlO₃/SrTiO₃ films. The nanosize domains could not be fully switched even at a large OOP magnetic field of 23 T, indicating strong IP character or no OOP magnetism in some clusters. The IP magnetic domains might influence the magnetic performance and help to form the PMA. Meanwhile, possible nonmagnetic clusters might be the reason why the measured moments of LCO films are smaller than the calculated value of 2 μB/Co, one of the biggest puzzles in LCO films. What tunes the PMA seems even more interesting. Totally different magnetic domain patterns were found in 180 u.c. films, with cluster magnetic domains surrounded by <110> cross-hatch lines. These lines were identified as structure domain walls (DWs) by 3D reciprocal space mapping (RSM). Two groups of in-plane features with fourfold symmetry were observed near the film diffraction peaks in the (002) 3D-RSM. One is along the <110> directions with larger intensity, which matches well the lines on the surfaces. The other is much weaker and along the <100> directions, arising from the normal lattice tilting of films deposited on cubic substrates. The <110> domain features obtained from the (103) and (113) 3D-RSMs exhibit a similar evolution of the DW percentages and of the magnetic behavior. Structure domains and domain walls are believed to tune the PMA by transforming more IP magnetic moments to OOP. Last but not least, thick films with many structure domains exhibit different electrical transport behaviors. A metal-to-insulator transition (MIT) and an angle-dependent negative magnetoresistance were observed near 150 K, higher than the FM transition temperature but similar to the temperature of the spin-orbit-coupling-related 1/4-order diffraction peaks.

Keywords: structure domain, magnetic anisotropy, magnetic domain, domain wall, 3D-RSM, strain

Procedia PDF Downloads 136
388 Understanding Stock-Out of Pharmaceuticals in Timor-Leste: A Case Study in Identifying Factors Impacting on Pharmaceutical Quantification in Timor-Leste

Authors: Lourenco Camnahas, Eileen Willis, Greg Fisher, Jessie Gunson, Pascale Dettwiller, Charlene Thornton

Abstract:

Stock-out of pharmaceuticals is a common issue at all levels of health services in Timor-Leste, a small post-conflict country. This leads to the research questions: what are the current methods used to quantify pharmaceutical supplies, and what factors contribute to the on-going pharmaceutical stock-out? The study examined factors that influence the pharmaceutical supply chain system. Methodology: The Privett and Goncalvez dependency model was adopted for the design of the qualitative interviews. The model examines pharmaceutical supply chain management at three management levels: management of individual pharmaceutical items, health facilities, and health systems. The interviews were conducted in order to collect information on inventory management, the logistics management information system (LMIS) and the provision of pharmaceuticals. Andersen's behavioural model for healthcare utilization also informed the interview schedule, specifically factors linked to the environment (healthcare system and external environment) and the population (enabling factors). Forty health professionals (bureaucrats, clinicians) and six senior officers from a United Nations agency, a global multilateral agency and a local non-governmental organization were interviewed on their perceptions of factors (healthcare system/supply chain and wider environment) impacting on stock-out. Additionally, policy documents for the entire healthcare system, along with population data, were collected. Findings: An analysis using Pozzebon’s critical interpretation identified a range of difficulties within the system, from poor coordination to failure to adhere to policy guidelines, along with major difficulties in inventory management, quantification, forecasting, and budgetary constraints. A weak logistics management information system and a lack of capacity in inventory management, monitoring and supervision are additional organizational factors that contributed to the issue. Various methods of pharmaceutical quantification were applied in the government sector and in non-governmental organizations. Lack of reliable data is one of the major problems in pharmaceutical provision. The Global Fund has the best quantification methods, fed by consumption data and malaria cases. Other issues worsen stock-out: political intervention, work ethic and basic infrastructure such as unreliable internet connectivity. Major issues impacting on pharmaceutical quantification have been identified. However, the current data collection identified limitations within the Andersen model; specifically, a failure to take account of predictors in the healthcare system and the environment (culture/politics/social). The next step is to (a) compare the models used by three non-governmental agencies with the government model; (b) run the Andersen explanatory model for pharmaceutical expenditure for 2 to 5 drug items used by these three development partners in order to see how it correlates with the present model in terms of quantification and forecasting of needs; (c) repeat objectives (a) and (b) using the government model; and (d) draw a conclusion about the strengths of each approach.

Keywords: inventory management, pharmaceutical forecasting and quantification, pharmaceutical stock-out, pharmaceutical supply chain management

Procedia PDF Downloads 211
387 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems

Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme

Abstract:

Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, ceilings, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems become more popular, there is a need for ever smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among the camera distance from the subject, pixel density, and the field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel is reduced, thus increasing the image resolution. However, the cross section of the capture volume also decreases, reducing the visible area. Due to this reduction, additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and set of subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9 m x 3.7 m x 2.4 m (L x W x H) MoCap volume and designed a mounting structure for cameras using SOLIDWORKS (Dassault Systems, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing. The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full body capture. The cameras were placed around the capture volume at a maximum distance of 2.7 m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. It enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility, and it allows different camera setup options to be compared on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras. With this approach, it is possible to compare different camera placement options to each other, as well as to optimize a given camera setup based on quantitative results.
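
The distance/resolution/FOV trade-off discussed above can be illustrated with a minimal numeric sketch; the sensor resolution and field of view below are assumed values, not the cameras used in the study, and 2.7 m is simply the maximum camera distance quoted in the abstract.

```python
# Sketch of the camera-distance trade-off: closer cameras give finer mm-per-pixel
# resolution but a smaller visible cross-section. Sensor resolution and FOV are
# assumed values, not the study's hardware.
import math

H_RES_PX = 1920          # assumed horizontal sensor resolution, pixels
H_FOV_DEG = 70.0         # assumed horizontal field of view, degrees

def visible_width_m(distance_m, fov_deg=H_FOV_DEG):
    """Width of the capture cross-section seen by the camera at a given distance."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg / 2.0))

def mm_per_pixel(distance_m):
    """Approximate size of the area covered by one pixel at the subject plane."""
    return visible_width_m(distance_m) * 1000.0 / H_RES_PX

for d in (1.5, 2.0, 2.7):   # 2.7 m is the maximum camera distance quoted above
    print(f"{d:.1f} m: visible width {visible_width_m(d):.2f} m, "
          f"{mm_per_pixel(d):.2f} mm/pixel")
```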

Keywords: motion capture, cameras, biomechanics, gait analysis

Procedia PDF Downloads 295
386 Post-Exercise Recovery Tracking Based on Electrocardiography-Derived Features

Authors: Pavel Bulai, Taras Pitlik, Tatsiana Kulahava, Timofei Lipski

Abstract:

A method of electrocardiography (ECG) interpretation for post-exercise recovery tracking was developed. Metabolic indices (aerobic and anaerobic) were designed using ECG-derived features. This study reports the associations between the aerobic and anaerobic indices and classical parameters of a person’s physiological state, including blood biochemistry, glycogen concentration and VO2max changes. During the study, 9 participants (healthy, physically active, moderately trained men and women who trained 2-4 times per week for at least 9 weeks) underwent (i) ECG monitoring using an Apple Watch Series 4 (AWS4); (ii) blood biochemical analysis; (iii) a maximal oxygen consumption (VO2max) test; and (iv) bioimpedance analysis (BIA). ECG signals from the single-lead wrist-wearable device were processed with QRS-complex detection. The aerobic index (AI) was derived as the normalized slope of the QR segment, and the anaerobic index (ANI) as the normalized slope of the SJ segment. Biochemical parameters, glycogen content and VO2max were evaluated eight times within 3-60 hours after training. ECGs were recorded 5 times per day, plus before and after training, cycloergometry and BIA. Negative correlations between AI and blood markers of muscle functional status, including creatine phosphokinase (r = -0.238, p < 0.008), aspartate aminotransferase (r = -0.249, p < 0.004) and uric acid (r = -0.293, p < 0.004), were observed. ANI was also correlated with creatine phosphokinase (r = -0.265, p < 0.003), aspartate aminotransferase (r = -0.292, p < 0.001) and lactate dehydrogenase (LDH) (r = -0.190, p < 0.050). So, when the level of muscle enzymes increases during post-exercise fatigue, AI and ANI decrease. During recovery, the level of metabolites is restored, and a rise in the metabolic indices is registered. It can be concluded that AI and ANI adequately reflect the physiology of the muscles during recovery. One of the markers of an athlete’s physiological state is the ratio between testosterone and cortisol (TCR). TCR provides a relative indication of anabolic-catabolic balance and is considered to be more sensitive to training stress than measuring testosterone and cortisol separately. AI shows a strong negative correlation with TCR (r = -0.437, p < 0.001) and correctly represents post-exercise physiology. In order to reveal the relation between the ECG-derived metabolic indices and the state of the cardiorespiratory system, direct measurements of VO2max were carried out at various time points after training sessions. A negative correlation between AI and VO2max (r = -0.342, p < 0.001) was obtained. These data, indicating a rise in VO2max during fatigue, are controversial. However, some studies have revealed an increased stroke volume after training, which agrees with our findings. It is important to note that a post-exercise increase in VO2max does not mean an athlete is ready for the next training session, because the recovery of the cardiovascular system occurs over a substantially longer period. Negative correlations of ANI with glycogen (r = -0.303, p < 0.001), albumin (r = -0.205, p < 0.021) and creatinine (r = -0.268, p < 0.002) reflect the dehydration status of participants after training. The correlations between the designed metabolic indices and physiological parameters revealed in this study can be considered sufficient evidence to use these indices for assessing the state of a person’s aerobic and anaerobic metabolic systems after training, during fatigue, recovery and supercompensation.
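
A minimal sketch of a normalized segment-slope index of the kind described (AI as the normalized slope of the QR segment, ANI of the SJ segment) is given below; the fiducial sample indices, normalization by R-peak amplitude and synthetic beat are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch of a normalized-slope ECG index. Fiducial point indices (Q, R, S, J) are
# assumed to come from an upstream QRS detector; the normalization choice is an
# illustrative assumption.
import numpy as np

def normalized_slope(ecg, fs, start_idx, end_idx):
    """Slope of the ECG between two fiducial points, normalized by the R-peak amplitude."""
    dt = (end_idx - start_idx) / fs                      # segment duration, s
    dv = ecg[end_idx] - ecg[start_idx]                   # amplitude change, mV
    r_amplitude = np.max(np.abs(ecg))                    # crude normalization factor
    return (dv / dt) / r_amplitude if dt > 0 else np.nan

# Example with assumed fiducial points on a synthetic beat
fs = 500.0
beat = np.zeros(400)
q, r, s, j = 180, 200, 215, 240                          # assumed sample indices
beat[q], beat[r], beat[s], beat[j] = -0.1, 1.0, -0.3, 0.05
aerobic_index = normalized_slope(beat, fs, q, r)         # QR segment
anaerobic_index = normalized_slope(beat, fs, s, j)       # SJ segment
print(aerobic_index, anaerobic_index)
```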

Keywords: aerobic index, anaerobic index, electrocardiography, supercompensation

Procedia PDF Downloads 99
385 The Implantable MEMS Blood Pressure Sensor Model with Wireless Powering and Data Transmission

Authors: Vitaliy Petrov, Natalia Shusharina, Vitaliy Kasymov, Maksim Patrushev, Evgeny Bogdanov

Abstract:

The leading causes of death worldwide are ischemic heart disease and other cardiovascular illnesses. Generally, the common symptom is high blood pressure. Long-term blood pressure monitoring is very important for prophylaxis, correct diagnosis and timely therapy. Non-invasive methods based on Korotkoff sounds cannot be applied frequently or over long periods. Implantable devices can combine long-term monitoring with high measurement accuracy. The main purpose of this work is to create a real-time monitoring system for decreasing the death rate from cardiovascular diseases. Implantable electronic devices have begun to play an important role in medicine. An implantable device usually consists of a transmitter, a power supply (which can be wireless or a specially made battery) and a measurement circuit. Common problems in making implantable devices are short battery lifetime, large size and biocompatibility. In this work, blood pressure measurement is the focus because high blood pressure is one of the main symptoms of cardiovascular disease. Our device consists of three parts: the implantable pressure sensor, an external transmitter and an automated workstation in a hospital. The implantable pressure sensor can be based on piezoresistive or capacitive technologies; both have advantages and limitations. The developed circuit is based on a small capacitive sensor made with microelectromechanical systems (MEMS) technology. The capacitive sensor provides high sensitivity, low power consumption and minimal hysteresis compared to a piezoresistive sensor. For this device, an oscillator-based circuit was selected, in which the frequency depends on the capacitance of the sensor; hence, pressure can be calculated from the capacitance. The external device (transmitter) is used for wireless charging and signal transmission. Some implant devices for these applications are passive: the external device sends a radio-wave signal to the internal LC circuit, receives the signal reflected from the implant, and from the change of frequency it is possible to calculate the change in capacitance and then the blood pressure. However, this method has some disadvantages, such as dependence on patient position and static (non-continuous) use. The developed implantable device does not have these disadvantages and sends blood pressure data to the external part in real time. The external device continuously sends information about blood pressure to a hospital cloud service for analysis by a physician. The doctor's automated workstation at the hospital also acts as a dashboard, which displays the current medical data of patients requiring attention and stores it in the cloud service. Critical heart conditions usually occur a few hours before a heart attack, and the device is able to send an alarm signal to the hospital for early action by the medical service. The system was tested with wireless charging and data transmission. These results can be used for ASIC design for the MEMS pressure sensor.
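
The frequency-to-pressure conversion implied by an oscillator-based readout can be sketched as follows, assuming an LC tank (f = 1/(2π√(LC))) and a linear capacitance-pressure response; the inductance, base capacitance and sensitivity values are illustrative assumptions, not the device's actual parameters.

```python
# Sketch: recover capacitance from the measured oscillator frequency, then map it to
# pressure through an assumed linear calibration. All constants are illustrative.
import math

L_HENRY = 10e-6             # assumed tank inductance, H
C0_FARAD = 50e-12           # assumed sensor capacitance at zero gauge pressure, F
SENS_F_PER_MMHG = 0.02e-12  # assumed capacitance change per mmHg, F

def capacitance_from_frequency(freq_hz, inductance=L_HENRY):
    # For an LC oscillator, f = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / (L * (2*pi*f)^2)
    return 1.0 / (inductance * (2.0 * math.pi * freq_hz) ** 2)

def pressure_from_frequency(freq_hz):
    c = capacitance_from_frequency(freq_hz)
    return (c - C0_FARAD) / SENS_F_PER_MMHG   # mmHg, assuming a linear sensor response

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L_HENRY * C0_FARAD))
print(f"resonant frequency at 0 mmHg: {f0 / 1e6:.2f} MHz")
print(f"pressure at f0: {pressure_from_frequency(f0):.2f} mmHg")
```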

Keywords: MEMS sensor, RF power, wireless data, oscillator-based circuit

Procedia PDF Downloads 570
384 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas

Authors: Michel Soto Chalhoub

Abstract:

Building code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction material, and configuration through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared with buildings made of homogeneous material such as steel, or of concrete under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions due to the level of concrete damage, aging, or materials quality control during construction. Overall results of the present analysis show that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. The conclusion above is confirmed by the analytical model, where fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
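
A minimal sketch of the eigenvalue computation of the fundamental period, using a lumped-mass shear-building idealization, is shown below; the story masses and stiffnesses are illustrative assumptions, not properties of the 151 surveyed buildings.

```python
# Sketch: fundamental period of a lumped-mass shear-building model from its mass and
# stiffness matrices. Story masses and stiffnesses are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh

n_stories = 5
m = 300e3                      # assumed story mass, kg
k = 2.5e8                      # assumed story lateral stiffness, N/m

M = np.eye(n_stories) * m
K = np.zeros((n_stories, n_stories))
for i in range(n_stories):     # assemble the tri-diagonal shear-building stiffness matrix
    K[i, i] = 2 * k if i < n_stories - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

eigvals, _ = eigh(K, M)        # generalized eigenvalue problem K*phi = omega^2 * M*phi
omega1 = np.sqrt(eigvals[0])   # lowest circular frequency, rad/s
T1 = 2 * np.pi / omega1
print(f"fundamental period T1 = {T1:.3f} s")
```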

Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra

Procedia PDF Downloads 208
383 Using the Theory of Reasoned Action and Parental Mediation Theory to Examine Cyberbullying Perpetration among Children and Adolescents

Authors: Shirley S. Ho

Abstract:

The advancement and development of social media have inadvertently brought about a new form of bullying – cyberbullying – that transcends physical boundaries of space. Although extensive research has been conducted in the field of cyberbullying, most of these studies have taken an overwhelmingly empirical angle. Theories guiding cyberbullying research are few. Furthermore, very few studies have explored the association between parental mediation and cyberbullying, with the majority of existing studies focusing on cyberbullying victimization rather than perpetration. Therefore, this study investigates cyberbullying perpetration from a theoretical angle, with a focus on the Theory of Reasoned Action and Parental Mediation Theory. More specifically, this study examines the direct effects of attitude, subjective norms, descriptive norms, injunctive norms, active mediation and restrictive mediation on cyberbullying perpetration on social media among children and adolescents in Singapore. Furthermore, the moderating role of age in the relationship between parental mediation and cyberbullying perpetration on social media is examined. A self-administered paper-and-pencil nationally representative survey was conducted. Multi-stage cluster random sampling was used to ensure that schools from all four regions of Singapore (North, South, East, and West) were equally represented in the sample used for the survey. In all, 607 upper primary school children (i.e., Primary 4 to 6 students) and 782 secondary school adolescents participated in our survey. The total average response rate for student participation was 69.6%. An ordinary least squares hierarchical regression analysis was conducted to test the hypotheses and research questions. The results revealed that attitude and subjective norms were positively associated with cyberbullying perpetration on social media. Descriptive norms and injunctive norms were not found to be significantly associated with cyberbullying perpetration. The results also showed that both parental mediation strategies were negatively associated with cyberbullying perpetration on social media. Age was a significant moderator of both parental mediation strategies and cyberbullying perpetration. The negative relationship between active mediation and cyberbullying perpetration was found to be greater for children than for adolescents. Children who received high restrictive parental mediation were less likely to perform cyberbullying behaviors, while adolescents who received high restrictive parental mediation were more likely to be engaged in cyberbullying perpetration. The study reveals that parents should apply active mediation and restrictive mediation in different ways for children and adolescents when trying to prevent cyberbullying perpetration. The effectiveness of active parental mediation in reducing cyberbullying perpetration was greater for children than for adolescents. Younger children were found to respond more positively to restrictive parental mediation strategies, but in the case of adolescents, overly restrictive control was found to increase cyberbullying perpetration; adolescents exhibited fewer cyberbullying behaviors under less restrictive strategies. The findings highlight that the Theory of Reasoned Action and Parental Mediation Theory are promising frameworks to apply in the examination of cyberbullying perpetration. The finding that different parental mediation strategies had differing effectiveness, depending on the children’s age, carries several practical implications that may benefit educators and parents when addressing their children’s online risks.
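
A minimal sketch of such a hierarchical OLS regression with an age moderation term, under hypothetical column names and an assumed survey file, might look as follows; it is not the authors' analysis script.

```python
# Sketch of a hierarchical OLS regression with an age x mediation interaction
# (moderation). The file name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cyberbullying_survey.csv")   # hypothetical survey file

# Step 1: Theory of Reasoned Action predictors
step1 = smf.ols("perpetration ~ attitude + subjective_norms + "
                "descriptive_norms + injunctive_norms", data=df).fit()

# Step 2: add parental mediation and the age moderation terms
step2 = smf.ols("perpetration ~ attitude + subjective_norms + "
                "descriptive_norms + injunctive_norms + "
                "active_mediation * age + restrictive_mediation * age",
                data=df).fit()

print(step1.rsquared, step2.rsquared)          # R-squared change across steps
print(step2.summary())
```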

Keywords: cyberbullying perpetration, theory of reasoned action, parental mediation, social media, Singapore

Procedia PDF Downloads 237
382 Spray Nebulisation Drying: Alternative Method to Produce Microparticulated Proteins

Authors: Josef Drahorad, Milos Beran, Ondrej Vltavsky, Marian Urban, Martin Fronek, Jiri Sova

Abstract:

Engineering efforts by researchers of the Food Research Institute Prague and the Czech Technical University in spray drying technologies led to the introduction of the demonstrator ATOMIZER and a new technology of Carbon Dioxide-Assisted Spray Nebulization Drying (CASND). The equipment combines spray drying technology, in which the liquid to be dried is atomized by a rotary atomizer, with the Carbon Dioxide Assisted Nebulization - Bubble Dryer (CAN-BD) process in an original way. A solution, emulsion or suspension is saturated with carbon dioxide at a pressure of up to 80 bar before the drying process. The atomization process takes place in two steps. In the first step, primary droplets are produced at the outlet of a specially constructed rotary atomizer. In the second step, the primary droplets are divided into secondary droplets by the expansion of CO2 from inside the primary droplets. The secondary droplets, usually in the form of microbubbles, are rapidly dried by a warm air stream at temperatures up to 60ºC, and solid particles are formed in a drying chamber. The powder particles are separated from the drying air stream in a high-efficiency fine powder separator. The product is frequently in the form of submicron hollow spheres. The CASND technology has been used to produce microparticulated protein concentrates for human nutrition from alternative plant sources - hemp and canola seed filtration cakes. Alkali extraction was used to extract the proteins from the filtration cakes. The protein solutions obtained after alkali extraction were dried with the demonstrator ATOMIZER. Aerosol particle size distribution and concentration in the drying chamber were determined by two different on-line aerosol spectrometers, SMPS (Scanning Mobility Particle Sizer) and APS (Aerodynamic Particle Sizer). The protein powders were in the form of hollow spheres with an average particle diameter of about 600 nm. The particles were characterized by SEM. The functional properties of the microparticulated protein concentrates were compared with the same protein concentrates dried by the conventional spray drying process. The microparticulated protein was shown to have improved foaming and emulsifying properties and water and oil absorption capacities, and it formed long-term stable water dispersions. This work was supported by research grant TH03010019 of the Technology Agency of the Czech Republic.

Keywords: carbon dioxide-assisted spray nebulization drying, canola seed, hemp seed, microparticulated proteins

Procedia PDF Downloads 148
381 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)

Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg

Abstract:

One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
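
A minimal sketch of the logistic-regression hazard modelling described, predicting the probability that arsenic exceeds the 10 µg/L WHO limit from environmental predictors, could look like the following; the file name and predictor columns are hypothetical.

```python
# Sketch: logistic regression of exceedance (arsenic > 10 ug/L) on environmental
# predictor variables. File name and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

wells = pd.read_csv("well_measurements.csv")       # hypothetical point measurements
wells["exceeds"] = (wells["arsenic_ug_L"] > 10).astype(int)   # WHO limit of 10 ug/L

predictors = ["soil_ph", "clay_fraction", "aridity_index", "slope"]  # illustrative
X_train, X_test, y_train, y_test = train_test_split(
    wells[predictors], wells["exceeds"], test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
prob_exceed = model.predict_proba(X_test)[:, 1]    # probability of exceeding 10 ug/L
print(model.score(X_test, y_test), prob_exceed[:5])
```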

Keywords: arsenic, fluoride, groundwater contamination, logistic regression

Procedia PDF Downloads 323
380 Thai Cane Farmers' Responses to Sugar Policy Reforms: An Intentions Survey

Authors: Savita Tangwongkit, Chittur S Srinivasan, Philip J. Jones

Abstract:

Thailand has become the world’s fourth largest sugarcane producer and second largest sugar exporter. While there have been a number of drivers of this growth, the primary driver has been wide-ranging government support measures. Recently, the Thai government has emphasized the need for policy reform as part of a broader industry restructuring to bring the sector up to date with current and future developments in the international sugar market. Because of the sector's historical dependence on government support, any such reform is likely to have a very significant impact on the fortunes of Thai cane farmers. This study explores the impact of three policy scenarios, representing a spectrum of policy approaches, on Thai cane producers. These reform scenarios were designed in consultation with policy makers and academics working in the cane sector. Scenario 1 captures the current ‘government proposal’ for policy reform. This scenario removes certain domestic production subsidies but seeks to maintain as much support as is permissible under current WTO rules. The second scenario, ‘protectionism’, maintains the current internal market producer supports but otherwise complies with international (WTO) commitments. Third, the ‘libertarian’ scenario removes all production support and market interventions, as well as trade and domestic consumption distortions. The most important driver of producer behaviour in all of the scenarios is the producer price of cane. The cane price is highest under the protectionism scenario, followed by the government proposal and libertarian scenarios, respectively. Likely producer responses to these three policy scenarios were determined by means of a large-scale survey of cane farmers. The sample was stratified by size group, and quotas were filled by size group and region. One scenario was presented to each of three sub-samples, each consisting of approximately 150 farmers. The total sample size was 462 farms. Data were collected by face-to-face interview between June and August 2019. There was a marked difference in farmer response to the three scenarios. Farmers in the ‘protectionism’ scenario, which maintains the highest cane price, and those who farm larger cane areas, are more likely to continue cane farming. The libertarian scenario is likely to result in the greatest losses of cane production volume, broadly double those of the ‘protectionism’ scenario, primarily due to farmers quitting cane production altogether. Over half of the lost cane production volume comes from medium-sized farms, i.e. the largest and smallest producers are the most resilient. This result is likely due to the fact that the medium-size group is large enough to require hired labour but lacks the economies of scale of the largest farms. Across all size groups, the farms most heavily specialized in cane production, i.e. those devoting 26-50% of arable land to cane, are also the most vulnerable, with 70% of all farmers quitting cane production coming from this group. This investigation suggests that the cane price is the most significant determinant of farmer behaviour. It also suggests that where scenarios drive a significantly lower cane price, policy makers should target support towards mid-sized producers, with policies that encourage efficiency gains and diversification into alternative agricultural crops.

Keywords: farmer intentions, farm survey, policy reform, Thai cane production

Procedia PDF Downloads 94
379 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services with a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the basis of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, there are few in-depth studies of reactive auto-scaling. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing-theory parameters to relate these transitions. It associates MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time. Business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario. All methodologies will discard requests if the burst is rapid enough; this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load with less business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
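
One reactive scaling decision in the spirit of the model above can be sketched as follows: size the instance pool from the sampled arrival rate, its growth (acceleration), a per-instance service rate and a target saturation, while respecting a cooldown before scaling in; all parameter values are illustrative assumptions, not those of the simulation.

```python
# Minimal sketch of one reactive scaling decision: size the pool from the sampled
# arrival rate, the per-instance service rate, a target saturation and a cooldown
# period. All parameter values are illustrative.
import math, time

MU = 10.0            # requests/s one instance can handle (assumed)
TARGET_UTIL = 0.7    # keep saturation below 70% to bound response time
COOLDOWN_S = 60.0    # minimum time between scale-in actions
_last_scale_in = 0.0

def desired_instances(arrival_rate, accel, horizon_s=30.0):
    """Instances needed for the current rate plus its projected growth over the horizon."""
    projected = arrival_rate + accel * horizon_s        # linear projection of the load
    return max(1, math.ceil(projected / (MU * TARGET_UTIL)))

def plan(current, arrival_rate, accel):
    global _last_scale_in
    target = desired_instances(arrival_rate, accel)
    if target > current:                                # scale out immediately
        return target
    if target < current and time.monotonic() - _last_scale_in >= COOLDOWN_S:
        _last_scale_in = time.monotonic()               # scale in only after the cooldown
        return target
    return current

print(plan(current=4, arrival_rate=55.0, accel=0.5))    # -> 10 instances
```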

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 74
378 Understanding Feminization of Indian Agriculture and the Dynamics of Intrahousehold Bargaining Power at a Household Level

Authors: Arpit Sachan, Nilanshu Kumar

Abstract:

This paper tries to understand the nuances of the feminisation of agriculture in the Indian context and how it is associated with better intrahousehold bargaining power for women. The Economic Survey of India indicates a constant increase in the share of the female workforce in Indian agriculture over the past few decades. This can be accounted for by many factors, such as the migration of male workers to urban areas and, therefore, the complete burden of agriculture shifting onto their female counterparts. This study therefore attempts to examine how this increase in the female workforce corresponds to better decision-making ability for women in rural farm households, and to carefully evaluate this aspect of the feminisation of Indian agriculture. The paper studies how various factors that improve the status of women in agriculture change with things like resource ownership. It uses both macro-level and micro-level data to study the dynamics of the proportion of the workforce in agriculture across different states in India and how that has translated into better indicators for women in rural areas. The fall in India’s rank in the global gender wage gap index is alarming in this context and creates a puzzle alongside increasing female workforce participation. The paper considers whether the condition of women has improved over time with the increased share of employment. Using field survey data, it examines whether there is any divergence in some of the indicators at both the macro and micro levels. The paper also tries to integrate the economic understanding of gender aspects of the workforce with the sociological stance prevailing in the existing literature. It therefore takes a mixed-method approach to better understand the role that social structure plays in the improved status of women within and across various households. Finally, this paper helps us understand whether there is in fact a feminisation of Indian agriculture or whether it is just exploitation of a different kind. The study intends to draw a distinction between a gendered labour force in Indian agriculture and the complete democratization of Indian agriculture. The study is primarily focused on areas where the exodus of male migrants pushes women to work on agricultural farms. The question posed is whether it is the willingness of women to work in agriculture, or urbanisation and development-induced conditions, that makes women work in agriculture as farm labourers. The motive is to understand whether factors like resource ownership and the capacity for autonomous decision-making are interlinked with an increased proportion of the female workforce. Based on this framework, we finally provide a brief comment on the policy implications of government intervention in improving Indian agriculture and the gender aspects associated with it.

Keywords: feminisation, intrahousehold bargaining, farm households, migration, agriculture, decision-making

Procedia PDF Downloads 117
377 Roadway Infrastructure and Bus Safety

Authors: Richard J. Hanowski, Rebecca L. Hammond

Abstract:

Very few studies have been conducted to investigate safety issues associated with motorcoach/bus operations. The current study investigates the impact that roadway infrastructure, including locality, roadway grade, traffic flow and traffic density, has on bus safety. A naturalistic driving study was conducted in the U.S.A. that involved 43 motorcoaches. Two fleets participated in the study, and over 600,000 miles of naturalistic driving data were collected. Sixty-five bus drivers participated in this study: 48 male and 17 female. The average age of the drivers was 49 years. A sophisticated data acquisition system (DAS) was installed on each of the 43 motorcoaches, and a variety of kinematic and video data were continuously recorded. The data were analyzed by identifying safety-critical events (SCEs), which included crashes, near-crashes, crash-relevant conflicts, and unintentional lane deviations. Additionally, baseline (normative driving) segments were also identified and analyzed for comparison with the SCEs. This presentation highlights the need for bus safety research and the methods used in this data collection effort. With respect to elements of roadway infrastructure, this study highlights the methods used to assess locality, roadway grade, traffic flow, and traffic density. Locality was determined by manual review of the recorded video for each event and baseline and was characterized in terms of open country, residential, business/industrial, church, playground, school, urban, airport, interstate, and other. Roadway grade was similarly determined through video review and characterized in terms of level, grade up, grade down, hillcrest, and dip. The video was also used to make a determination of the traffic flow and traffic density at the time of the event or baseline segment. For traffic flow, video was used to assess which of the following best characterized the event or baseline: not divided (2-way traffic), not divided (center 2-way left turn lane), divided (median or barrier), one-way traffic, or no lanes. In terms of traffic density, level-of-service categories were used: A1, A2, B, C, D, E, and F. Highlighted in this abstract are only a few of the many roadway elements that were coded in this study. Other elements included lighting levels, weather conditions, roadway surface conditions, relation to junction, and roadway alignment. Note that a key component of this study was to assess the impact that driver distraction and fatigue have on bus operations. In this regard, once the roadway elements had been coded, the primary research questions that were addressed were (i) “What environmental conditions are associated with drivers' choice to engage in tasks?” and (ii) “What are the odds of being in an SCE while engaging in tasks when encountering these conditions?”. The study may be of interest to researchers and traffic engineers who are interested in the relationship between roadway infrastructure elements and safety events in motorcoach bus operations.
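
The odds question posed above can be sketched as a simple 2x2 odds-ratio computation over SCE and baseline segment counts; the counts below are hypothetical, not results from the study.

```python
# Sketch: odds ratio of a safety-critical event (SCE) while engaging in a secondary
# task under a given roadway condition, from a 2x2 table of SCE and baseline counts.
# The counts are hypothetical.

def odds_ratio(sce_task, sce_no_task, base_task, base_no_task):
    """OR = (SCEs with task / SCEs without) / (baselines with task / baselines without)."""
    return (sce_task / sce_no_task) / (base_task / base_no_task)

# Hypothetical counts for, e.g., the 'business/industrial' locality
sce_with_task, sce_without_task = 34, 120
baseline_with_task, baseline_without_task = 210, 1650

result = odds_ratio(sce_with_task, sce_without_task,
                    baseline_with_task, baseline_without_task)
print(f"odds ratio = {result:.2f}")
```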

Keywords: bus safety, motorcoach, naturalistic driving, roadway infrastructure

Procedia PDF Downloads 166
376 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study

Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji

Abstract:

Phase change materials (PCMs) present attractive features that make them a passive solution for thermal comfort in buildings during summertime. They show a large storage capacity per volume unit in comparison with other structural materials like bricks or concrete. If their use is matched with the peak load periods, they can contribute to the reduction of the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks. Commercial PCMs, such as paraffins, have a low thermal conductivity, affecting the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general, a design of the unit that optimizes the thermal performance is sought. The material selection is the departing point of the design stage, and it leaves little room for optimization. The PCM melting point depends highly on the atmospheric characteristics of the building location; the selected melting point must lie between the maximum and the minimum temperatures reached during the day. The geometry of the PCM containers and their geometrical distribution are design parameters as well. They significantly affect the heat transfer, and the associated phenomena must therefore be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool the space during the daytime, while the melting of the PCM occurs. At night, the PCM must be regenerated to be ready for the next use. When the system is not in service, a minimal amount of thermal exchange is desired. The aforementioned functions result in the presence of sensible and latent heat storage and release; hence, different types of mechanisms drive the heat transfer phenomena. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger. An in-line arrangement was selected as the geometrical distribution of the containers. For visual identification, the container material and a section of the test bench were transparent. Instruments were placed on the bench for measuring temperature and velocity. The PCM properties were also available through differential scanning calorimetry (DSC) tests. The temperature evolution during both cycles, melting and solidification, was obtained. The results showed phenomena both at a local level (tubes) and at an overall level (exchanger). Conduction and convection appeared as the main heat transfer mechanisms. From these results, two approaches to analyze the heat transfer were followed. The first approach described the phenomena in a single tube as a series of thermal resistances, where purely conduction-controlled heat transfer was assumed in the PCM. For the second approach, the temperature measurements were used to find significant dimensionless numbers and parameters, such as the Stefan, Fourier, and Rayleigh numbers, and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles. The presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained.
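
As an illustration of the second analysis approach, the sketch below evaluates the Stefan, Fourier, and Rayleigh numbers from nominal, paraffin-like property values; all numbers and the choice of characteristic length are assumptions made for the example, not the paper's measurements.

```python
def stefan(cp, delta_t, latent_heat):
    """Ste = cp * dT / L: ratio of sensible to latent heat driving the melting."""
    return cp * delta_t / latent_heat

def fourier(alpha, time_s, l_char):
    """Fo = alpha * t / L_c^2: dimensionless time for conduction over the length L_c."""
    return alpha * time_s / l_char**2

def rayleigh(g, beta, delta_t, l_char, nu, alpha):
    """Ra = g * beta * dT * L_c^3 / (nu * alpha): buoyancy vs. diffusion,
    used here as a flag for natural convection in the melted PCM."""
    return g * beta * delta_t * l_char**3 / (nu * alpha)

# Nominal paraffin-like values (assumed): cp [J/kg.K], latent heat [J/kg],
# thermal diffusivity alpha [m^2/s], kinematic viscosity nu [m^2/s], beta [1/K].
print(stefan(cp=2100.0, delta_t=8.0, latent_heat=180e3))
print(fourier(alpha=9e-8, time_s=3600.0, l_char=0.02))
print(rayleigh(g=9.81, beta=9e-4, delta_t=8.0, l_char=0.02, nu=4e-6, alpha=9e-8))
```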

Keywords: phase change materials, air-PCM exchangers, convection, conduction

Procedia PDF Downloads 160
375 From Preoccupied Attachment Pattern to Depression: Serial Mediation Model on the Female Sample

Authors: Tatjana Stefanovic Stanojevic, Milica Tosic Radev, Aleksandra Bogdanovic

Abstract:

Depression is considered to be a leading cause of death and disability in the female population, which is why understanding the dynamics of the onset of depressive symptomatology is important. A review of the literature indicates a relationship between depressive symptoms and insecure attachment patterns, but very few studies have examined the mechanism underlying this relation. The aim of the study was to examine the pathway from the preoccupied attachment pattern to depressive symptomatology, as well as to test the mediating effect of mentalization, social anxiety and rumination in this relationship using a serial mediation model. The research was carried out on a geographical cluster sample from the general population of Serbia included within the project 'Indicators and models of family and work roles harmonization' funded by the Ministry of Education, Science and Technological Development of the Republic of Serbia. This research was carried out on a subsample of 791 working-age female adults from 37 urban and rural locations distributed through 20 administrative districts of Serbia. The respondents filled in a battery of instruments, including the Relationship Questionnaire - Clinical Version (RQ-CV), the Mentalization Scale (MentS), the Scale of Social Anxiety (SA), the Ruminative Thought Style Questionnaire (RTSQ), and the Patient Health Questionnaire (PHQ-9). The results confirm our assumption that the total indirect effect of the preoccupied attachment pattern on depressive symptoms is significant across all mediators separately. More importantly, this effect is still present in a model with a sequential mediator relationship, where social anxiety, rumination, and mentalization were treated as serial mediators of the relationship between preoccupied attachment and depressive symptoms (estimated indirect effect=0.004, bootstrapped 95% CI=0.002 to 0.007). Our findings suggest that there is a significant specific indirect effect of the preoccupied attachment pattern on depressive symptoms, occurring through mentalization, social anxiety and rumination, indicating that preoccupied attachment causes a decrease in self-related mentalization, which in turn increases social anxiety and rumination, with depressive symptoms as a final consequence. The finding that the path from the preoccupied attachment pattern to depressive symptoms is typical in women is understandable from the perspective of both evolutionary and culturally conditioned gender differences. The practical implications of the study are reflected in the recommendations for prevention and timely psychotherapeutic response among preoccupied women with depressive symptomatology. Treatment of this specific group of depressed patients should focus on strengthening mentalization, helping patients accept and understand themselves better, reducing anxiety in situations where mistakes are visible to others, and replacing the rumination strategy with more constructive coping strategies.
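
A minimal sketch of how a serial indirect effect of the form X → M1 → M2 → M3 → Y can be bootstrapped with ordinary least squares is shown below; it is illustrative only (covariates omitted, variable names hypothetical) and is not the scoring or modelling procedure used in the study.

```python
import numpy as np

def serial_indirect_effect(X, M1, M2, M3, Y, n_boot=5000, seed=0):
    """Bootstrap the serial indirect effect a1*d21*d32*b of X -> M1 -> M2 -> M3 -> Y,
    estimating each path with OLS and returning the point estimate and a percentile 95% CI."""
    rng = np.random.default_rng(seed)
    data = np.column_stack([X, M1, M2, M3, Y])
    n = data.shape[0]

    def ols_slope(y, predictors):
        # Coefficient of the LAST predictor column, with an intercept included.
        A = np.column_stack([np.ones(len(y))] + predictors)
        coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coefs[-1]

    def product_of_paths(d):
        x, m1, m2, m3, y = d.T
        a1 = ols_slope(m1, [x])                 # X  -> M1
        d21 = ols_slope(m2, [x, m1])            # M1 -> M2, controlling X
        d32 = ols_slope(m3, [x, m1, m2])        # M2 -> M3, controlling X, M1
        b = ols_slope(y, [x, m1, m2, m3])       # M3 -> Y,  controlling X, M1, M2
        return a1 * d21 * d32 * b

    estimate = product_of_paths(data)
    boots = [product_of_paths(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return estimate, np.percentile(boots, [2.5, 97.5])
```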

Keywords: preoccupied attachment, depression, serial mediation model, mentalization, rumination

Procedia PDF Downloads 120
374 Forming-Free Resistive Switching Effect in ZnₓTiᵧHfzOᵢ Nanocomposite Thin Films for Neuromorphic Systems Manufacturing

Authors: Vladimir Smirnov, Roman Tominov, Vadim Avilov, Oleg Ageev

Abstract:

The creation of a new generation of micro- and nanoelectronic elements opens up broad possibilities for improving the parameters of electronic devices, as well as for developing neuromorphic computing systems. Interest in the latter is growing every year, which is explained by the need to solve problems related to the unstructured classification of data, the construction of self-adaptive systems, and pattern recognition. However, for its technical implementation, a number of conditions for the basic parameters of electronic memory must be fulfilled, such as non-volatility, multi-bit capability, high integration density, and low power consumption. Several types of memory are present in the electronics industry (MRAM, FeRAM, PRAM, ReRAM), among which non-volatile resistive memory (ReRAM) stands out due to its multi-bit capability, which is necessary for neuromorphic systems manufacturing. ReRAM is based on the effect of resistive switching – a change in the resistance of the oxide film between a low-resistance state (LRS) and a high-resistance state (HRS) under an applied electric field. One of the methods for the technical implementation of neuromorphic systems is cross-bar structures, which are ReRAM cells interconnected by crossed data buses. Such a structure imitates the architecture of the biological brain, which contains low-power computing elements – neurons – connected by special channels – synapses. The choice of the ReRAM oxide film material is an important task that determines the characteristics of the future neuromorphic system. An analysis of the literature showed that many metal oxides (TiO2, ZnO, NiO, ZrO2, HfO2) exhibit a resistive switching effect. It is worth noting that manufacturing nanocomposites based on these materials allows the advantages of each material to be emphasized and its disadvantages mitigated. Therefore, it was decided to use a ZnₓTiᵧHfzOᵢ nanocomposite as the basis for manufacturing the neuromorphic structures. It is also worth noting that the ZnₓTiᵧHfzOᵢ nanocomposite does not need electroforming, a step that degrades the parameters of the formed ReRAM elements. Currently, this material is not well studied; therefore, the study of the resistive switching effect in the forming-free ZnₓTiᵧHfzOᵢ nanocomposite is an important task and the goal of this work. A forming-free nanocomposite ZnₓTiᵧHfzOᵢ thin film was grown by pulsed laser deposition (Pioneer 180, Neocera Co., USA) on a SiO2/TiN (40 nm) substrate. Electrical measurements were carried out using a semiconductor characterization system (Keithley 4200-SCS, USA) with W probes. During measurements, the TiN film was grounded. The analysis of the obtained current-voltage characteristics showed resistive switching from the HRS to the LRS at +1.87±0.12 V, and from the LRS to the HRS at -2.71±0.28 V. Endurance testing showed that the HRS was 283.21±32.12 kΩ and the LRS was 1.32±0.21 kΩ over 100 measurements. The HRS/LRS ratio was about 214.55 at a reading voltage of 0.6 V. The results can be useful for applying forming-free ZnₓTiᵧHfzOᵢ nanocomposite films in neuromorphic systems manufacturing. This work was supported by RFBR, according to research project № 19-29-03041 mk. The results were obtained using the equipment of the Research and Education Center «Nanotechnologies» of Southern Federal University.
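
A minimal sketch of how the reported switching window could be summarized from endurance readings at a fixed read voltage is given below; the generated readings are synthetic stand-ins, not the measured data.

```python
import statistics

def switching_window(hrs_readings_ohm, lrs_readings_ohm):
    """Summarize endurance data: mean and standard deviation of HRS and LRS resistances,
    plus the HRS/LRS ratio (memory window) used to judge state separation."""
    hrs_mean, hrs_sd = statistics.mean(hrs_readings_ohm), statistics.stdev(hrs_readings_ohm)
    lrs_mean, lrs_sd = statistics.mean(lrs_readings_ohm), statistics.stdev(lrs_readings_ohm)
    return {
        "HRS_kohm": (hrs_mean / 1e3, hrs_sd / 1e3),
        "LRS_kohm": (lrs_mean / 1e3, lrs_sd / 1e3),
        "window": hrs_mean / lrs_mean,
    }

# Synthetic readings at an assumed read voltage of 0.6 V, for illustration only.
hrs = [283e3 + 5e3 * i for i in (-2, -1, 0, 1, 2)]
lrs = [1.32e3 + 0.05e3 * i for i in (-2, -1, 0, 1, 2)]
print(switching_window(hrs, lrs))
```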

Keywords: nanotechnology, nanocomposites, neuromorphic systems, RRAM, pulsed laser deposition, resistive switching effect

Procedia PDF Downloads 108
373 Using Low-Calorie Gas to Generate Heat and Electricity

Authors: Andrey Marchenko, Oleg Linkov, Alexander Osetrov, Sergiy Kravchenko

Abstract:

Low-calorie gases include biogas, coal gas, coke oven gas, associated petroleum gas, sewage gases, etc. These gases are usually released into the atmosphere or burned in flares, causing substantial damage to the environment. However, with the right approach, low-calorie gas fuel can become a valuable source of energy. This determines the relevance of developing technologies for low-calorific gas utilization. As an example, this work considers one way of utilizing coal mine gas, because Ukraine ranks fourth in the world in terms of coal mine gas emission (4.7% of total global emissions, or 1.2 billion m³ per year). Experts estimate that coal mine gas is actively released in 70-80 percent of existing mines in Ukraine. The main component of coal mine gas is methane (25-60%). Methane has a 21 times greater impact on the greenhouse effect than carbon dioxide, so the disposal problem has become increasingly important in the context of the growing need to address problems of climate, ecology and environmental protection; the emissions have negative effects of both local and global nature. The efforts of the United Nations and the World Bank led to the adoption of the 'Zero Routine Flaring by 2030' program, dedicated to ending the flaring of these gases and to their utilization for generating heat and electricity. This study proposes to use coal mine gas as a fuel for gas engines to generate heat and electricity. Analysis of the physical-chemical properties of low-calorie gas fuels allowed a suitable engine to be chosen and the influence of the fuel composition on its techno-economic indicators to be estimated. The most suitable engine for low-calorie gas is one with pre-combustion chamber jet ignition. Ukraine has accumulated extensive experience in the production and operation of 1100 kW gas engines of type GD100 (10GDN 207/2*254) fueled by natural gas. The pre-combustion chamber jet ignition system and quality control in GD100-type engines allow depleted (lean) fuel mixtures to be burned, which in turn decreases the concentration of harmful substances in the exhaust gases. The main problems of coal mine gas as a fuel for internal combustion engines (ICE) are its low calorific value, the presence of components that adversely affect combustion processes and the service life of the ICE, the instability of its composition, and weak ignition. In some cases, these problems can be solved by adapting the engine design to coal mine gas as fuel (changing the compression ratio, increasing the fuel injection quantity, changing the ignition timing, increasing spark plug energy, etc.). It is shown that the use of coal mine gas in engines with a prechamber did not lead to significant changes in the indicated parameters (ηi = 0.43-0.45). However, it significantly increases the volumetric fuel consumption, which requires an increased fuel injection quantity to ensure constant nominal engine power. Thus, the utilization of low-calorie gas fuels in stationary GD100-type gas engines will significantly reduce emissions of harmful substances into the atmosphere while generating cheap electricity and heat.
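
As a back-of-the-envelope illustration of why volumetric fuel consumption rises with low-calorie gas, the sketch below estimates the volumetric flow needed to hold nominal power at an unchanged indicated efficiency for two assumed heating values; all property values are assumptions, not measurements from the GD100 tests.

```python
def required_gas_flow(power_kw, indicated_eff, lhv_mj_per_m3):
    """Volumetric fuel flow [m3/h] needed to deliver power_kw at a given efficiency."""
    fuel_energy_kw = power_kw / indicated_eff                 # required chemical energy rate [kJ/s]
    return fuel_energy_kw * 3600.0 / (lhv_mj_per_m3 * 1000.0)  # kJ/h divided by kJ/m3

# Illustrative comparison: natural gas (~36 MJ/m3) vs. an assumed coal mine gas
# with roughly 40% methane (~14 MJ/m3), both at an assumed efficiency of 0.44.
natural_gas = required_gas_flow(power_kw=1100.0, indicated_eff=0.44, lhv_mj_per_m3=36.0)
mine_gas = required_gas_flow(power_kw=1100.0, indicated_eff=0.44, lhv_mj_per_m3=14.0)
print(natural_gas, mine_gas, mine_gas / natural_gas)  # the ratio shows the needed flow increase
```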

Keywords: gas engine, low-calorie gas, methane, pre-combustion chamber, utilization

Procedia PDF Downloads 247
372 Neighborhood Sustainability Assessment Tools: A Conceptual Framework for Their Use in Building Adaptive Capacity to Climate Change

Authors: Sally Naji, Julie Gwilliam

Abstract:

Climate change remains a challenging matter for humans and the built environment in the 21st century, where the need to consider adaptation to climate change in the development process is paramount. However, there remains a lack of information regarding how we should prepare responses to this issue, such as through developing organized and sophisticated tools enabling the adaptation process. This study aims to build a systematic framework to investigate the potential that Neighborhood Sustainability Assessment (NSA) tools might offer in enabling both the analysis and the building of adaptive capacity to climate change. The framework presented in this paper discusses this issue in three main phases. The first part attempts to link sustainability and climate change in the context of adaptive capacity. It is argued that in deciding to promote sustainability in the context of climate change, both the resilience and vulnerability processes become central. However, there is still a gap in the current literature regarding how the sustainable development process can respond to climate change, as well as how the resilience of practical strategies might be evaluated. It is suggested that the integration of sustainability assessment processes with both resilience thinking and vulnerability might provide important components for addressing adaptive capacity to climate change. A critical review of existing literature is presented, illustrating the current lack of work integrating these three concepts in the context of addressing adaptive capacity to climate change. The second part aims to identify the most appropriate scale at which to address the built environment for climate change adaptation. It is suggested that the neighborhood scale can be considered more suitable than either the building or urban scales. It then presents the example of NSAs and discusses the need to explore their potential role in promoting adaptive capacity to climate change. The third part of the framework presents a comparison among three example NSAs: BREEAM Communities, LEED-ND, and CASBEE-UD. These three tools have been selected as the most developed and comprehensive assessment tools currently available for the neighborhood scale. This study concludes that NSAs are likely to provide the basis for an organized framework to address the practical process of analyzing and promoting adaptive capacity to climate change. It is further argued that vulnerability (exposure and sensitivity) and resilience (interdependence and recovery) form essential aspects to be addressed in the future assessment of NSAs' capability to adapt to both short- and long-term climate change impacts. Finally, it is acknowledged that further work is now required to understand impact assessment in terms of the range of physical sectors (water, energy, transportation, building, land use and ecosystems) and actor and stakeholder engagement, as well as a detailed evaluation of the NSA indicators, together with a barriers diagnosis process.

Keywords: adaptive capacity, climate change, NSA tools, resilience, sustainability

Procedia PDF Downloads 363
371 Coordinative Remote Sensing Observation Technology for a High Altitude Barrier Lake

Authors: Zhang Xin

Abstract:

Barrier lakes are lakes formed when water is impounded in valleys, river valleys or riverbeds blocked by landslides, earthquakes, debris flows, and other factors. They pose great potential safety hazards. When the water is stored to a certain extent, the dam may burst in the case of a strong earthquake or rainstorm, and the lake water overflows, resulting in large-scale flood disasters. In order to ensure the safety of people's lives and property downstream, it is very necessary to monitor barrier lakes. However, it is very difficult and time-consuming to manually monitor a barrier lake in high-altitude areas due to the harsh climate and steep terrain. With the development of earth observation technology, remote sensing monitoring has become one of the main ways to obtain observation data. Compared with a single satellite, multi-satellite cooperative remote sensing observation has more advantages: its spatial coverage is extensive, observation time is continuous, imaging types and bands are abundant, and it can monitor and respond quickly to emergencies and complete complex monitoring tasks. Monitoring with multi-temporal and multi-platform remote sensing satellites can obtain a variety of observation data in time, acquire key information such as the water level and water storage capacity of the barrier lake, scientifically judge the situation of the barrier lake and reasonably predict its future development trend. In this study, Lake Sarez, which formed on February 18, 1911, in the central part of the Pamir as a result of blockage of the Murgab River valley by a landslide triggered by a strong earthquake with a magnitude of 7.4 and an intensity of 9, is selected as the research area. Since its formation, Lake Sarez has aroused widespread international concern about its safety. At present, mechanical methods are more commonly used in international analyses of the safety of Lake Sarez, and remote sensing methods are seldom used. This study combines remote sensing data with field observation data and uses 'space-air-ground' joint observation technology to study the changes in the water level and water storage capacity of Lake Sarez in recent decades and evaluate its safety. A dam-break scenario is simulated, and the future development trend of Lake Sarez is predicted. The results show that: 1) in recent decades, the water level of Lake Sarez has not changed much and has remained at a stable level; 2) unless there is a strong earthquake or heavy rain, the Lake Sarez dam is unlikely to break under normal conditions; 3) Lake Sarez will remain stable in the future, but it is necessary to establish a remote sensing early warning system in the Lake Sarez area; 4) coordinative remote sensing observation technology is feasible for the high-altitude barrier lake of Sarez.
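
A simplified sketch of how storage change can be estimated from remotely sensed water level and surface area series, using a trapezoidal approximation between successive observations, is given below; the series are placeholders, not Lake Sarez measurements.

```python
def storage_changes(levels_m, areas_m2):
    """Approximate volume change between successive observations:
    dV ~ (A_i + A_{i+1}) / 2 * (h_{i+1} - h_i), in cubic metres."""
    assert len(levels_m) == len(areas_m2)
    return [
        0.5 * (areas_m2[i] + areas_m2[i + 1]) * (levels_m[i + 1] - levels_m[i])
        for i in range(len(levels_m) - 1)
    ]

# Placeholder series: water level [m a.s.l.] from satellite altimetry and surface area [m^2]
# from optical imagery; summing the increments gives the net storage change over the period.
levels = [3263.0, 3263.4, 3263.1, 3263.2]
areas = [79.6e6, 80.1e6, 79.8e6, 79.9e6]
increments = storage_changes(levels, areas)
print(increments, sum(increments))
```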

Keywords: coordinative observation, disaster, remote sensing, geographic information system (GIS)

Procedia PDF Downloads 104
370 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented by the critical speculations of the post-war era, took shape. The new era we are now living in, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems that are progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no option other than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 136
369 An Exploratory Study of Changing Organisational Practices of Third-Sector Organisations in Mandated Corporate Social Responsibility in India

Authors: Avadh Bihari

Abstract:

Corporate social responsibility (CSR) has become a global parameter to define corporates' ethical and responsible behaviour. It was a voluntary practice in India until 2013, driven by various guidelines, and has been a mandate since 2014 under the Companies Act, 2013. This has compelled corporates to redesign their CSR strategies by bringing structure, planning, accountability, and transparency into their processes, with a mandate to 'comply or explain'. Based on the author's M.Phil. dissertation, this paper presents the changes in the organisational practices and institutional mechanisms of third-sector organisations (TSOs), using the theoretical frameworks of institutionalism and co-optation. It became an interesting case as India is the only country to have a law on CSR that mandates not only the reporting but also the spending. The space of CSR in India is changing rapidly and affecting multiple institutions, in the context of the changing roles of the state, market, and TSOs. Several factors, such as stringent regulation of foreign funding, mandatory CSR pushing corporates to look out for NGOs, and the dependency of Indian NGOs on CSR funds, have come to the fore almost simultaneously, which makes this an important area of study. Further, the paper aims to address the gap in the literature on the effects of mandated CSR on the functioning of TSOs through the empirical and theoretical findings of this study. The author adopted an interpretivist position in this study to explore changes in organisational practices from the participants' experiences. Data were collected through in-depth interviews with five corporate officials, eleven officials from six TSOs, and two academicians, located in Mumbai and Delhi, India. The findings of this study show that the legislation has institutionalised CSR and that TSOs get co-opted in the process of implementing mandated CSR. Seventy percent of corporates in India implement their CSR projects through TSOs; this has affected the organisational practices of TSOs to a large extent. They are compelled to recruit an expert workforce, create new departments for monitoring & evaluation and communications, and adopt corporate project-implementation management practices. These are attempts to institutionalise the TSOs so that they can produce calculated results as demanded by corporates. In this process, TSOs get co-opted in a struggle to secure funds and lose their autonomy. The normative, coercive, and mimetic isomorphisms of institutionalism come into play as corporates are mandated to take up CSR, thereby influencing the organisational practices of TSOs. These results suggest that corporates and TSOs require an understanding of each other's work culture to develop mutual respect and work towards the goal of sustainable development of communities. Further, TSOs need to retain their autonomy and understanding of ground realities, without which they become an extension of the corporate funder. For a successful CSR project, engagement beyond funding is required from the corporate, through involvement rather than interference. CSR-led community development can be structured by management practices to an extent, but these cannot overshadow the knowledge and experience of TSOs.

Keywords: corporate social responsibility, institutionalism, organisational practices, third-sector organisations

Procedia PDF Downloads 97
368 Assessment of Ecosystem Readiness for Adoption of Circularity: A Multi-Case Study Analysis of Textile Supply Chain in Pakistan

Authors: Azhar Naila, Steuer Benjamin

Abstract:

Over-exploitation of resources and the burden on natural systems have provoked worldwide concerns about potential resource and supply risks in the future. It has been estimated that the consumption of materials and resources will double by 2060, substantially mounting the amount of waste and emissions produced by individuals, organizations, and businesses, which necessitates sustainable technological innovations to address the problem. Therefore, there is a need to design products and services purposefully for material and resource efficiency. This directs us toward the conceptualization and implementation of the 'circular economy' (CE), which has gained considerable attention among policymakers, researchers, and businesses in the past decade. A large amount of literature focuses on the concept of CE. However, contextual empirical research on the need to embrace CE is still scarce in an emerging economy like Pakistan, where the traditional take-make-dispose economic model is quite common. Textile exports account for approximately 61% of Pakistan's total exports, and the industry provides employment for about 40% of the country's total industrial workforce. The industry provides job opportunities to above 10 million farmers, with cotton as the main crop of Pakistan. Consumers, companies, and the government have explored very limited CE potential in the country. This gap has motivated us to carry out the present study. The study is based on a mixed-method approach, for which key informant interviews have been conducted to gain insight into the present state of ecosystem readiness for the adoption of CE in 20 textile manufacturing industries. The study covers the following areas: i) the level of understanding of the CE concept among key stakeholders in the textile manufacturing industry; ii) the extent to which companies are pushing boundaries to invest in circularity-based initiatives and take risks; and iii) whether the current national policy framework supports the adoption of CE. Qualitative assessment has been undertaken using MAXQDA to analyze the data from the key informant interviews; the data have been transcribed and coded for further analysis. The results show that most of the key stakeholders have a clear understanding of the concept, whereas a few consider it to be relevant only to the end-of-life treatment of waste generated by the industry. Non-governmental organizations have been observed to be key players in creating awareness among the manufacturing industries. Most companies have indicated their consent to invest in initiatives related to the adoption of CE, whereas a few consider themselves far behind in the race due to a lack of financial resources and support from responsible institutions. Mostly, the industries have an ambitious vision for integrating CE into company policy but seem not ready to take significant steps to nurture a culture of experimentation. Moreover, the government is not playing any vital role in the transition towards CE; rather, it has been preoccupied with the state's uncertain political situation. Presently, Pakistan does not have any policy framework that supports the transition towards CE. Acknowledging the present landscape, a well-informed CE transition is immediately required.

Keywords: circular economy, textile supply chain, textile manufacturing industries, resource efficiency, ecosystem readiness, multi-case study analysis

Procedia PDF Downloads 32
367 Operational Characteristics of the Road Surface Improvement

Authors: Iuri Salukvadze

Abstract:

Construction plays an important role in the history of mankind; there is not a single product in our lives in which the builder's work is not materialized, because creating all of it requires setting up factories, roads, bridges, etc. The function of the Republic of Georgia as part of the Europe-Asia connecting transport corridor has significantly increased. In the context of this transit function, a large part of the cargo traffic belongs to motor transport; hence, the improvement of motor road infrastructure is rather important and raises new, increased operational demands for existing as well as new motor roads. Construction of a durable road surface involves rather large costs, but it provides high transport-operational properties such as higher speeds, lower fuel consumption, less tire wear, etc. If the traffic intensity is high, the expenses are therefore recouped rapidly and income increases accordingly. If the traffic intensity is relatively low, it is recommended to use lightened road pavement structures so that capital investment does not exceed the normative level. The road pavement is divided into the following basic types: asphaltic concrete and cement concrete. Asphaltic concrete is the most advanced type of road pavement. It is arranged in two or three layers on a rigid foundation and is compacted. Asphaltic concrete is an artificial building material whose layers are composed of a selected and measured stone skeleton and sand, bound by bitumen and a mineral powder mixture. A less strictly selected similar material is called a bitumen-mineral mixture. Asphaltic concrete is a non-rigid building material that withstands vertical loads well; it is less resistant to the impact of horizontal forces. Cement concrete is a monolithic and durable material that withstands horizontal loads well and is less resistant to vertical loads. Cement concrete consists of strictly selected and measured stone material and sand, with cement as the binder. A cement concrete road pavement consists of separate slabs with dimensions from 3-5 up to 6-8 meters. The slabs are reinforced by a rather complex system. Joints are arranged between the slabs to avoid additional stresses caused by temperature-induced changes in slab length. For the joint behavior of the separate slabs, they are connected by metal rods; the rods accommodate changes in slab length and transfer vertical forces and bending moments to the slabs. The foundation layers must be extremely durable, which requires high-quality stone material, cement, and metal. The qualification work aims to improve traffic conditions on motor roads by prolonging their service life and improving their operational characteristics. The work consists of three chapters, 80 pages, 5 tables and 5 figures. It states general concepts as well as tests carried out by various companies using modern methods and their results. Chapter III presents the tests we carried out related to this issue and specific examples for improving the operational characteristics.
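
Since the joints between cement concrete slabs exist to absorb temperature-induced length changes, a rough check of the expected slab movement can be made with the linear expansion relation ΔL = α·L·ΔT; the coefficient and temperature swing used below are typical assumed values, not data from the tests described.

```python
def slab_expansion_mm(length_m, alpha_per_k=1.0e-5, delta_t_k=40.0):
    """Free length change of a concrete slab, dL = alpha * L * dT, returned in millimetres."""
    return alpha_per_k * length_m * delta_t_k * 1000.0

# Slabs of 5 m and 8 m length over an assumed 40 K seasonal temperature swing;
# the joint and dowel arrangement must accommodate movements of this order.
for length in (5.0, 8.0):
    print(length, "m slab ->", slab_expansion_mm(length), "mm")
```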

Keywords: asphalt, cement, cylindrical sample of asphalt, building

Procedia PDF Downloads 203
366 Driver of Migration and Appropriate Policy Concern Considering the Southwest Coastal Part of Bangladesh

Authors: Aminul Haque, Quazi Zahangir Hossain, Dilshad Sharmin Chowdhury

Abstract:

Human migration is a growing concern around the world, and recurrent disasters and climate change impacts have a great influence on migration. Bangladesh is one of the most disaster-prone countries and has a greater susceptibility to stress migration driven by recurrent disasters and climate change. The study was conducted to investigate the factors that strongly influence current migration and the changing pattern of life and livelihood means in the southwest coastal part of Bangladesh. Moreover, the study also revealed a strong relationship between disasters and migration, as well as the appropriate policy concerns. To explore this relation, both qualitative and quantitative methods were applied: a questionnaire survey was conducted at the household level using a simple random sampling technique, along with different secondary data sources for understanding policy concerns and practices. The study explores the most influential drivers of migration and their relationship with social, economic and environmental drivers. The study indicates that the environmental driver has the greater effect on the intention of permanent migration (t=1.481, p-value=0.000) at the 1 percent significance level. A significant number of respondents indicated that the abrupt pattern of cyclones, floods, salinity intrusion and rainfall is the most significant environmental driver in the decision on permanent migration. The study also found that temporary migration has increased two-fold compared to the last ten (10) years. It also appears from the study that environmental factors have great implications for the changing pattern of occupations in the study area; about 76% of respondents have now changed their livelihood modality compared to their traditional practices. The study reveals that migration has the foremost impact on children and women by increasing hardship and creating critical social insecurity. The route of permanent migration is indeed not smooth; these migrations are creating urban pressure and conflict in the Chittagong Hill Tracts of Bangladesh. The study notes that existing policy provides no safeguards for stress migrants and no measures for safe migration and resettlement, considering only emergency response and shelter. The majority (98%) of people believe that migration should not be an adaptation strategy, but contrary to this, the younger group of respondents believes that safe migration could be an adaptation strategy that could bring positive results compared to the other resilience strategies. On the other hand, a significant number of respondents stated that appropriate policy measures could be an adaptation strategy for forming a resilient community and reducing migration through meaningful livelihood options with appropriate protection measures.

Keywords: environmental driver, livelihood, migration, resilience

Procedia PDF Downloads 247
365 System-Driven Design Process for Integrated Multifunctional Movable Concepts

Authors: Oliver Bertram, Leonel Akoto Chama

Abstract:

In today's civil transport aircraft, the design of flight control systems is based on the experience gained from previous aircraft configurations, with a clear distinction between primary and secondary flight control functions for controlling the aircraft attitude and trajectory. Significant system improvements are now seen particularly in multifunctional moveable concepts, where the flight control functions are no longer considered separate but integral. This allows new functions to be implemented in order to improve the overall aircraft performance. However, the classical design process for flight controls is sequential and insufficiently interdisciplinary. In particular, the systems discipline is involved only rudimentarily in the early phase. In many cases, the task of systems design is limited to meeting the requirements of the upstream disciplines, which may lead to integration problems later. For this reason, an incremental development approach to design is required to reduce the risk of a complete redesign. Although the potential of, and path to, multifunctional moveable concepts has been shown, the complete re-engineering of aircraft concepts with less classical moveable concepts is associated with considerable design risk due to the lack of design methods. This represents an obstacle to major leaps in technology. This gap in the state of the art widens even further if, in the future, unconventional aircraft configurations are to be considered, where no reference data or architectures are available. This means that the experience-based approach used for conventional configurations is limited and not applicable to the next generation of aircraft. In particular, there is a need for methods and tools for a rapid trade-off between new multifunctional flight control system architectures. To close this gap in the state of the art, an integrated system-driven design process for multifunctional flight control systems of non-classical aircraft configurations will be presented. The overall goal of the design process is to find optimal solutions for single or combined target criteria quickly, from the very large solution space for the flight control system. In contrast to the state of the art, all disciplines are involved in a holistic design in an integrated rather than a sequential process. To emphasize the systems discipline, this paper focuses on the methodology for designing moveable actuation systems in the context of this integrated design process for multifunctional moveables. The methodology includes different approaches for creating system architectures, component design methods, as well as the necessary process outputs to evaluate the systems. An application example of a reference configuration is used to demonstrate the process and validate the results. For this, new unconventional hydraulic and electrical flight control system architectures are calculated, which result from the higher requirements of the multifunctional moveable concept. In addition to typical key performance indicators such as mass and power requirements, the results regarding the feasibility and wing integration aspects of the system components are examined and discussed. This is intended to show how the systems design can influence and drive the wing and overall aircraft design.
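
A minimal sketch of the kind of rapid architecture trade-off the process targets is shown below: candidate actuation architectures are scored with a weighted sum of normalized key performance indicators; the architectures, values, and weights are invented for illustration and do not come from the reference configuration.

```python
def rank_architectures(candidates, weights):
    """Score candidate actuation architectures with a weighted sum of normalized KPIs
    (lower is better for every KPI here) and return them sorted best-first."""
    kpis = list(weights)
    maxima = {k: max(c[k] for c in candidates.values()) for k in kpis}  # normalize per KPI
    scores = {
        name: sum(weights[k] * c[k] / maxima[k] for k in kpis)
        for name, c in candidates.items()
    }
    return sorted(scores.items(), key=lambda item: item[1])

candidates = {  # invented values: installed mass [kg], peak power demand [kW], integration penalty [-]
    "centralized hydraulic": {"mass": 320, "peak_power": 45, "penalty": 0.2},
    "distributed EHA":       {"mass": 290, "peak_power": 60, "penalty": 0.4},
    "hybrid EHA + EMA":      {"mass": 300, "peak_power": 52, "penalty": 0.3},
}
print(rank_architectures(candidates, weights={"mass": 0.5, "peak_power": 0.3, "penalty": 0.2}))
```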

Keywords: actuation systems, flight control surfaces, multi-functional movables, wing design process

Procedia PDF Downloads 126
364 Electro-Hydrodynamic Effects Due to Plasma Bullet Propagation

Authors: Panagiotis Svarnas, Polykarpos Papadopoulos

Abstract:

Atmospheric-pressure cold plasmas continue to gain increasing interest for various applications due to their unique properties, like cost-efficient production, high chemical reactivity, low gas temperature, adaptability, etc. Numerous designs have been proposed for the production of these plasmas in terms of electrode configuration, driving voltage waveform, and working gas(es). However, in order to exploit most of the advantages of these systems, the majority of the designs are based on dielectric-barrier discharges (DBDs) in either the filamentary or the glow regime. A special category of the DBD-based atmospheric-pressure cold plasmas refers to the so-called plasma jets, where a carrier noble gas is guided by the dielectric barrier (usually a hollow cylinder) and left to flow into the atmospheric air, where a complicated hydrodynamic interplay takes place. Although it is now well established that these plasmas are generated by ionizing waves resembling streamer propagation in many ways, they exhibit distinct characteristics that are better reflected in the terms 'guided streamers' or 'plasma bullets'. These 'bullets' travel with supersonic velocities both inside the dielectric barrier and in the channel formed by the noble gas during its penetration into the air. The present work is devoted to the interpretation of the electro-hydrodynamic effects that take place downstream of the dielectric barrier opening, i.e., in the noble gas-air mixing area where plasma bullets propagate under the influence of local electric fields in regions of variable noble gas concentration. Herein, we focus on the role of the local space charge and the residual ionic charge left behind after the bullet propagation in modifying the gas flow field. The study communicates both experimental and numerical results, coupled in a comprehensive manner. The plasma bullets are here produced by a custom device having a quartz tube as a dielectric barrier and two external ring-type electrodes driven by a sinusoidal high voltage at 10 kHz. Helium gas is fed to the tube, and schlieren photography is employed for mapping the flow field downstream of the tube orifice. The mixture mass conservation equation, the momentum conservation equation, the energy conservation equation in terms of temperature, and the helium transport equation are solved simultaneously, revealing the physical mechanisms that govern the experimental results. Namely, we deal with electro-hydrodynamic effects mainly due to momentum transfer from atomic ions to neutrals. The atomic ions are left behind as residual charge after the bullet propagation and gain energy from the locally created electric field. The electro-hydrodynamic force is eventually evaluated.
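
For orientation, a schematic form of the coupled equations named above and of the electro-hydrodynamic body force is given below; the notation is ours and the exact formulation used by the authors may differ.

```latex
% Mixture continuity, momentum with an EHD body force, helium transport, and the EHD force itself:
\begin{aligned}
&\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!(\rho\,\mathbf{u}) = 0, \\
&\rho\!\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\!\cdot\!\nabla\mathbf{u}\right)
  = -\nabla p + \nabla\!\cdot\!\boldsymbol{\tau} + \mathbf{f}_{\mathrm{EHD}}, \\
&\frac{\partial (\rho\,Y_{\mathrm{He}})}{\partial t} + \nabla\!\cdot\!(\rho\,\mathbf{u}\,Y_{\mathrm{He}})
  = \nabla\!\cdot\!(\rho\,D\,\nabla Y_{\mathrm{He}}), \\
&\mathbf{f}_{\mathrm{EHD}} \approx \rho_q\,\mathbf{E} \approx e\,n_i\,\mathbf{E},
\end{aligned}
% where rho_q is the residual space-charge density, n_i the atomic ion density, and E the local field.
```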

Keywords: atmospheric-pressure plasmas, dielectric-barrier discharges, schlieren photography, electro-hydrodynamic force

Procedia PDF Downloads 127