Search results for: modified Bessel functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4741

541 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources – such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done so using deterministic models that extrapolated historic data on weather patterns and power demand, while ignoring the risk of an unbalanced grid – a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than assuming known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels – 1%, 5% and 10% – important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output - rather than extrapolating historic data - paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid - and assigning it different thresholds - reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
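The CVaR constraint is not spelled out in the abstract; under the common assumption of S equally weighted weather scenarios, a standard scenario-based (Rockafellar-Uryasev) linearization of such a constraint in a capacity-planning problem might look as follows (a sketch, not the authors' exact model):

```latex
% Hypothetical scenario-based CVaR constraint (Rockafellar-Uryasev linearization).
% x: installed capacities, c: cost vector, L_s(x): unserved energy (loss) in scenario s,
% alpha: tail probability (1%, 5% or 10%), S: number of weather scenarios.
\begin{aligned}
\min_{x,\,\zeta,\,u}\quad & c^{\top} x \\
\text{s.t.}\quad & \zeta + \frac{1}{\alpha S}\sum_{s=1}^{S} u_s \;\le\; \mathrm{CVaR}_{\max}, \\
& u_s \;\ge\; L_s(x) - \zeta,\qquad u_s \ge 0,\qquad s = 1,\dots,S .
\end{aligned}
```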

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 152
540 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquakes. While traditional sensors can be applied only at a limited number of points, providing only partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of distributed sensors. This paper is aimed at presenting the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite has been obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes seem to be particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to the nanofiller dispersion or to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mix, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
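In piezoresistive self-sensing, strain is recovered from the fractional change of electrical resistance through a gauge factor. A minimal sketch of that conversion follows; the gauge factor and baseline resistance are hypothetical illustrative values, not figures reported by the study.

```python
import numpy as np

# Minimal sketch of piezoresistive strain sensing: strain is estimated from the
# fractional change of electrical resistance via a gauge factor. GF and R0 are
# hypothetical illustrative values, not figures from the study.
GAUGE_FACTOR = 80.0      # assumed sensitivity of the CNT-cement composite
R0 = 1.2e5               # assumed unstrained resistance in ohms

def strain_from_resistance(resistance_ohm: np.ndarray) -> np.ndarray:
    """Convert a measured resistance trace into an estimated strain trace."""
    fractional_change = (resistance_ohm - R0) / R0
    return fractional_change / GAUGE_FACTOR

# Example: a slow compressive load reduces resistance slightly over time.
measured = R0 * (1.0 - 1.6e-3 * np.sin(np.linspace(0, np.pi, 50)))
print(strain_from_resistance(measured).min())   # most negative (compressive) strain estimate
```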

Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring

Procedia PDF Downloads 215
539 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure

Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano

Abstract:

Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructure requires a deep knowledge of vulnerabilities, threats and potential attacks that may occur, as well as defence, prevention and mitigation strategies. The possibility to remotely monitor and control almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk especially to those devices connected to the Internet. Modern medical devices used in hospitals are not an exception and are more and more being connected to enhance their functionalities and ease their management. Moreover, hospitals are environments with high flows of people, who are difficult to monitor and can rather easily gain access to the same places used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed and treated together as cyber-physical threats, which means that an integrated approach is required. SAFECARE, an integrated cyber-physical security solution, tries to respond to the presented issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres, to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine. Indeed, the SAFECARE architecture foresees i) a macroblock related to the cyber security field, where innovative tools are deployed to monitor network traffic, systems and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems and innovative AI algorithms to detect behavior anomalies; and iii) an integration system that collects all the incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability.
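The abstract mentions impact propagation driven by modular ontologies and a rule-based engine. A highly simplified, hypothetical sketch of rule-based cascading-impact propagation over an asset dependency graph is shown below; the assets, rule and names are illustrative only and do not reflect the actual SAFECARE ontologies or engine.

```python
# Hypothetical, highly simplified rule-based impact propagation over an asset
# dependency graph. The assets and the single rule ("an asset fails if a dependency
# fails") are illustrative only, not the SAFECARE implementation.
DEPENDS_ON = {
    "patient_monitoring": ["hospital_lan"],
    "hospital_lan": ["power_supply"],
    "access_control": ["power_supply"],
}

def propagate_impact(failed_asset: str) -> set[str]:
    """Return every asset whose function is impacted, directly or in cascade."""
    impacted, frontier = {failed_asset}, [failed_asset]
    while frontier:
        current = frontier.pop()
        for asset, deps in DEPENDS_ON.items():
            if current in deps and asset not in impacted:
                impacted.add(asset)       # rule: an asset fails if a dependency fails
                frontier.append(asset)
    return impacted

print(propagate_impact("power_supply"))
# {'power_supply', 'hospital_lan', 'access_control', 'patient_monitoring'}
```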

Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security

Procedia PDF Downloads 148
538 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes

Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini

Abstract:

Background: There is growing evidence that Cerebral Vascular Accident (CVA) can be a consequence of COVID-19 infection. Understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of Virtual Reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed an ischemic stroke of the right globus pallidus, thalamus, and internal capsule. Conventional rehabilitation was started two weeks later, with virtual reality (VR) included. The game-based VR technology used was developed for stroke patients and is based on upper extremity exercises and functions. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency. Mild lower lip dynamic asymmetry was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed on the basis of upper extremity physiotherapy exercises for post-stroke patients to increase the active, voluntary movement of the upper extremity joints and improve function. The conventional program was initiated with active exercises, shoulder sanding-board exercises for joint ROMs, shoulder wall-walking, the shoulder wheel, and combined movements of the shoulder, elbow, and wrist joints, alternating flexion-extension and pronation-supination movements, and pegboard and Purdue pegboard exercises. Fine-motor training also included smart gloves, biofeedback, the finger ladder, and writing. The difficulty of the game increased at each stage of the practice with progress in patient performance. Outcome: After 6 weeks of treatment, gait and speech were normal and upper extremity strength had improved to near-normal status. No adverse effects were noted. Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases provides new approaches to improve therapeutic outcomes and prognosis as well as increased satisfaction rates among patients.

Keywords: covid-19, stroke, virtual reality, rehabilitation

Procedia PDF Downloads 128
537 Exploring the Potential of Bio-Inspired Lattice Structures for Dynamic Applications in Design

Authors: Axel Thallemer, Aleksandar Kostadinov, Abel Fam, Alex Teo

Abstract:

For centuries, the forming processes in nature have served as a source of inspiration for both architects and designers. It seems that most human artifacts are based on ideas which stem from the observation of the biological world and its principles of growth. In fact, in the cultural history of Homo faber, materials have mostly been used in their solid state: from hand axe to computer mouse, the principle of employing matter has not changed since the first creation. Only recently, with the help of additive-generative fabrication processes and Computer Aided Design (CAD), have designers been enabled to deconstruct solid artifacts into an outer skin and an internal lattice structure. The intention behind this approach is to create a new topology which reduces resources and integrates functions into an additively manufactured component. However, looking at the currently employed lattice structures, it is very clear that those lattice geometries have not been thoroughly designed, but rather taken from the basic-geometry libraries which are usually provided by the CAD software. In the study presented here, a group of 20 industrial design students created new and unique lattice structures using natural paragons as their models. The selected natural models comprise both the animate and inanimate world, with examples ranging from the spiraling of narwhal tusks, the off-shooting of mangrove roots and the minimal surfaces of soap bubbles, up to the rhythmical arrangement of molecular geometry, as in the case of SiOC (Carbon-Rich Silicon Oxycarbide). This ideation process led to the design of a geometric cell, which served as a basic module for the lattice structure, whereby the cell was created in visual analogy to its respective natural model. The spatial lattices were fabricated additively in mostly [X]3 by [Y]3 by [Z]3 units’ volumes using selective powder bed melting in polyamide with 50 mm (z-axis) and 100 µm resolution, and subjected to mechanical testing of their elastic zone in a biomedical laboratory. The results demonstrate that additively manufactured lattice structures can acquire different properties when they are designed in analogy to natural models. Several of the lattices displayed the ability to store and return kinetic energy, while others revealed a structural failure mode which can be exploited for purposes where a controlled collapse of a structure is required. This discovery allows for various new applications of functional lattice structures within industrially created objects.

Keywords: bio-inspired, biomimetic, lattice structures, additive manufacturing

Procedia PDF Downloads 133
536 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia

Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez

Abstract:

Dysphagia, or difficulty in swallowing, mostly affects elderly people: 56-78% of the institutionalized and 44% of the hospitalized. Thickening of liquid food is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized in three groups according to their viscosity: nectar (50-350 mPa•s), honey (350-1750 mPa•s) and pudding (>1750 mPa•s). The adequate viscosity level should be identified for every patient, according to her/his impairment. Nevertheless, a systematic review on dysphagia diets performed recently indicated that there is no evidence to suggest any transition of clinical relevance between the three levels proposed. It was also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters need to be evaluated as a possible alternative to viscosity. The aim of this study was to evaluate the instrumental extrusion-compression test as a possible tool to characterize changes over time in water thickened with various products at the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all of them with a modified starch base. Only one of them, Nut, also contained 6.4% gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the corresponding amount for nectar, honey and pudding consistencies in 300 mL of tap water at 18ºC-20ºC. The mixture was stirred for about 30 s. Once it was homogeneously spread, it was dispensed in 30 mL plastic glasses, always filled to the same height. Each of these glasses was used as a measuring point. Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). Penetration distance was set at 10 mm and speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)–time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as the reference parameter. Viscosity (mPa•s) and F (g) were found to be highly correlated and evolved similarly over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and useful tool to assess the physical characteristics of thickened liquids.
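The two statistical relations described above (a quadratic time dependence of the maximum force F and a linear viscosity-from-force model) can be sketched as follows; the numeric data are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import linregress

# Sketch of the two statistical relations described in the abstract, on invented
# placeholder data: (1) maximum extrusion force F follows a quadratic model over
# time after mixing, (2) viscosity can be predicted linearly from F.
t_min = np.array([1, 5, 10, 20, 30, 40, 50, 60])               # minutes after mixing
force_g = np.array([210, 340, 430, 560, 640, 690, 720, 735])   # hypothetical F (g)
visc_mpas = np.array([400, 700, 950, 1300, 1550, 1700, 1800, 1850])  # hypothetical mPa*s

quad_coeffs = np.polyfit(t_min, force_g, deg=2)     # time-dependent quadratic model of F
fit = linregress(force_g, visc_mpas)                # linear viscosity-from-force model

print("F(t) ~ %.3f t^2 + %.2f t + %.1f" % tuple(quad_coeffs))
print("viscosity ~ %.2f * F + %.1f  (r = %.3f)" % (fit.slope, fit.intercept, fit.rvalue))
```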

Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener

Procedia PDF Downloads 352
535 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e. genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed TARF, a dedicated web toolkit for annotating RNA-related genomic features. The TARF web tool is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, its overlap with RNA transcript components is identified, and the information is combined in one table which is available for copy and download. Summary statistics about ambiguous assignments are also provided. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features, related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), which were obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e. H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
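The core annotation step described above amounts to classifying each feature by coordinate overlap with transcript components. The sketch below illustrates this on a single-exon, plus-strand toy transcript; real transcripts are spliced and stranded, and TARF relies on a full gene model (GTF), so this is a conceptual illustration only, not the tool's implementation.

```python
# Simplified illustration of the core annotation step: assigning a genomic feature
# to a transcript component (5'UTR / CDS / 3'UTR) by coordinate overlap. The toy
# coordinates below are hypothetical.
transcript = {"5UTR": (1000, 1199), "CDS": (1200, 2599), "3UTR": (2600, 3000)}

def annotate(feature_start: int, feature_end: int) -> list[str]:
    """Return the transcript components that a BED-style feature overlaps."""
    hits = []
    for component, (start, end) in transcript.items():
        if feature_start <= end and feature_end >= start:   # interval overlap test
            hits.append(component)
    return hits

print(annotate(2550, 2650))   # ['CDS', '3UTR'] -> an ambiguous assignment
```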

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 193
534 Modelling the Behavior of Commercial and Test Textiles against Laundering Process by Statistical Assessment of Their Performance

Authors: M. H. Arslan, U. K. Sahin, H. Acikgoz-Tufan, I. Gocek, I. Erdem

Abstract:

Various exterior factors have perpetual effects on textile materials during wear, use and laundering in everyday life. In accordance with their frequency of use, textile materials are required to be laundered at certain intervals. The medium in which the laundering process takes place has inevitable detrimental physical and chemical effects on textile materials, caused by the parameters inherent to the process. The inherent structures of various textile materials result in many different physical, chemical and mechanical characteristics. Because of their specific structures, these materials behave differently against several exterior factors. By modeling the behavior of commercial and test textiles group-wise against the laundering process, it is possible to disclose the relation between these two groups of materials, which will lead to a better understanding of their behavior, in terms of similarities and differences, against the washing parameters of laundering. Thus, the goal of the current research is to examine the behavior of two groups of textile materials, commercial textiles and test textiles, towards the main washing machine parameters during the laundering process (temperature, load quantity, mechanical action and water level), concentrating on shrinkage, pilling, sewing defects, collar abrasion, defects other than sewing, whitening and overall properties of the textiles. In this study, cotton fabrics were preferred as commercial textiles because garments made of cotton are the products most demanded by textile consumers in daily life. A full factorial experimental set-up was used to design the experimental procedure. All profiles, each always including all of the commercial and test textiles, were laundered for 20 cycles in a commercial home laundering machine to investigate the effects of the chosen parameters. For the laundering process, a modified version of the ‘‘IEC 60456 Test Method’’ was utilized. The amount of detergent was adjusted (0.5 g per liter) depending on the load quantity level. Datacolor 650®, the EMPA Photographic Standards for Pilling Test and visual examination were utilized to test and characterize the textiles. Furthermore, in the current study the relation between commercial and test textiles in terms of their performance was investigated in depth with the help of statistical analysis performed in the MINITAB® package program, modeling their behavior against the parameters of the laundering process. In the experimental work, the behavior of both groups of textiles towards the washing machine parameters was visually and quantitatively assessed in the dry state.
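A full factorial design of this kind is typically evaluated with an analysis of variance over the washing-machine factors. The sketch below shows such an analysis in Python; the factor names, levels and the synthetic shrinkage response are hypothetical placeholders, not the study's actual MINITAB dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical full-factorial laundering experiment: two levels per factor,
# two replicates per run, and an invented shrinkage response.
rng = np.random.default_rng(1)
levels = [(t, l, m, w) for t in (30, 60) for l in (2, 4) for m in (0, 1) for w in (0, 1)]
rows = [dict(temperature=t, load=l, mech_action=m, water_level=w,
             shrinkage=0.5 + 0.03 * t + 0.4 * m + rng.normal(0, 0.2))
        for t, l, m, w in levels for _ in range(2)]
df = pd.DataFrame(rows)

# Main effects plus two-way interactions of the four washing-machine parameters.
model = ols("shrinkage ~ (C(temperature) + C(load) + C(mech_action) + C(water_level)) ** 2",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```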

Keywords: behavior against washing machine parameters, performance evaluation of textiles, statistical analysis, commercial and test textiles

Procedia PDF Downloads 339
533 Cognitive Performance and Everyday Functionality in Healthy Greek Seniors

Authors: George Pavlidis, Ana Vivas

Abstract:

The demographic shift towards an aging population has stimulated the examination of seniors’ mental health and ability to live independently. The corresponding literature depicts the relation between cognitive decline and everyday functionality with aging, focusing largely on individuals who are reaching or have crossed the threshold of various forms of neuropathology and disability. In this context, a recent meta-analysis depicts a moderate relation between cognitive performance and everyday functionality in AD sufferers. However, there has not been an analogous effort to examine this relation in the healthy spectrum of aging (i.e., in samples that are not challenged by a neurodegenerative disease). There is a consensus that the assessment tools designed to detect neuropathology and those that assess cognitive performance in healthy adults are distinct, thus their universal use in cognitively challenged and in healthy adults is not always valid. The same applies to the assessment of everyday functionality. In addition, it is argued that everyday functionality should be examined with culturally adjusted assessment tools, since many vital everyday tasks are heterotypical among distinct cultures. Therefore, this study set out to examine the relation between cognitive performance and everyday functionality a) in the healthy spectrum of aging and b) by adjusting the everyday functionality tools EPT and OTDL-R to the Greek cultural context. In Greece, 107 cognitively healthy seniors (Mage = 62.24) completed a battery of neuropsychological tests and everyday functionality tests. Both were carefully chosen to be sensitive to fluctuations of performance in the healthy spectrum of cognitive performance and everyday functionality. The everyday functionality assessment tools were modified to reflect the local cultural context (i.e., EPT-G and OTDL-G). The results showed that performance in all everyday functionality measures declines with age (r between .197 and .509). Statistically significant correlations emerged between cognitive performance and everyday functionality assessments, ranging from r = 0.202 to r = 0.510. A series of independent regression analyses including the scores of the cognitive assessments yielded statistically significant models that explained 20.9% to 32.4% of the variance (adjusted R²) in the everyday functionality score indexes. All everyday functionality measures were independently predicted by the TMT B-A index, an indicator of executive function. Stepwise regression analyses showed that TMT B-A and age were statistically significant independent predictors of EPT-G and OTDL-G. It was concluded that everyday functionality declines with age and that cognitive performance and everyday functionality may be related in the healthy spectrum of aging. Age seems not to be the sole contributing factor in everyday functionality decline; executive control contributes as well. Moreover, it was concluded that the EPT-G and OTDL-G are valuable tools to assess everyday functionality in Greek seniors who are not cognitively challenged, especially for research purposes. Future research should examine the contributing factors of better cognitive vitality, especially in executive control, which is vital for the maintenance of independent living capacity with aging.
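The regression reported above (everyday functionality predicted from TMT B-A and age) can be sketched as follows; the synthetic data and coefficients are invented placeholders that merely mirror the variable names in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of the reported regression: an everyday-functionality score (EPT-G)
# predicted from the TMT B-A index and age, on invented placeholder data.
rng = np.random.default_rng(42)
n = 107
age = rng.uniform(55, 80, n)
tmt_b_minus_a = rng.normal(45, 15, n)        # seconds, executive-control index
ept_g = 30 - 0.08 * age - 0.10 * tmt_b_minus_a + rng.normal(0, 1.5, n)

X = sm.add_constant(pd.DataFrame({"tmt_b_minus_a": tmt_b_minus_a, "age": age}))
fit = sm.OLS(ept_g, X).fit()
print(fit.rsquared_adj, fit.params)          # adjusted R-squared and coefficients
```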

Keywords: cognition, everyday functionality, aging, cognitive decline, healthy aging, Greece

Procedia PDF Downloads 510
532 Analyzing Water Waves in Underground Pumped Storage Reservoirs: A Combined 3D Numerical and Experimental Approach

Authors: Elena Pummer, Holger Schuettrumpf

Abstract:

To date, underground pumped storage plants, an outstanding alternative to classical pumped storage plants, do not exist. They are needed to ensure the required balance between production and demand of energy. As short- to medium-term storage, pumped storage plants have long been operated economically, but options for their expansion are locally limited. The reasons are in particular the required topography and the extensive human land use. Through the use of underground reservoirs instead of surface lakes, expansion options could be increased. While fulfilling the same functions, several hydrodynamic processes result from the specific design of the underground reservoirs and must be accounted for in the planning process of such systems. A combined 3D numerical and experimental approach yields previously unknown results about the occurring wave types and their behavior in dependence on different design and operating criteria. For the 3D numerical simulations, OpenFOAM was used and combined with an experimental approach in the laboratory of the Institute of Hydraulic Engineering and Water Resources Management at RWTH Aachen University, Germany. Using the finite-volume method and an explicit time discretization, a RANS simulation (k-ε) was run. Convergence analyses for different time discretizations, different meshes etc. and clear comparisons between both approaches lead to the result that the numerical and experimental models can be combined and used as a hybrid model. Undular bores, partly with secondary waves, and breaking bores occurred in the underground reservoir. Different water levels and discharges change the global effects, defined as the time-dependent average of the water level, as well as the local processes, defined as the single, local hydrodynamic processes (water waves). Design criteria, like branches, directional changes, changes in cross-section or bottom slope, as well as changes in roughness, have a great effect on the local processes, while the global effects remain unaffected. Design calculations for underground pumped storage plants were developed on the basis of existing formulae and the results of the hybrid approach. Using the design calculations, reservoir heights as well as oscillation periods can be determined, leading to knowledge of the construction and operation possibilities of the plants. Consequently, future plants can be hydraulically optimized by applying the design calculations to the local boundary conditions.
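A typical check when merging numerical and experimental results into a hybrid model is a direct comparison of simulated against measured water-level traces. The sketch below illustrates this on invented placeholder signals (the sampling interval, amplitudes and period are assumptions, not values from the study).

```python
import numpy as np

# Illustrative hybrid-model check: compare a simulated and a measured water-level
# trace via the RMS error and the dominant oscillation period. Signals are invented.
dt = 0.05                                   # s, assumed sampling interval
t = np.arange(0, 60, dt)
measured = 0.30 * np.sin(2 * np.pi * t / 12.0) + 0.01 * np.random.randn(t.size)
simulated = 0.29 * np.sin(2 * np.pi * t / 12.0)

rmse = np.sqrt(np.mean((simulated - measured) ** 2))

spectrum = np.abs(np.fft.rfft(measured - measured.mean()))
freqs = np.fft.rfftfreq(t.size, d=dt)
period = 1.0 / freqs[np.argmax(spectrum)]   # dominant oscillation period in seconds

print(f"RMSE = {rmse:.3f} m, dominant period = {period:.1f} s")
```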

Keywords: energy storage, experimental approach, hybrid approach, undular and breaking Bores, 3D numerical approach

Procedia PDF Downloads 200
531 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices

Authors: S. Srinivasan, E. Cretu

Abstract:

The information flow (e.g. block-diagram or signal flow graph) paradigm for the design and simulation of Microelectromechanical (MEMS)-based systems makes it easy to model MEMS devices using causal transfer functions and to interface them with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g. gap-varying MEMS capacitor structures) depend not only on the component, but also on the specific excitation mode (e.g. voltage or charge actuation). In contrast, the energy flow modeling paradigm in terms of generalized across-through variables offers an acausal perspective, clearly separating the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and the geometrical nonlinearities and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which have the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy flow system models. Test bench simulations of the library models compare favorably with both analytical results and more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronics integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
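For the idealized gap-varying parallel-plate capacitor suspended on a linear spring, the classic closed-form pull-in result (a standard textbook reference point for the numerical pull-in computations mentioned above, not a formula taken from the paper) is:

```latex
% Ideal parallel-plate electrostatic actuator with linear suspension:
% plate area A, initial gap g_0, spring constant k, permittivity \varepsilon_0.
% Stability is lost once the movable plate has travelled one third of the gap.
x_{\mathrm{PI}} = \frac{g_0}{3}, \qquad
V_{\mathrm{PI}} = \sqrt{\frac{8\,k\,g_0^{3}}{27\,\varepsilon_0 A}}
```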

Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape

Procedia PDF Downloads 119
530 Influence of Atmospheric Circulation Patterns on Dust Pollution Transport during the Harmattan Period over West Africa

Authors: Ayodeji Oluleye

Abstract:

This study used thirty years (1983-2012) of Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) and reanalysis datasets to investigate the influence of atmospheric circulation on dust transport during the Harmattan period over West Africa. The Harmattan dust mobilization and atmospheric circulation pattern were evaluated using a kernel density estimate, which shows the areas where most points are concentrated between the variables. The evolution of the Inter-Tropical Discontinuity (ITD), the Sea Surface Temperature (SST) over the Gulf of Guinea, and the North Atlantic Oscillation (NAO) index during the Harmattan period (November-March) was also analyzed, and graphs of the average ITD positions, SST and NAO were examined on a daily basis. Pearson product-moment correlation analysis was also employed to assess the effect of atmospheric circulation on Harmattan dust transport. The results show that the departure (increase) of TOMS AI values from the long-term mean (1.64) occurred from around the 21st of December, which signifies dust-rich days during the winter period. Strong TOMS AI signals were observed from January to March, with the maximum occurring in the latter months (February and March). The inter-annual variability of TOMS AI revealed that the dust-rich years were 1984-1985, 1987-1988, 1997-1998, 1999-2000, and 2002-2004. A significantly dust-poor year was found between 2005 and 2006 in all the periods. The study found that strong north-easterly (NE) trade winds prevailed over most of the Sahelian region of West Africa during the winter months, with the maximum wind speed reaching 8.61 m/s in January. The strength of the NE winds determines the extent of dust transport to the coast of the Gulf of Guinea during winter. This study has confirmed that the presence of the Harmattan is strongly dependent on the SST over the Atlantic Ocean and the ITD position. The locus of the average SST and ITD positions over West Africa could be described by polynomial functions. The study concludes that the evolution of the near-surface wind field at 925 hPa and the variations of SST and ITD positions are the major large-scale atmospheric circulation systems driving the emission, distribution and transport of Harmattan dust aerosols over West Africa. However, the NAO was shown to have a less significant effect on Harmattan dust transport over the region.
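The two statistical tools named above, a kernel density estimate and a Pearson correlation, can be sketched on invented placeholder data as follows (the ITD latitudes and AI values below are not the study's observations).

```python
import numpy as np
from scipy.stats import gaussian_kde, pearsonr

# Sketch of the two analysis steps named in the abstract, on invented placeholder
# data: a kernel density estimate of the joint (ITD latitude, TOMS AI) distribution
# and a Pearson correlation between the two variables.
rng = np.random.default_rng(0)
itd_lat = rng.normal(12.0, 2.0, 500)                         # hypothetical daily ITD latitude (deg N)
toms_ai = 3.0 - 0.12 * itd_lat + rng.normal(0, 0.3, 500)     # hypothetical daily aerosol index

kde = gaussian_kde(np.vstack([itd_lat, toms_ai]))            # where the point cloud is densest
print(kde([[12.0], [1.56]]))                                 # density near the presumed joint mode

r, p = pearsonr(itd_lat, toms_ai)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
```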

Keywords: atmospheric circulation, dust aerosols, Harmattan, West Africa

Procedia PDF Downloads 295
529 Botulinum Toxin a in the Treatment of Late Facial Nerve Palsy Complications

Authors: Akulov M. A., Orlova O. R., Zaharov V. O., Tomskij A. A.

Abstract:

Introduction: One of the common postoperative complications of posterior cranial fossa (PCF) and cerebello-pontine angle tumor treatment is facial nerve palsy, which leads to multiple, treatment-resistant impairments of mimic muscle structure and function. Within 4-6 months after facial nerve palsy, with insufficient therapeutic intervention, patients develop a postparalytic syndrome, which includes such symptoms as mimic muscle insufficiency, mimic muscle contractures, synkinesis and spontaneous muscular twitching. A novel method of treatment is the use of a recent local neuromuscular blocking agent – botulinum toxin A (BTA). Experience with BTA treatment suggests that it can be successfully used for late facial nerve palsy complications, significantly increasing patients' quality of life. Study aim: To evaluate the efficacy of botulinum toxin A (BTA) (Xeomin) treatment in patients with late facial nerve palsy complications. Patients and Methods: 31 patients aged 27-59 years were evaluated 6 months after the development of facial nerve palsy. All patients received conventional treatment, including massage, movement therapy etc. Facial nerve palsy developed after acoustic nerve tumor resection in 23 (74.2%) patients and after petroclival meningioma resection in 8 (25.8%) patients. The first group included 17 (54.8%) patients receiving BT therapy; the second group, 14 (45.2%) patients continuing conventional treatment. BT injections were performed at synkinesis or contracture points, 1-2 U on the injured side and 2-4 U on the healthy side (for symmetry). Facial nerve function was evaluated after 2 and 4 months of therapy according to the House-Brackmann scale. Pain syndrome alleviation was assessed on the VAS. Results: At baseline, all patients in the first and second groups demonstrated a postparalytic syndrome. We observed a significant improvement in patients receiving BTA after only one month of treatment. The mean VAS score at baseline was 80.4±18.7 and 77.9±18.2 in the first and second group, respectively. In the first group, after one month of treatment we observed a significant decrease of the pain syndrome – the mean VAS score was 44.7±10.2 (p<0.01), whereas in the second group the VAS score was as high as 61.8±9.4 points (p>0.05). By the 3rd month of treatment, pain syndrome intensity continued to decrease in both groups, but the first group demonstrated significantly better results; mean scores were 8.2±3.1 and 31.8±4.6 in the first and second group, respectively (p<0.01). The total House-Brackmann score at baseline was 3.67±0.16 in the first group and 3.74±0.19 in the second group. Treatment resulted in a significant symptom improvement in the first group, with no improvement in the second group. After 4 months of treatment, the House-Brackmann score in the first group was 3.1-fold lower than in the second group (p<0.05). Conclusion: Botulinum toxin injections decrease postparalytic syndrome symptoms in patients with facial nerve palsy.

Keywords: botulinum toxin, facial nerve palsy, postparalytic syndrome, synkinesis

Procedia PDF Downloads 280
528 Mild Auditory Perception and Cognitive Impairment in mid-Trimester Pregnancy

Authors: Tahamina Begum, Wan Nor Azlen Wan Mohamad, Faruque Reza, Wan Rosilawati Wan Rosli

Abstract:

Assessing auditory perception and cognitive function during pregnancy is necessary, as pregnant women need extra attentional effort, mainly for their executive function, to maintain their quality of life. This study aimed to investigate the neural correlates of cognitive and behavioral processing during mid-trimester pregnancy. Event-Related Potentials (ERPs) were studied using a 128-sensor net, and PAS or COWA (Controlled Oral Word Association), WCST (Wisconsin Card Sorting Test) and RAVLT (Rey Auditory Verbal Learning Test: immediate/interference recall (RAVLTIM), delayed recall (RAVLT DR) and total score (RAVLT TS)) were tested for the neuropsychological assessment. In total, 18 subjects were recruited (n = 9 in each group: control and pregnant). All participants of the pregnant group were within 16-27 weeks of gestation (mid-trimester). Age- and education-matched healthy control subjects were recruited in the control group. Participants were given a standardized test of auditory cognitive function, an auditory oddball paradigm, during the ERP study. In this paradigm, two different auditory stimuli (standard and target stimuli) were used, where subjects silently counted only the target stimuli, attending to them while ignoring the standard stimuli. Mean differences between target and standard stimuli were compared across groups. The N100 (auditory sensory ERP component) and P300 (auditory cognitive ERP component) were recorded at the T3, T4, T5, T6, Cz and Pz electrode sites. An equal number of electrodes showed non-significantly shorter amplitudes of the N100 component (except significantly shorter at T3, P = 0.05) and non-significantly longer latencies of the N100 component (except significantly longer latency at T5, P = 0.008) in the pregnant group compared to controls. In case of the P300 component, most electrode sites showed non-significantly higher amplitudes and an equal number of sites showed non-significantly shorter latencies in the pregnant group compared to controls. Neuropsychological results revealed non-significantly higher PAS scores and lower WCST, RAVLTIM and RAVLT DR scores in the pregnant group compared to controls. The N100 results and the RAVLT scores indicate that auditory perception is mildly impaired, and the P300 component suggests very mild cognitive dysfunction with preserved executive functions in the second trimester of pregnancy.
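N100 and P300 measures of the kind reported above are typically extracted by averaging the target-locked epochs and locating the most negative deflection in an early window and the most positive one in a later window. The sketch below illustrates this on a placeholder array; the sampling rate, window bounds and data are assumptions, not the study's recordings.

```python
import numpy as np

# Sketch of N100/P300 extraction from an averaged ERP. The epochs array is an
# invented placeholder (60 target epochs, one channel), not study data.
fs = 250                                    # assumed sampling rate in Hz
times = np.arange(-0.1, 0.8, 1 / fs)        # seconds relative to stimulus onset
epochs = np.random.randn(60, times.size)    # placeholder single-channel epochs
erp = epochs.mean(axis=0)                   # averaged event-related potential

def peak(window, polarity):
    """Amplitude and latency of the extreme deflection inside a time window."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask]) if polarity == "neg" else np.argmax(erp[mask])
    return erp[mask][idx], times[mask][idx]

n100_amp, n100_lat = peak((0.08, 0.15), "neg")
p300_amp, p300_lat = peak((0.25, 0.50), "pos")
print(f"N100: {n100_amp:.2f} uV at {n100_lat*1000:.0f} ms; "
      f"P300: {p300_amp:.2f} uV at {p300_lat*1000:.0f} ms")
```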

Keywords: auditory perception, pregnancy, stimuli, trimester

Procedia PDF Downloads 361
527 Lessons Learned from Push-Plus Implementation in Northern Nigeria

Authors: Aisha Giwa, Mohammed-Faosy Adeniran, Olufunke Femi-Ojo

Abstract:

Four decades ago, the World Health Organization (WHO) launched the Expanded Programme on Immunization (EPI). The EPI blueprint laid out the technical and managerial functions necessary to routinely vaccinate children with a limited number of vaccines, providing protection against diphtheria, tetanus, whooping cough, measles, polio and tuberculosis, and to prevent maternal and neonatal tetanus by vaccinating women of childbearing age with tetanus toxoid. Despite global efforts, Routine Immunization (RI) coverage in two of the World Health Organization (WHO) regions, the African Region and the South-East Asia Region, still remains short of its targets. As a result, the WHO Regional Director for Africa declared 2012 the year for intensifying RI in these regions, which also coincided with the declaration of polio as a programmatic emergency by the WHO Executive Board. In order to intensify routine immunization, the National Routine Immunization Strategic Plan (2013-2015) stated that its core priority is to ensure 100% adequacy and availability of vaccines for safe immunization. To achieve 100% availability, the “PUSH System” and then “Push-Plus” were adopted for vaccine distribution, replacing the inefficient “PULL” method. The NPHCDA plays the key role in coordinating activities in the areas of advocacy, capacity building, engagement of 3PLs for the states, as well as monitoring and evaluation of the vaccine delivery process. eHealth Africa (eHA) is a 3PL service provider engaged by State Primary Health Care Development Boards (SPHCDB) to ensure vaccine availability through the Vaccine Direct Delivery (VDD) project, which is essential to successful routine immunization services. The VDD project ensures the availability and adequate supply of high-quality vaccines and immunization-related materials to last-mile facilities. eHA’s commitment to the VDD project prompted an assessment of the project with respect to overall project performance, an evaluation of the process for necessary improvement suggestions, as well as its general impact across Kano State (where eHA had transitioned operations to the state), Bauchi State (where eHA currently manages delivery to all LGAs except 3 LGAs being managed by the state), Sokoto State (where eHA currently covers all LGAs) and Zamfara State (currently in-sourced and managed solely by the state).

Keywords: cold chain logistics, health supply chain system strengthening, logistics management information system, vaccine delivery traceability and accountability

Procedia PDF Downloads 272
526 One Year Follow up of Head and Neck Paragangliomas: A Single Center Experience

Authors: Cecilia Moreira, Rita Paiva, Daniela Macedo, Leonor Ribeiro, Isabel Fernandes, Luis Costa

Abstract:

Background: Head and neck paragangliomas are a rare group of tumors with a large spectrum of clinical manifestations. The approach to evaluating and treating these lesions has evolved over the last years. Surgery was the standard approach for these patients, but nowadays new imaging and radiation therapy techniques have changed that paradigm. Despite advances in treatment, the growth potential and clinical outcome of individual cases remain largely unpredictable. Objectives: Characterization of our institutional experience with the clinical management of these tumors. Methods: This was a cross-sectional study of patients with paragangliomas of the head, neck and cranial base followed in our institution between 01 January and 31 December 2017. Data on tumor location, catecholamine levels, the specific imaging modalities employed in the diagnostic workup, treatment modality, tumor control and recurrence, complications of treatment and hereditary status were collected and summarized. Results: A total of four female patients were followed between 01 January and 31 December 2017 in our institution. The mean age of our cohort was 53 (±16.1) years. The primary locations were the jugulotympanic region (n=2, 50%) and the carotid body (n=2, 50%), and only one of the carotid body tumors presented pulmonary metastasis at the time of diagnosis. None of the lesions were catecholamine-secreting. Two patients underwent genetic testing, with no mutations identified. The initial clinical presentation was variable, with decreased visual acuity and headache being symptoms present in all patients. In one of the cases, loss of all teeth of the lower jaw was the presenting symptomatology. Observation with serial imaging, surgical extirpation, radiation, and stereotactic radiosurgery were employed as treatment approaches according to the anatomical location and resectability of the lesions. As post-therapeutic sequelae, persistent tinnitus and disabling pain stand out, with one patient presenting glossopharyngeal neuralgia. Currently, all patients are under regular surveillance, with a median follow-up of 10 months. Conclusion: Ultimately, the clinical management of these tumors remains challenging owing to heterogeneity in clinical presentation, the existence of multiple treatment alternatives, and their potential to cause serious detriment to critical functions and, consequently, to interfere with patients' quality of life.

Keywords: clinical outcomes, head and neck, management, paragangliomas

Procedia PDF Downloads 128
525 Oxidative Stress Related Alteration of Mitochondrial Dynamics in Cellular Models

Authors: Orsolya Horvath, Laszlo Deres, Krisztian Eros, Katalin Ordog, Tamas Habon, Balazs Sumegi, Kalman Toth, Robert Halmosi

Abstract:

Introduction: Oxidative stress induces an imbalance in mitochondrial fusion and fission processes, finally leading to cell death. The two antioxidant molecules, BGP-15 and L2286, have beneficial effects on mitochondrial functions and on the cellular oxidative stress response. In this work, we studied the effects of these compounds on the processes of mitochondrial quality control. Methods: We used H9c2 cardiomyoblasts and isolated neonatal rat cardiomyocytes (NRCM) for the experiments. The concentrations of stressors and antioxidants were determined beforehand with the MTT assay. We applied 1-methyl-3-nitro-1-nitrosoguanidine (MNNG) at 125 µM, 400 µM and 800 µM concentrations for 4 and 8 hours on H9c2 cells. H₂O₂ was applied at 150 µM and 300 µM concentrations for 0.5 and 4 hours on both models. L2286 was administered in 10 µM, while BGP-15 in 50 µM doses. Cellular levels of the key proteins playing a role in mitochondrial dynamics were measured in Western blot samples. For the analysis of mitochondrial network dynamics, we applied electron microscopy and immunocytochemistry. Results: Due to MNNG treatment, the level of the fusion proteins (OPA1, MFN2) decreased, while the level of the fission protein DRP1 elevated markedly. The levels of the fusion proteins OPA1 and MFN2 increased in the L2286- and BGP-15-treated groups. During the 8-hour treatment period, the level of DRP1 also increased in the treated cells (p < 0.05). In the H₂O₂-stressed cells, administration of L2286 increased the level of OPA1 in both the H9c2 and NRCM models. MFN2 levels in isolated neonatal rat cardiomyocytes rose considerably due to BGP-15 treatment (p < 0.05). L2286 administration decreased the DRP1 level in H9c2 cells (p < 0.05). We observed that the H₂O₂-induced mitochondrial fragmentation could be decreased by L2286 treatment. Conclusion: Our results indicate that the PARP inhibitor L2286 has a beneficial effect on mitochondrial dynamics under oxidative stress, and also in the case of directly induced DNA damage. Similar conclusions could be drawn for BGP-15 administration, which, by reducing ROS accumulation, promotes fusion processes and thereby helps preserve cellular viability. Funding: GINOP-2.3.2-15-2016-00049; GINOP-2.3.2-15-2016-00048; GINOP-2.3.3-15-2016-00025; EFOP-3.6.1-16-2016-00004; ÚNKP-17-4-I-PTE-209

Keywords: H9c2, mitochondrial dynamics, neonatal rat cardiomyocytes, oxidative stress

Procedia PDF Downloads 134
524 CO₂ Conversion by Low-Temperature Fischer-Tropsch

Authors: Pauline Bredy, Yves Schuurman, David Farrusseng

Abstract:

To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept combines CO₂ with hydrogen supplied from water electrolysis powered by renewable sources, and is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The process discussed here is an upgrading of the well-known Fischer-Tropsch synthesis. The concept involves two cascade reactions in one pot: first the conversion of CO₂ into CO via the reverse water gas shift (RWGS) reaction, which is then followed by the Fischer-Tropsch Synthesis (FTS). Instead of using a Fe-based catalyst, which can carry out both reactions, we have chosen the strategy of decoupling the two functions (RWGS and FT) on two different catalysts within the same reactor. The FTS shall shift the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250°C) by converting the CO into hydrocarbons. This strategy shall enable optimization of the catalyst pair and thus a lowering of the reaction temperature, thanks to the equilibrium shift, to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst but also in the ability of the FT catalyst to be highly selective. Methane production is the main concern, as the energetic barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report the study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behavior under mild process conditions by the use of high-throughput experimentation. Our results show that at 250°C and 20 bars, cobalt catalysts mainly act as methanation catalysts. Indeed, CH₄ selectivity never drops below 80% despite the addition of various promoters (Nb, K, Pt, Cu) on the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to selectivities toward longer hydrocarbon chains (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system. Similar selectivity and conversion were obtained at reduction temperatures between 250-400°C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Thus, by coupling it with a more selective FT catalyst, better results are expected. This was achieved using a cobalt/iron FTS catalyst: the CH₄ selectivity dropped to 62% at 265°C, 20 bars, and a GHSV of 2500 ml/h/gcat. We propose that the conditions used for the cobalt catalysts could have generated this methanation, because these catalysts are known to show their best performance around 210°C in classical FTS, whereas iron catalysts are more flexible but are also known to have RWGS activity.
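For reference, the two cascaded reactions on which the tandem concept rests, written with standard textbook stoichiometries (not taken from the paper), are the mildly endothermic RWGS and the strongly exothermic FT chain growth:

```latex
% Reverse water-gas shift: endothermic and equilibrium-limited at low temperature.
\mathrm{CO_2 + H_2 \;\rightleftharpoons\; CO + H_2O}, \qquad
\Delta H^{\circ}_{298} \approx +41\ \mathrm{kJ\,mol^{-1}}
% Fischer-Tropsch chain growth: strongly exothermic, consuming the CO formed above
% and thereby pulling the RWGS equilibrium towards CO.
n\,\mathrm{CO} + 2n\,\mathrm{H_2} \;\longrightarrow\; \mathrm{(CH_2)}_n + n\,\mathrm{H_2O}
```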

Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process

Procedia PDF Downloads 41
523 Acceptability Process of a Congestion Charge

Authors: Amira Mabrouk

Abstract:

This paper deals with the acceptability of urban tolls in Tunisia. Price-based regulation, i.e. the urban toll, is the outcome of a political process constrained by a three-fold objective: effectiveness, equity and social acceptability. This produces economic interest groups and functions with incongruent preferences. The plausibility of this speculation goes hand in hand with the fact that these economic interest groups are also taxpayers, who undeniably perceive the urban toll as an additional charge. This wariness is coupled with inquiries about the conditions of usage and the redistribution of the collected tax revenue, and the idea of the Leviathan state completes the picture. In a nutshell, although research related to road congestion has proliferated, no de facto legitimacy can be claimed. Nonetheless, the theory on urban tolls leads economists to question ways to reduce the negative external effects linked to congestion. Only then does the urban toll appear to bear an answer to these issues. Undeniably, the urban toll raises inherent conflicts due to the apparent non-payment principle of a public asset as well as to the social perception of the new measure as a mere additional charge. However, when the main concern is effectiveness in its broad sense and social well-being, the main factors that determine the acceptability of such a tariff measure, along with the type of incentives, should be the object of a thorough, in-depth analysis. Before adopting this economic role, one has to recognize the factors that intervene in the acceptability of a congestion toll, a topic which has brought about a copious number of articles and reports that mostly lacked solid theoretical content. It is noticeable that nowadays uncertainties remain over the exact nature of the acceptability process. Accepting a congestion tariff could differ from one era to another, from one region to another and from one population to another, etc. Notably, this article, within a convenient time frame, attempts to bring into focus a link between the social acceptability of the urban congestion toll and the value of time, through a survey method rarely employed in Tunisia, the stated preference method. How can the urban toll, as a tax, be defined, justified and made acceptable? How can an equitable and effective congestion toll tariff be reached? How can the costs of this urban toll be covered? In what way can we make the redistribution of the urban toll revenue visible and economically equitable? How can the redistribution of the urban toll revenue compensate the disadvantaged when introducing such a tariff measure? This paper will offer answers to these research questions, following the line of contribution of Jules Dupuit (1844).
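The abstract links stated preferences to the value of time; in the standard discrete-choice treatment of stated preference data (a textbook relation, not a formula given by the author), each alternative carries a utility with cost and time coefficients, and the value of time follows from their ratio:

```latex
% Standard linear-in-parameters utility for a stated-preference (logit) model:
% c_j = monetary cost, t_j = travel time of alternative j, \varepsilon_j = random term.
U_j = \beta_c\, c_j + \beta_t\, t_j + \varepsilon_j,
\qquad
\mathrm{VoT} = \frac{\partial U/\partial t}{\partial U/\partial c} = \frac{\beta_t}{\beta_c}
```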

Keywords: congestion charge, social perception, acceptability, stated preferences

Procedia PDF Downloads 270
522 A (Morpho) Phonological Typology of Demonstratives: A Case Study in Sound Symbolism

Authors: Seppo Kittilä, Sonja Dahlgren

Abstract:

In this paper, a (morpho)phonological typology of proximal and distal demonstratives is proposed. Only the most basic proximal (‘this’) and distal (‘that’) forms have been considered; potentially more fine-grained distinctions based on proximity are not relevant to our discussion, nor are the other functions the discussed demonstratives may have. The sample comprises 82 languages that represent the linguistic diversity of the world’s languages, although the study is not based on a systematic sample. Four major types are distinguished: (1) vowel type (front vs. back, high vs. low vowel), (2) consonant type (front vs. back consonants), (3) additional-element type, and (4) varia. The proposed types can further be subdivided according to whether the attested difference concerns only, e.g., vowels, or whether there are also other changes. For example, the first type comprises both languages such as Betta Kurumba, where only the vowel changes (i ‘this’, a ‘that’), and languages like Alyawarra (nhinha vs. nhaka), where there are also other changes. In the second type, demonstratives are distinguished based on whether the consonants are front or back; typically front consonants (e.g., labial and dental) appear on proximal demonstratives and back consonants (such as velar or uvular consonants) on distal demonstratives. An example is provided by Bunaq, where bari marks ‘this’ and baqi ‘that’. In the third type, distal demonstratives typically have an additional element, making them longer in form than the proximal ones (e.g., Òko òne ‘this’, ònébé ‘that’), but the type also comprises languages where the distal demonstrative is simply phonologically longer (e.g., Ngalakan nu-gaʔye vs. nu-gunʔbiri). Finally, the last type comprises cases that do not fit into the three other types, where a number of strategies are used by the languages of this group. The first two types can be explained by iconicity: front or high phonemes appear on proximal demonstratives, while back or low phonemes are related to distal demonstratives. This means that proximal demonstratives are pronounced at the front and/or high part of the oral cavity, while distal demonstratives are pronounced lower and further back, which reflects the proximal/distal nature of their referents in the physical world. The first type is clearly the most common in our data (40/82 languages), which suggests a clear association with iconicity. Our findings support earlier findings that proximal and distal demonstratives have an iconic phonemic manifestation. For example, it has been argued that /i/ is related to smallness (small distance). Consonants, however, have not been considered before, or no systematic correspondences have been discovered. The third type, in turn, can be explained by markedness: the distal element is more marked than the proximal demonstrative. Moreover, iconicity is relevant here as well: some languages clearly use less linguistic substance for referring to entities close to the speaker, which is manifested in the longer (morpho)phonological form of the distal demonstratives. The fourth type contains different kinds of cases, and systematic generalizations are hard to make.

Keywords: demonstratives, iconicity, language typology, phonology

Procedia PDF Downloads 136
521 Effect of Pulsed Electrical Field on the Mechanical Properties of Raw, Blanched and Fried Potato Strips

Authors: Maria Botero-Uribe, Melissa Fitzgerald, Robert Gilbert, Kim Bryceson, Jocelyn Midgley

Abstract:

French fry manufacturing involves a series of processes in which the structural properties of potatoes are modified to produce the crispy french fries that consumers enjoy. In addition to the traditional manufacturing steps, the industry is applying a relatively new process, pulsed electric field (PEF) treatment, to whole potatoes. There is a wealth of information on the technical treatment conditions of PEF; however, there is a lack of information about its effect on the structural properties that determine texture and about its synergistic interactions with the other steps of french fry production. The effect of PEF on the starch gelatinisation properties of Russet Burbank potatoes was measured using a differential scanning calorimeter. Cation content (K+, Ca2+ and Mg2+) was determined by inductively coupled plasma optical emission spectrophotometry. Firmness and toughness of raw and blanched potatoes were determined in a uniaxial compression test. Moisture content was determined in a vacuum oven, and oil content was measured using the Soxhlet system with hexane. The final texture of the french fries, crispness, was determined using a three-point bend test. Triangle tests were conducted to determine whether consumers could perceive sensory differences between french fries that were PEF-treated and those that were not. The concentrations of K+, Ca2+ and Mg2+ decreased significantly in the raw potatoes after the PEF treatment. The PEF treatment significantly increased modulus of elasticity, compression strain, compression force and toughness in the raw potato. The PEF-treated raw potatoes were firmer and stiffer; their structure held together longer, resisted higher forces before fracture and stretched further than that of the untreated potatoes. The stress-strain relationship exhibited by the PEF-treated raw potato could be due to an increase in the permeability of the plasmalemma and tonoplast, allowing Ca2+ and Mg2+ cations to reach the cell wall and middle lamella and become available for cross-linking with pectin molecules. The PEF-treated raw potatoes exhibited slightly higher onset gelatinisation temperatures, similar peak temperatures and lower gelatinisation ranges than the untreated raw potatoes. The final moisture content of the french fries was not significantly affected by the PEF treatment. Oil content in the PEF-treated fries was lower than in the untreated french fries, although the difference was not statistically significant at the 5% level. The PEF treatment did not have an overall significant effect on french fry crispness (modulus of elasticity), flexural stress or strain. The triangle tests show that most consumers could not distinguish french fries that received a PEF treatment from those that did not.
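As a hedged illustration of how a triangle test outcome can be assessed, the sketch below applies a one-sided binomial test against the chance rate of 1/3; the panel size and number of correct identifications are invented, not the study's data.

```python
# Hedged sketch of a triangle-test significance check: under the null,
# panellists pick the odd sample by chance with probability 1/3.
# The counts below are illustrative placeholders.
from scipy.stats import binomtest

n_panellists = 36   # assumed panel size
n_correct = 15      # assumed number correctly identifying the odd (PEF) sample

result = binomtest(n_correct, n_panellists, p=1/3, alternative="greater")
print(f"Proportion correct: {n_correct / n_panellists:.2f}")
print(f"One-sided p-value: {result.pvalue:.3f}")
# A p-value >= 0.05 would be consistent with the abstract's finding that
# most consumers could not distinguish PEF-treated from untreated fries.
```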

Keywords: french fries, mechanical properties, PEF, potatoes

Procedia PDF Downloads 225
520 Personality Characteristics, Managerial Skills and Career Preference

Authors: Dinesh Kumar Srivastava

Abstract:

After the liberalization of the economy, technical education has seen rapid growth in India. A large number of institutions offer various engineering and management programmes. Every year, many students complete B.Tech/M.Tech and MBA programmes at different institutes and universities in India and search for jobs in industry. A large number of companies visit educational institutes for campus placements; these companies are interested in hiring competent managers. Most students show a preference for jobs at reputed companies and jobs with high compensation. In this context, this study was conducted to understand the career preferences of postgraduate students and junior executives. Personality characteristics influence work life as well as personal life. In the last two decades, the five-factor model of personality has been found to be a valid predictor of job performance and job satisfaction, an approach that has received support from studies conducted in different countries. It includes neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. Similarly, three social needs, namely achievement, affiliation and power, influence motivation and performance in certain job functions. Both approaches have been considered in this study. The objective of the study was, first, to analyse the relationship between personality characteristics and the career preferences of students and executives and, second, to analyse the relationship between personality characteristics and the skills of students. Three managerial skills, namely conceptual, human and technical, have been considered. The sample size was 266, including postgraduate students and junior executives; respondents had completed a BE/B.Tech/MBA programme. Three dimensions of career preference, namely identity, variety and security, and the three managerial skills were treated as dependent variables. The results indicated that neuroticism was not related to any dimension of career preference. Extraversion was not related to identity, variety or security, but it was positively related to the three skills. Openness to experience was positively related to the skills. Conscientiousness was positively related to variety and to the three skills. Similarly, the relationship between social needs and career preference was examined using correlation. The results indicated that need for achievement was positively related to variety, identity and security, and to managerial skills. Need for affiliation was positively related to the three dimensions of career preference as well as to managerial skills. Need for power was positively related to the three dimensions of career preference and to managerial skills. Social needs appear to be stronger predictors of career preference and managerial skills than the Big Five traits. The findings have implications for selection processes in industry.
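A brief sketch, with made-up data, of the kind of correlation analysis described above, relating trait scores to a career-preference dimension; the column names and sample values are assumptions, and only the sample size of 266 is taken from the abstract.

```python
# Illustrative sketch: Pearson correlations between trait/need scores and a
# career-preference dimension, as in the analysis described above.
# All scores below are simulated; variable names are assumptions.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 266  # sample size reported in the abstract
df = pd.DataFrame({
    "need_achievement": rng.normal(3.5, 0.6, n),
    "conscientiousness": rng.normal(3.8, 0.5, n),
    "variety": rng.normal(3.2, 0.7, n),   # one career-preference dimension
})

for trait in ["need_achievement", "conscientiousness"]:
    r, p = pearsonr(df[trait], df["variety"])
    print(f"{trait} vs variety: r = {r:.2f}, p = {p:.3f}")
```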

Keywords: big five traits, career preference, personality, social needs

Procedia PDF Downloads 261
519 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID

Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis

Abstract:

Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest X-rays and unpredictable staff shortages placed a huge strain on radiology departments' workloads. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. There could therefore be demand for a program that detects acute changes on imaging compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a de-identified set of 77 normal and 77 abnormal chest X-rays from patients with confirmed coronavirus infection was used to train, validate and test the algorithm. DICOM and PNG image formats were selected because they are lossless. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training involved teaching the convolutional neural network what constituted a normal study and which changes on the X-rays are compatible with coronavirus infection; the weightings were then modified and the model was executed again. Training used batch sizes of 8 over 25 epochs. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve of approximately 0.95, indicating approximately 95% accuracy in detecting changes on chest X-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. Following discussion with our programmer, there are areas where modifications in the weighting of the algorithm can be made to improve the detection rate. Given the program's high detection rate and the potential ease of implementation, it could be effective in assisting staff who are not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detect multiple pathologies, such as lung masses, would greatly assist both the radiology department and our colleagues by increasing workflow and detection rates.
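A minimal sketch of a small convolutional classifier using the training configuration reported above (batch size 8, 25 epochs, 100 training and 28 validation images); the architecture, image size and placeholder arrays are assumptions rather than the authors' actual model.

```python
# Minimal sketch of a binary CNN classifier (normal vs. COVID-compatible
# chest X-ray) with the reported training configuration. The architecture,
# image size and random placeholder data are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 224

def build_model() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # 1 = COVID-compatible changes
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# Placeholder arrays stand in for the de-identified X-ray images.
x_train = np.random.rand(100, IMG_SIZE, IMG_SIZE, 1).astype("float32")
y_train = np.array([0] * 50 + [1] * 50)
x_val = np.random.rand(28, IMG_SIZE, IMG_SIZE, 1).astype("float32")
y_val = np.array([0] * 14 + [1] * 14)

model = build_model()
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=8, epochs=25, verbose=2)
```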

Keywords: artificial intelligence, COVID, neural network, machine learning

Procedia PDF Downloads 74
518 Response of Caldeira De Tróia Saltmarsh to Sea Level Rise, Sado Estuary, Portugal

Authors: A. G. Cunha, M. Inácio, M. C. Freitas, C. Antunes, T. Silva, C. Andrade, V. Lopes

Abstract:

Saltmarshes are essential ecosystems from both an ecological and a biological point of view. Furthermore, they constitute an important social niche, providing valuable economic and protective functions. Understanding their rates and patterns of sedimentation is therefore critical for functional management and rehabilitation, especially under a sea-level rise (SLR) scenario. The Sado estuary is located 40 km south of Lisbon. It is a bar-built estuary, separated from the sea by a large sand spit, the Tróia barrier. Caldeira de Tróia is located on the free edge of this barrier and encompasses a salt marsh of ca. 21,000 m². Sediment cores were collected in the high and low marshes and in the mudflat area of the north bank of Caldeira de Tróia. From the low marsh core, fifteen samples were chosen for ²¹⁰Pb and ¹³⁷Cs determination at the University of Geneva; the cores from the high marsh and the mudflat are still being analyzed. A sedimentation rate of 2.96 mm/year was derived from ²¹⁰Pb using the Constant Flux Constant Sedimentation model. The ¹³⁷Cs profile shows a peak in activity (1963) between 15.50 and 18.50 cm, giving a 3.1 mm/year sedimentation rate for the past 53 years. The adopted sea-level rise scenario was based on a model with an initial SLR rate of 2.1 mm/year in 2000 and an acceleration of 0.08 mm/year². Based on harmonic analysis of 2005 data from the Setubal-Tróia tide gauge, a tide model was estimated and used to build tidal tables for the period 2000-2016; from these, mean water levels were determined for the same time span. A digital terrain model was created from LIDAR scanning with 2 m horizontal resolution (APA-DGT, 2011) and validated with altimetric data obtained with DGPS-RTK. The response model calculates a new elevation for each pixel of the DTM for 2050 and 2100, based on the sedimentation rate specific to each environment. At this stage, theoretical values were chosen for the high marsh and the mudflat (respectively, equal to and double the low marsh rate of 2.92 mm/year); these values will be rectified once sedimentation rates are determined for the other environments. For both projections, the total surface of the marsh decreases: by 2% in 2050 and by 61% in 2100. Additionally, the high marsh coverage diminishes significantly, indicating a regression in terms of maturity.
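A hedged sketch of the response-model logic described above: each DTM pixel is raised by its environment's sedimentation rate and compared with a sea level projected from the 2.1 mm/year initial rate and 0.08 mm/year² acceleration; the quadratic SLR form, the sedimentation rates and the toy elevation grid are illustrative assumptions, not the study's implementation.

```python
# Hedged sketch of a marsh response model: project cumulative sea-level rise
# since 2000 and the accreted elevation of each pixel, then compare them.
# Rates loosely follow the abstract; the grid and SLR form are placeholders.
import numpy as np

SED_RATE_MM_YR = {"low_marsh": 2.96, "high_marsh": 2.96, "mudflat": 5.92}

def sea_level_rise_mm(year: int, r0: float = 2.1, accel: float = 0.08) -> float:
    """Cumulative rise since 2000 assuming initial rate r0 and acceleration accel."""
    dt = year - 2000
    return r0 * dt + 0.5 * accel * dt ** 2

def project_elevation(z_mm: np.ndarray, env: str, year: int) -> np.ndarray:
    """Pixel elevation relative to the projected sea level in a given year."""
    dt = year - 2000
    return z_mm + SED_RATE_MM_YR[env] * dt - sea_level_rise_mm(year)

# Toy 2 m resolution elevation grid (mm above the 2000 mean water level).
z = np.full((3, 3), 500.0)
print(project_elevation(z, "low_marsh", 2100))   # negative values = submerged
```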

Keywords: ¹³⁷Cs, ²¹⁰Pb, saltmarsh, sea level rise, response model

Procedia PDF Downloads 238
517 The Spatial Analysis of Wetland Ecosystem Services Valuation on Flood Protection in Tone River Basin

Authors: Tingting Song

Abstract:

Wetlands are significant ecosystems that provide a variety of ecosystem services for humans, such as providing water and food resources, purifying water, regulating climate, protecting biodiversity, and offering cultural, recreational, and educational resources. Wetlands also provide benefits such as the reduction of flood damage, storm damage, and soil erosion, yet the flood protection services of wetlands are often ignored. Due to climate change, floods caused by extreme weather have occurred frequently in recent years, with a growing impact on people's production and lives and mounting economic losses. The study area is the Tone River basin in the Kanto area, Japan. The Tone is the second-longest river in Japan and has the largest basin area, and the basin still suffers heavy economic losses from floods. It is also one of the rivers that provide water for Tokyo and has an important influence on economic activity in Japan. The purpose of this study was to investigate land-use changes of wetlands in the Tone River basin and whether there are spatial differences in the value of wetland functions in mitigating the economic losses caused by floods. The study analyzed wetland land-use change in the basin based on Landsat data from 1980 to 2020. Combined with social-economic data such as flood economic losses, wetland area, GDP, and population density, a geospatial weighted regression model was constructed to analyze the spatial differences in wetland ecosystem service value. At present, flood protection relies mainly on hard engineering such as dams and reservoirs, but excessive dependence on hard engineering places heavy financial pressure on the government and has a large impact on the ecological environment. Natural wetlands can also play a role in flood management while providing diverse ecosystem services, and the construction and maintenance costs of natural wetlands are lower than those of hard engineering. Although it is not easy to say which approach is more effective for flood management, when the marginal value of a wetland is greater than the economic loss caused by floods per unit area, relying on the flood storage capacity of wetlands to reduce flood impacts may be considered; this can promote the sustainable development of wetland ecosystems. On the other hand, spatial analysis of wetland values can provide a more effective strategy for flood management in the Tone River basin.
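A hedged sketch of a locally weighted (geospatially weighted) regression of the kind described above, with flood losses regressed on wetland area, GDP and population density using Gaussian distance weights so that coefficients vary over space; the data, bandwidth and variable names are illustrative only.

```python
# Illustrative sketch of geographically/geospatially weighted regression:
# a weighted least-squares fit centred on one target location, with weights
# decaying with distance. All data, names and the bandwidth are made up.
import numpy as np

def local_coefficients(coords, X, y, target, bandwidth):
    """Weighted least squares centred on a single target location."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian distance kernel
    Xd = np.column_stack([np.ones(len(X)), X])     # add intercept
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta                                    # [intercept, wetland, gdp, pop]

rng = np.random.default_rng(2)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))          # grid-cell centroids (km)
X = rng.normal(size=(n, 3))                        # wetland area, GDP, pop. density
y = 1.0 - 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

beta = local_coefficients(coords, X, y, target=np.array([50.0, 50.0]), bandwidth=20.0)
print("Local coefficients at (50, 50):", np.round(beta, 2))
```

Repeating the fit at every grid cell yields a map of local coefficients, which is how spatial differences in the value of wetland flood protection would be visualised.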

Keywords: wetland, geospatial weighted regression, ecosystem services, environment valuation

Procedia PDF Downloads 85
516 Microglia Activation in Animal Model of Schizophrenia

Authors: Esshili Awatef, Manitz Marie-Pierre, Eßlinger Manuela, Gerhardt Alexandra, Plümper Jennifer, Wachholz Simone, Friebe Astrid, Juckel Georg

Abstract:

Maternal immune activation (MIA) resulting from maternal viral infection during pregnancy is a known risk factor for schizophrenia. The neural mechanisms by which maternal infections increase the risk for schizophrenia remain unknown, although the prevailing hypothesis argues that activation of the maternal immune system induces changes in the maternal-fetal environment that might interact with fetal brain development, leading to activation of fetal microglia and long-lasting functional changes in these cells. Based on post-mortem analyses showing an increased number of activated microglial cells in patients with schizophrenia, it can be hypothesized that these cells contribute to disease pathogenesis and may be actively involved in the gray matter loss observed in such patients. In the present study, we hypothesize that prenatal treatment with the inflammatory agent Poly(I:C) during embryogenesis contributes to microglial activation in the offspring, which may therefore represent a contributing factor to the pathogenesis of schizophrenia and underlines the need for new pharmacological treatment options. Pregnant rats were treated with a single intraperitoneal injection of Poly(I:C) or saline on gestation day 17. Brains of control and Poly(I:C) offspring were removed and cut into 20-μm-thick coronal sections using a cryostat. Brain slices were fixed and immunostained with an Iba1 antibody; Iba1 immunoreactivity was subsequently detected using a goat anti-rabbit secondary antibody. The sections were viewed and photographed under a microscope. The immunohistochemical analysis revealed an increase in microglial cell number in the prefrontal cortex of the offspring of Poly(I:C)-treated rats compared to the controls injected with NaCl. However, no significant differences in microglial activation were observed in the cerebellum between the groups. Prenatal immune challenge with Poly(I:C) was thus able to induce long-lasting changes in the offspring brains, leading to higher activation of microglial cells in the prefrontal cortex, a brain region critical for many higher brain functions, including working memory and cognitive flexibility, and possibly implicated in changes in cortical neuropil architecture in schizophrenia. Further studies will be needed to clarify the association between microglial cell activation and schizophrenia-related behavioral alterations.

Keywords: microglia, neuroinflammation, Poly(I:C), schizophrenia

Procedia PDF Downloads 404
515 Mental Health Awareness and Help Seeking Among Adolescents in Kerala

Authors: Fathima M. A., Milu Maria Anto

Abstract:

Aim: The current study aims to explore the understanding of mental health and the likelihood of seeking help for mental health problems among adolescents in the state of Kerala (India). Method: A cross-sectional exploratory design was used, with samples selected by convenience sampling. Ninety-nine high school and higher secondary school students who had enrolled in the programme "Responsible Adolescents (READ)", organized by MKMS Education in Kerala, participated in this study. Data were collected using Google Forms prior to the commencement of the READ programme. Open-ended questions were used to explore participants' understanding of mental health, mental health problems, the causes of mental health problems, and the role of mental health professionals. The likelihood of seeking help (from friends, parents, teachers and mental health professionals) for mental health problems was assessed using a visual analogue scale. Further open-ended questions were used to identify what changes in teachers and parents would make students feel more comfortable approaching them when they need help. Content analysis was used to identify themes, and the coded data were further analyzed using correlation. Results: The results show that students have a fair idea of what mental health is. Even though the majority are familiar with the names of mental health disorders, relatively few students identify mental health problems as irregularities in mental functions such as thoughts, emotions and behaviors. Students tend to treat the symptoms of mental health problems as their causes, and very few understand that biological variations and adverse childhood experiences are primary causes of mental health problems. Fewer than half of the students were aware of the roles of psychiatrists and psychologists in mental health treatment. The students were more likely to seek help from parents and friends during distress; they showed a medium inclination to seek help from mental health professionals and an even lower likelihood of seeking help from teachers. The majority of the students responded that they would be more comfortable approaching teachers if the teachers were more open-minded and approachable, as well as non-judgmental and non-dismissive. Conclusion: The findings show that there is inadequate awareness among adolescents about mental health problems and their causes. There is a lack of understanding of the roles of the two main mental health professions, which can pose a major hurdle to accessing adequate help from the appropriate professional at the right time. The low likelihood of seeking help from teachers for mental health problems is of particular concern; the major barriers reported by the students were teachers' judgmental and dismissive approach. The findings shed light on the current level of awareness about mental health and mental health help-seeking, which can be utilized in framing mental health awareness programmes for students as well as teachers.

Keywords: mental health awareness, adolescent mental health, help-seeking behavior, school mental health

Procedia PDF Downloads 246
514 Investigation of the Carbon Dots Optical Properties Using Laser Scanning Confocal Microscopy and Time-Resolved Fluorescence Microscopy

Authors: M. S. Stepanova, V. V. Zakharov, P. D. Khavlyuk, I. D. Skurlov, A. Y. Dubovik, A. L. Rogach

Abstract:

Carbon dots are small carbon-based spherical nanoparticles, typically less than 10 nm in size, that can be modified by surface passivation and heteroatom doping. The light-absorbing and photoluminescent properties of carbon dots have attracted significant attention for bioimaging and fluorescence sensing applications owing to advantages such as tunable fluorescence emission, photo- and thermostability, and low toxicity. In this study, carbon dots were synthesized by the solvothermal method from citric acid and ethylenediamine dissolved in water; the solution was heated for 5 hours at 200°C and then cooled to room temperature. Carbon dot films were obtained by evaporation from a high-concentration aqueous solution. Exposure of part of the carbon dot film to a 405 nm laser increased both luminescence intensity and light transmission, as detected using a confocal laser scanning microscope (LSM 710, Zeiss). A blueshift of the luminescence spectrum of up to 35 nm is observed, while the luminescence intensity increases more than twofold; the exact value of the shift depends on the duration of the laser exposure. This shift can be caused by the modification of surface groups of the carbon dots, which are responsible for long-wavelength luminescence. In addition, a 10 nm shift of the absorption peak and a decrease in the optical density at 350 nm, the wavelength associated with absorption by surface groups, are detected. The sample was also studied with a time-resolved confocal fluorescence microscope (MicroTime 100, PicoQuant), which made it possible to obtain time-resolved photoluminescence images and to construct emission decay curves for the laser-exposed and non-exposed areas. A pulsed laser with a 5 MHz repetition rate was used as the photoluminescence excitation source. The photoluminescence decay was approximated by a biexponential function. In the laser-exposed area, the amplitude of the first lifetime component (A1) is twice as large as before exposure and τ1 increases, while the amplitude of the second lifetime component (A2) decreases. These changes evidence a modification of the surface groups of the carbon dots. The detected effect can be used to create thermostable fluorescent marks whose physical size is bounded by the diffraction limit of the optics used for exposure (~200-300 nm), to improve the optical properties of carbon dots, or in the field of optical encryption. Acknowledgements: This work was supported by the Ministry of Science and Higher Education of the Russian Federation, goszadanie no. 2019-1080, and financially supported by the Government of the Russian Federation, Grant 08-08.
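A short sketch, with a synthetic decay trace, of fitting the biexponential photoluminescence decay I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2) discussed above; real data would come from the time-resolved measurement, and the parameter values are placeholders.

```python
# Illustrative sketch of a biexponential photoluminescence decay fit.
# The decay trace below is synthetic; in practice the TCSPC histogram from
# the time-resolved measurement would be fitted instead.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 50, 500)                        # time in ns
true = biexp(t, 0.7, 4.0, 0.3, 12.0)               # placeholder "true" decay
noisy = true + np.random.default_rng(3).normal(scale=0.01, size=t.size)

popt, _ = curve_fit(biexp, t, noisy, p0=[0.5, 3.0, 0.5, 10.0])
a1, tau1, a2, tau2 = popt
print(f"A1 = {a1:.2f}, tau1 = {tau1:.1f} ns, A2 = {a2:.2f}, tau2 = {tau2:.1f} ns")
# Comparing (A1, tau1, A2, tau2) between exposed and non-exposed areas is the
# kind of comparison reported in the abstract.
```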

Keywords: carbon dots, photoactivation, optical properties, photoluminescence and absorption spectra

Procedia PDF Downloads 148
513 Recognizing Juxtaposition Patterns of the Dwelling Units in Housing Cluster: The Case Study of Aghayan Complex: An Example of Rural Residential Development in Qajar Era in Iran

Authors: Outokesh Fatemeh, Jourabchi Keivan, Talebi Maryam, Nikbakht Fatemeh

Abstract:

Mayamei is a small town in Iran, located on the Silk Road between the cities of Shahrud and Sabzevar, with a history of approximately 1000 years. An alley named 'Aghayan' in this town comprises the residential buildings of a prominent family; a bathhouse, a mosque, a telegraph centre and a cistern are all associated with it. The architectural complex belongs to the Sadat Mousavi, one of Mayamei's major grandee and religious households, and since its construction the alley has been passed down from generation to generation within the family. The purpose of this study, conducted on Aghayan alley and its associated complex, was to elucidate the Iranian vernacular domestic architecture of the Qajar era in small towns and villages. We searched for large, medium, and small architectural patterns in the complex and tried to trace their evolution from the past to the present. The other objective of this project was to find a correlation between changes in the lifestyle of the alley's inhabitants and the form of the architecture. Our investigation methods included a literature review, with particular attention to historical travelogues, site visits, mapping, interviews with the elderly members of the Mousavi family (the owners), and examination of the available documents, especially the four-metre scroll-type testament written about 150 years ago. In analysing these data, an effort was made to discover (1) the patterns by which the different buildings are placed with respect to one another, (2) the relation between the function of the buildings and their relative location in the complex, as considered in the original design, and (3) possible changes in the functions of the buildings over time. Special attention was paid to the chronological changes in the residents' lifestyles, and we tried to take all of the residents' activities into account, including daily life activities, religious ceremonies, and so on. By combining these methods, we were able to obtain a picture of the buildings in their original state at construction, along with knowledge of the temporal evolution of the architecture. An interesting finding is that the Aghayan complex appears to be a large structure of horizontally arranged apartments placed next to one another; the houses built in this way are connected to their adjacent neighbours both through the bifacial rooms and across the roofs.

Keywords: Iran, Qajar period, vernacular domestic architecture, life style, residential complex

Procedia PDF Downloads 149
512 Glycosaminoglycan, a Cartilage Erosion Marker in Synovial Fluid of Osteoarthritis Patients Strongly Correlates with WOMAC Function Subscale

Authors: Priya Kulkarni, Soumya Koppikar, Narendrakumar Wagh, Dhanshri Ingle, Onkar Lande, Abhay Harsulkar

Abstract:

Cartilage is an extracellular matrix rich in aggrecan, which imparts great tensile strength, stiffness and resilience. Disruption of cartilage metabolism leading to progressive degeneration is a characteristic feature of osteoarthritis (OA). The process involves enzymatic depolymerisation of cartilage-specific proteoglycan, releasing free glycosaminoglycan (GAG). The GAG released into the synovial fluid (SF) of the knee joint serves as a direct measure of cartilage loss, although its use is limited by the invasive nature of SF sampling. The Western Ontario and McMaster Universities Arthritis Index (WOMAC) is widely used for assessing pain, stiffness and physical function in OA patients. The scale comprises three subscales, namely pain, stiffness and physical function, and is intended to measure the patient's perspective of disease severity as well as the efficacy of the prescribed treatment. Twenty SF samples obtained from OA patients were analysed for GAG using a DMMB-based assay, and the LK 1.0 vernacular version of the WOMAC questionnaire was administered. The results were evaluated for statistical significance using SAS University software (Edition 1.0). All OA patients showed higher GAG values than the control value of 78.4±30.1 µg/ml (obtained from our non-OA patients). The average WOMAC score was 51.3, while the pain, stiffness and function subscores were 9.7, 3.9 and 37.7, respectively. Interestingly, a strong statistical correlation was established between the WOMAC function subscale and GAG (p = 0.0102). This subscale is based on day-to-day activities such as using stairs, bending, walking, getting in and out of a car, and rising from bed. However, the pain and stiffness subscales did not correlate with any of the studied markers, consistent with the atypical inflammation in OA pathology. While knee pain showed poor correlation with GAG, it is also often noted that radiography is insensitive to degenerative cartilage changes, so OA remains undiagnosed for long periods, and the phase of active cartilage degradation remains elusive to both patient and clinician. Through analysis of a large number of OA patients, we have established a close association between Kellgren-Lawrence grades and increased cartilage loss. A direct correlation of WOMAC and radiographic progression of OA with various biomarkers has not been attempted so far. We found a good correlation between GAG levels in SF and the function subscale.
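A hedged sketch of how GAG concentrations might be derived from a DMMB-based assay via a chondroitin sulfate standard curve; the standard concentrations, absorbances and sample readings are invented for illustration and are not the study's measurements.

```python
# Hedged sketch of DMMB assay quantification: fit a linear standard curve to
# GAG (chondroitin sulfate) standards, then back-calculate sample GAG from
# absorbance. All values below are made-up placeholders.
import numpy as np

std_conc = np.array([0, 10, 20, 40, 60, 80])              # standards (ug/ml)
std_abs = np.array([0.02, 0.10, 0.19, 0.37, 0.55, 0.72])  # absorbance readings

slope, intercept = np.polyfit(std_conc, std_abs, 1)       # linear standard curve

def gag_concentration(absorbance: float, dilution: float = 1.0) -> float:
    """Back-calculate GAG (ug/ml) from a sample absorbance reading."""
    return (absorbance - intercept) / slope * dilution

sample_abs = [0.48, 0.65, 0.91]                            # hypothetical OA SF samples
print([round(gag_concentration(a, dilution=2.0), 1) for a in sample_abs])
```

The resulting concentrations are the kind of values that would then be correlated with the WOMAC function subscale, for example with a Pearson or Spearman test.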

Keywords: cartilage, glycosaminoglycan, synovial fluid, Western Ontario and McMaster Universities Arthritis Index

Procedia PDF Downloads 428