Search results for: computational study
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 51089

869 Introduction of a New and Efficient Nematicide, Abamectin by Gyah Corporation, Iran, for Root-knot Nematodes Management Planning Programs

Authors: Shiva Mardani, Mehdi Nasr-Esfahani, Majid Olia, Hamid Molahosseini, Hamed Hassanzadeh Khankahdani

Abstract:

Plant-parasitic nematodes cause serious plant diseases and substantially reduce food production in quality and quantity worldwide, with at least 17 nematode species in the three major genera Meloidogyne, Heterodera, and Pratylenchus. Root-knot nematodes (RKN), Meloidogyne spp., with the dominant species Meloidogyne javanica, are considered important plant pathogens of agricultural products globally. The host range includes vegetables, bedding plants, grasses, shrubs, numerous weeds, and trees, including forest species. In this study, chemical management of RKN, M. javanica, was carried out to investigate the efficacy of the Iranian Abamectin product [acaricide Abamectin (Vermectin® 2% EC, Gyah Corp., Iran)] versus the imported standard Abamectin available on the Iranian market [acaricide Abamectin (Vermectin® 1.8% EC, Cropstar Chemical Industry Co., Ltd.)], each applied at the rate of 8 L/ha, on tomato, Solanum lycopersicum L. (No. 29-41, Dutch company Siemens), as a test plant, with two controls (soil infested with RKN without any chemical pesticide treatment, and sterile soil without RKN or chemical pesticide treatment) in a greenhouse in Isfahan, Iran. The trials were repeated three times. The results indicated a highly significant reduction in RKN population and an increase in biomass parameters at the 1% level of significance. Relatively similar results were obtained in all three experiments conducted on tomato root-knot nematodes. The Gyah-Abamectin (51.6%) and imported Abamectin (40.4%) treatments had the highest to lowest effect on reducing the number of larvae in the soil compared to the infected controls, respectively. Gyah-Abamectin (44.1%) and the imported product (31.9%) had the highest effect on reducing the number of larvae and eggs in the root, with 31.4% and 24.1% reductions in the number of galls compared to the infected controls, respectively. In order of efficacy, the Gyah-Abamectin (47.4%) and imported Abamectin (31.1%) treatments had the highest effect on reducing the number of egg masses in the root compared to the infected controls, with no significant difference between the two. The highest reproduction of larvae and eggs in the root was observed in the infected controls (75.5%) and the lowest in the healthy controls (0.0%). The highest reduction in larval and egg reproduction in the roots compared to the infected controls was observed with Gyah-Abamectin and the lowest with the imported product: Gyah-Abamectin (37.6%) and imported Abamectin (26.9%) had the highest effect on reducing larval and egg reproduction in the root compared to the infected controls, respectively. Regarding growth parameters, the lowest stem length was observed with imported Abamectin (51.9 cm), not significantly different from Gyah-Abamectin and the healthy controls. The highest root fresh weight was recorded in the infected controls (19.81 g) and the lowest in the healthy ones (9.81 g); the highest root length in the healthy controls (22.4 cm), and the lowest in the infected controls and imported Abamectin (12.6 and 11.9 cm, respectively). In conclusion, the results of these three tests on tomato plants revealed that Gyah-Abamectin 2% is competitive with imported Abamectin 1.8% in the chemical management of root-knot nematodes and is a suitable alternative in this regard.

Keywords: Solanum lycopersicum, Vermectin, biomass, tomato

Procedia PDF Downloads 93
868 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity

Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido

Abstract:

Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of a medicinal plant is Ruta graveolens L. The herb of R. graveolens has long been famous for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by R. graveolens includes flavonoids (e.g., rutin, quercetin), coumarins (e.g., bergapten, umbelliferone), phenolic acids (e.g., rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the content of these substances is highly dependent on environmental factors like temperature, humidity, or soil acidity; therefore, standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g., coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start line or the solvent front. Ruta graveolens is therefore a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of a new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in the reversed-phase system in conventional horizontal chambers can be disrupted by an excessive flux of the mobile phase onto the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of the subsequent fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts its flow, producing unreliable and unrepeatable results and causing blurring and deformation of the substance zones. In the prototype device, the mobile phase is delivered onto the surface of the adsorbent layer at a controlled velocity, by a moving pipette driven by a 3D positioning machine. The rate of solvent delivery to the adsorbent layer is equal to or lower than in conventional development; therefore, chromatograms can be developed at the optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess of eluent solution on the surface of the adsorbent layer, so higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also makes it possible to perform convenient continuous gradient elution practically without the so-called gradient delay. In this study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise-gradient reversed-phase thin-layer chromatography. Fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed.

Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens

Procedia PDF Downloads 287
867 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using adjoint optimization, with the design variables defined within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by parameterizing at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered, and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy, avoiding the limited arithmetic precision and truncation error of finite differences. The parametric effectiveness is then computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which introduces new control points whilst preserving the initial shape. The position of the inserted knots is generally fixed a priori. However, this assumption can hinder the creation of better parameterizations that would produce more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions is a more efficient strategy for reaching a better optimum than the multilevel optimization where the positions of the knots are arbitrarily assumed.
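
The shape-preservation property that knot insertion provides can be illustrated with a minimal sketch (our illustration, not the authors' code; SciPy's B-spline routines stand in for the CAD kernel):

```python
# Minimal sketch: inserting a knot adds a control point (a new design
# variable) while leaving the curve unchanged, so the design space can
# grow without perturbing the current optimum shape.
import numpy as np
from scipy import interpolate

# Fit a cubic B-spline through illustrative profile points.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(np.pi * x)
tck = interpolate.splrep(x, y, k=3, s=0)

# Insert a knot at 0.5: one extra coefficient, same geometry.
tck_refined = interpolate.insert(0.5, tck)
print(len(tck[1]), "->", len(tck_refined[1]), "coefficients")

# The refined spline matches the original to machine accuracy.
xx = np.linspace(0.0, 1.0, 500)
assert np.allclose(interpolate.splev(xx, tck), interpolate.splev(xx, tck_refined))
```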

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 109
866 Impact of Insect-Feeding and Fire-Heating Wounding on Wood Properties of Lodgepole Pine

Authors: Estelle Arbellay, Lori D. Daniels, Shawn D. Mansfield, Alice S. Chang

Abstract:

Mountain pine beetle (MPB) outbreaks are currently devastating lodgepole pine forests in western North America, which are also widely disturbed by frequent wildfires. Both MPB and fire can leave scars on lodgepole pine trees, thereby diminishing their commercial value and possibly compromising their utilization in solid wood products. In order to fully exploit the affected resource, it is crucial to understand how wounding from these two disturbance agents impacts wood properties. Moreover, previous research on lodgepole pine has focused solely on sound wood and wood stained by the MPB-transmitted blue-stain fungi. By means of a quantitative multi-proxy approach, we tested the hypotheses that (i) wounding (of either MPB or fire origin) causes significant changes in wood properties of lodgepole pine and that (ii) MPB-induced wound effects differ from those induced by fire in type and magnitude. Pith-to-bark strips were extracted from 30 MPB scars and 30 fire scars. Strips were cut immediately adjacent to the wound margin and encompassed 12 rings of normal wood formed prior to wounding and 12 rings of wound wood formed after wounding. Wood properties evaluated within this 24-year window included ring width, relative wood density, cellulose crystallinity, fibre dimensions, and carbon and nitrogen concentrations. Methods used to measure these proxies at (sub-)annual resolution included X-ray densitometry, X-ray diffraction, fibre quality analysis, and elemental analysis. Results showed a substantial growth release in wound wood compared to normal wood, as both earlywood and latewood width increased over a decade following wounding. Wound wood also had a significantly different latewood density than normal wood 4 years after wounding: latewood density decreased in MPB scars, while the opposite was true in fire scars. By contrast, earlywood density presented only minor variations following wounding. Cellulose crystallinity decreased in wound wood compared to normal wood, being especially diminished in MPB scars the first year after wounding. Fibre dimensions also decreased following wounding. However, carbon and nitrogen concentrations did not substantially differ between wound wood and normal wood. Overall, insect-feeding and fire-heating wounding were shown to significantly alter most wood properties of lodgepole pine, as demonstrated by several morphological anomalies in wound wood. MPB and fire generally elicited similar anomalies, with the major exception of latewood density. In addition to providing quantitative criteria for differentiating between biotic (MPB) and abiotic (fire) disturbances, this study provides the wood industry with fundamental information on the physiological response of lodgepole pine to wounding in order to evaluate the utilization of scarred trees in solid wood products.

Keywords: elemental analysis, fibre quality analysis, lodgepole pine, wood properties, wounding, X-ray densitometry, X-ray diffraction

Procedia PDF Downloads 319
865 An Evaluation of a First Year Introductory Statistics Course at a University in Jamaica

Authors: Ayesha M. Facey

Abstract:

The evaluation sought to determine the factors associated with the high failure rate among students taking a first-year introductory statistics course. Utilizing Tyler's Objective-Based Model, the main objectives were: to assess the effectiveness of the lecturer's teaching strategies; to determine the proportion of students who attend lectures and tutorials frequently and the impact of infrequent attendance on performance; to determine how the assigned activities assisted students' understanding of the course content; to ascertain the issues faced by students in understanding the course material and obtain possible solutions to these challenges; and to determine whether the learning outcomes had been achieved, based on an assessment of the second in-course examination. A quantitative survey research strategy was employed, and the study population was students enrolled in semester one of the academic year 2015/2016. A convenience sampling approach yielded a sample of 98 students. Primary data were collected using self-administered questionnaires over a one-week period. Secondary data were obtained from the results of the second in-course examination. Data were entered and analyzed in SPSS version 22, and both univariate and bivariate analyses were conducted on the information obtained from the questionnaires. Univariate analyses described the sample through means, standard deviations, and percentages, while bivariate analyses used Spearman's rho correlation coefficient and chi-square tests. For the secondary data, an item analysis was performed to obtain the reliability of the examination questions, the difficulty index, and the discrimination index. The examination results also provided information on the weak areas of the students and highlighted the learning outcomes that were not achieved. Findings revealed that students were more likely to participate in lectures than tutorials and that attendance was high for both. There was a significant relationship between participation in lectures and examination performance. However, a high proportion of students had been absent from three or more tutorials as well as lectures. A higher proportion of students indicated that they only sometimes completed the assignments given in lectures, while they rarely completed tutorial worksheets. Students who were more likely to complete their assignments were significantly more likely to perform well on the examination. Additionally, students faced a number of challenges in understanding the course content, with probability, the binomial distribution, and the normal distribution being the most challenging topics; the item analysis also highlighted these topics as problem areas. Difficulties with mathematics, application, and analysis were the major challenges faced by students, and most students indicated that these could be alleviated if additional examples were worked in lectures and if they were given more time to solve questions. Analysis of the examination results showed that a number of learning outcomes were not achieved for several topics. Based on the findings, recommendations were made to adjust grade allocations, the delivery of lectures, and methods of assessment.
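
The item analysis mentioned above can be sketched as follows (the study does not state its exact formulas, so the common proportion-correct difficulty index and upper/lower-27% discrimination index are assumed, with made-up response data):

```python
# Minimal sketch (assumed standard definitions, hypothetical data):
# difficulty = proportion of students answering an item correctly;
# discrimination = difference in that proportion between the top-27%
# and bottom-27% scorers on the whole examination.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 98, 10
responses = rng.integers(0, 2, size=(n_students, n_items))  # 1 = correct

totals = responses.sum(axis=1)
order = np.argsort(totals)
k = round(0.27 * n_students)
lower, upper = responses[order[:k]], responses[order[-k:]]

difficulty = responses.mean(axis=0)                  # higher = easier item
discrimination = upper.mean(axis=0) - lower.mean(axis=0)

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {i:2d}: difficulty={p:.2f}  discrimination={d:+.2f}")
```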

Keywords: evaluation, item analysis, Tyler’s objective based model, university statistics

Procedia PDF Downloads 189
864 Development of Three-Dimensional Bio-Reactor Using Magnetic Field Stimulation to Enhance PC12 Cell Axonal Extension

Authors: Eiji Nakamachi, Ryota Sakiyama, Koji Yamamoto, Yusuke Morita, Hidetoshi Sakamoto

Abstract:

The regeneration of central nerve networks injured by cerebrovascular accidents is difficult because of the poor regenerative capability of the central nervous system, composed of the brain and the spinal cord. Recently, new regeneration methods, such as transplantation of nerve cells and supply of nerve nutritional factors, have been proposed and examined. However, many problems remain, such as the canceration of engrafted cells, and an efficacious treatment for the central nervous system is strongly required. Blackman proposed an electromagnetic stimulation method to enhance axonal extension. In this study, we design and fabricate a new three-dimensional (3D) bio-reactor that can apply a uniform AC magnetic field to PC12 cells in the extracellular environment to enhance axonal extension and generate 3D nerve networks. Simultaneously, we measure the morphology of PC12 cell bodies, axons, and dendrites by multiphoton excitation fluorescence microscopy (MPM) and evaluate the effectiveness of the uniform AC magnetic stimulation in enhancing axonal extension. Firstly, we designed and fabricated the uniform AC magnetic field stimulation bio-reactor. For the AC magnetic stimulation system, we used laminated silicon steel sheets, which have high magnetic permeability, for the yoke structure of the 3D chamber. Next, we adopted a pole piece structure and installed identical coils on both sides of the yoke. We searched for the optimum pole piece structure using magnetic field finite element (FE) analyses and the response surface methodology, and confirmed by FE analysis that the optimum 3D chamber structure produced a uniform magnetic flux density in the PC12 cell culture area. We then fabricated the uniform AC magnetic field stimulation bio-reactor to the analytically determined specifications, such as the chamber size and electromagnetic conditions, and confirmed that measurements of the magnetic field in the chamber agreed well with the FE results. Secondly, we fabricated a dish that sits inside the uniform AC magnetic field stimulation bio-reactor. PC12 cells were disseminated in collagen gel and could be 3D-cultured in the dish. The collagen gel, a disk 6 mm in diameter and 3 mm in height, was set on a membrane filter located 4 mm above the bottom of the dish, and the disk was fully immersed in culture medium inside the dish. Finally, we evaluated the effectiveness of the uniform AC magnetic field stimulation in enhancing axonal extension. We confirmed an increase of 6.8 in the average axonal extension length of PC12 cells under the uniform AC magnetic field stimulation after 7 days of culture in our bio-reactor, and an increase of 24.7 in the maximum axonal extension length. Further, we confirmed an increase of 60 in the number of dendrites of PC12 cells under the uniform AC magnetic field stimulation. These results confirm the suitability of our uniform AC magnetic stimulation bio-reactor for axonal extension and nerve network generation.

Keywords: nerve regeneration, axonal extension, PC12 cell, magnetic field, three-dimensional bio-reactor

Procedia PDF Downloads 166
863 Determination of the Optimum Strike Price of an FX Option Call Spread Using USD/IDR Volatility and Garman–Kohlhagen Model Analysis

Authors: Bangkit Adhi Nugraha, Bambang Suripto

Abstract:

In September 2016, Bank Indonesia (BI) released regulation No.18/18/PBI/2016, which permits bank clients to use the USD/IDR FX option call spread. Basically, this product combines a client buying an FX call option (paying a premium) and selling an FX call option (receiving a premium) to protect against currency depreciation while capping the potential upside, at a low premium cost. BI classifies this product as a structured product, i.e., a combination of at least two financial instruments, either derivative or non-derivative. The call spread is the first structured product against IDR permitted by BI since 2009, in response to increased demand from Indonesian firms for FX hedging through derivatives to protect their foreign currency assets or liabilities against market risk. The share of hedging products in the Indonesian FX market increased from 35% in 2015 to 40% in 2016, the majority being swap products (FX forward, FX swap, cross-currency swap). A swap is priced off the interest rate differential of the currency pair. The cost of a swap is about 7% for USD/IDR, with one-year USD/IDR volatility at 13%; this cost level makes swaps seem expensive to hedging buyers. Because the call spread costs around 1.5-3%, cheaper than a swap, most Indonesian firms use offshore NDF USD/IDR FX call spreads, with an outstanding amount of around 10 billion USD. The cheaper cost of the call spread is its main advantage for hedging buyers. The problem arises because the BI regulation requires the call spread buyer to hedge dynamically. That means that if the call spread buyer chooses strike price 1 and strike price 2 and the USD/IDR exchange rate surpasses strike price 2, the buyer must buy another call spread with strike price 1' (strike price 1' = strike price 2) and strike price 2' (strike price 2' > strike price 1'). This can double the premium cost of the call spread, or more, and defeat the hedging buyer's purpose of finding the cheapest hedging cost. It is therefore crucial for the buyer to choose the optimum strike prices before entering into the transaction. To help hedging buyers find the optimum strike price and avoid expensive multiple premium costs, we examined ten years (2005-2015) of historical USD/IDR volatility data and compared them with the price movement of the USD/IDR call spread using the Garman–Kohlhagen model (a standard formula for FX option pricing). We used statistical tools to analyze data correlations, understand the nature of the call spread price movement over ten years, and determine the factors affecting price movement. We selected ranges of strike prices and tenors and calculated the probability of dynamic hedging occurring and its cost. We found the USD/IDR pair to be highly uncertain, making dynamic hedging riskier and more expensive. We validated this result using one year of data, which showed a small RMS error. The results of this study can be used to understand the nature of the FX call spread and to determine optimum strike prices for hedging plans.
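
For reference, a minimal sketch of Garman–Kohlhagen pricing for such a call spread follows (our illustration with hypothetical market numbers, not the authors' code or data):

```python
# Minimal sketch: Garman-Kohlhagen FX call price and the net premium of
# a call spread (long call at K1, short call at K2 > K1).
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gk_call(spot, strike, t, vol, r_dom, r_for):
    """FX call; r_dom = domestic (IDR) rate, r_for = foreign (USD) rate."""
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * exp(-r_for * t) * norm_cdf(d1) - strike * exp(-r_dom * t) * norm_cdf(d2)

def call_spread_premium(spot, k1, k2, t, vol, r_dom, r_for):
    # The short K2 leg caps the payoff and cheapens the hedge.
    return gk_call(spot, k1, t, vol, r_dom, r_for) - gk_call(spot, k2, t, vol, r_dom, r_for)

# Hypothetical inputs: spot 13000, strikes 13500/14500, 1 year, 13% vol.
premium = call_spread_premium(13000, 13500, 14500, 1.0, 0.13, 0.07, 0.01)
print(f"net premium: {premium:.0f} IDR ({premium / 13000:.2%} of spot)")
```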

Keywords: FX call spread USD/IDR, USD/IDR volatility statistical analysis, Garman–Kohlhagen Model on FX Option USD/IDR, Bank Indonesia Regulation no.18/18/PBI/2016

Procedia PDF Downloads 376
862 Decorative Plant Motifs in Traditional Art and Craft Practices: Pedagogical Perspectives

Authors: Geetanjali Sachdev

Abstract:

This paper explores the decorative uses of plant motifs and symbols in traditional Indian art and craft practices in order to assess their pedagogical significance within the context of plant study in higher education in art and design. It examines existing scholarship on decoration and plants in Indian art and craft practices. The impulse to elaborate upon an existing form or surface is an intrinsic part of many Indian traditional art and craft traditions where a deeply ingrained love for decoration exists. Indian craftsmen use an array of motifs and embellishments to adorn surfaces across a range of practices, and decoration is widely seen in textiles, jewellery, temple sculptures, vehicular art, architecture, and various other art, craft, and design traditions. Ornamentation in Indian cultural traditions has been attributed to religious and spiritual influences in the lives of India’s art and craft practitioners. Through adornment, surfaces and objects were ritually transformed to function both spiritually and physically. Decorative formations facilitate spiritual development and attune our minds to concepts that support contemplation. Within practices of ornamentation and adornment, there is extensive use of botanical motifs as Indian art and craft practitioners have historically been drawn towards nature as a source of inspiration. This is due to the centrality of agriculture in the lives of Indian people as well as in religion, where plants play a key role in religious rituals and festivals. Plant representations thus abound in two-dimensional and three-dimensional surface designs and patterns where the motifs range from being realistic, highly stylized, and curvilinear forms to geometric and abstract symbols. Existing scholarship reveals that these botanical embellishments reference a wide range of plants that include native and non-indigenous plants, as well as imaginary and mythical plants. Structural components of plant anatomy, such as leaves, stems, branches and buds, and flowers, are part of the repertoire of design motifs used, as are plant forms indicating different stages of growth, such as flowering buds and flowers in full bloom. Symmetry is a characteristic feature, and within the decorative register of various practices, plants are part of border zones and bands, connecting corners and all-over patterns, used as singular motifs and floral sprays on panels, and as elements within ornamental scenes. The results of the research indicate that decoration as a mode of inquiry into plants can serve as a platform to learn about local and global biodiversity and plant anatomy and develop artistic modes of thinking symbolically, metaphorically, imaginatively, and relationally about the plant world. The conclusion is drawn that engaging with ornamental modes of plant representation in traditional Indian art and craft practices is pedagogically significant for two reasons. Decoration as a mode of engagement cultivates both botanical and artistic understandings of plants. It also links learners with the indigenous art and craft traditions of their own culture.

Keywords: art and design pedagogy, decoration, plant motifs, traditional art and craft

Procedia PDF Downloads 84
861 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking

Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz

Abstract:

Anatomy classes are crucial to the general education of medical students, yet learning anatomy is challenging and requires memorizing thousands of structures. In traditional teaching, learning materials are still based on books, anatomy mannequins, or videos, and as a result many important structures are forgotten after several years. More interactive teaching methods, such as virtual reality, augmented reality, gamification, and motion sensors, are becoming popular because they ease learning and help retain the material for longer. In this study, we designed a virtual-reality-based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the level of teaching and learning optimization. The head is one of the most complicated parts of human anatomy, with thousands of tiny, unique structures, which makes head anatomy one of the most difficult subjects to understand during class sessions. Therefore, we developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to integrate the power of VR and gamification. The project was developed in the Unity game engine with an HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and marking it in the 3D world. 3D model marking requires access to each unique structure in the listed subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, so we worked on dividing such parts into subparts, which is very time-consuming. To subdivide monolithic parts, one must use an external modeling tool; however, such tools generally come with steep learning curves, and seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction; however, outer colliders that cover inner trigger colliders overlap, and these colliders repel each other. The third option was raycasting; however, raycasting is inherently view-based, and as the model rotates, the view direction changes very frequently and directional computations become even harder. This is why we finally settled on the local coordinate system. Taking the pivot point of the model (the back of the nose) into consideration, each sub-structure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to a world position and checking its relation to the corresponding structure's local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
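
The local-coordinate lookup described above can be sketched as follows (a Python illustration of the idea only; the actual implementation is in Unity/C#, and the landmark coordinates below are hypothetical):

```python
# Minimal sketch: each structure stores one point in the model's local
# frame (pivot at the back of the nose), so picking stays valid under
# any rotation or translation of the model in the world.
import numpy as np

LANDMARKS = {  # pivot-relative local coordinates (hypothetical values)
    "zygomatic_bone_L": np.array([-0.06, 0.02, 0.05]),
    "zygomatic_bone_R": np.array([0.06, 0.02, 0.05]),
    "frontal_bone": np.array([0.0, 0.09, 0.04]),
}

def world_to_local(p_world, rotation, position):
    """Invert the model transform: local = R^T (world - t)."""
    return rotation.T @ (np.asarray(p_world) - np.asarray(position))

def pick_structure(p_world, rotation, position, tol=0.01):
    p_local = world_to_local(p_world, rotation, position)
    name, dist = min(((n, np.linalg.norm(p_local - q)) for n, q in LANDMARKS.items()),
                     key=lambda item: item[1])
    return name if dist <= tol else None

# Example: the model yawed 90 degrees and translated; a click lands
# exactly on the frontal bone landmark in world space.
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
t = np.array([1.0, 2.0, 0.0])
click = R @ LANDMARKS["frontal_bone"] + t
print(pick_structure(click, R, t))  # -> frontal_bone
```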

Keywords: anatomy, e-learning, virtual reality, 3D model marking

Procedia PDF Downloads 99
860 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods require exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, which is an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer's past experience; the prior model is then updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among them, one of particular interest for this study is Case-Based Reasoning (CBR). In this application, cases are existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case is then composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modeling of variables because: 1. engineers already estimate material properties based on experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR is a technique that may help achieve this.
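
The Bayesian updating step that each stored case encapsulates can be illustrated with a minimal conjugate-update sketch (all values are invented for illustration and are not from the paper):

```python
# Minimal sketch: normal prior for a material strength updated with test
# results, assuming known measurement variance (normal-normal conjugacy).
import numpy as np

mu0, sigma0 = 30.0, 5.0   # prior mean and std [MPa], from design documents
meas_sigma = 3.0          # assumed std of a single test result [MPa]

tests = np.array([27.5, 31.2, 29.0, 28.4])  # hypothetical core test results

n = len(tests)
post_var = 1.0 / (1.0 / sigma0**2 + n / meas_sigma**2)
post_mu = post_var * (mu0 / sigma0**2 + tests.sum() / meas_sigma**2)

print(f"posterior: mean = {post_mu:.2f} MPa, std = {np.sqrt(post_var):.2f} MPa")
```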

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 336
859 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change

Authors: Damian Islas

Abstract:

Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective or not. Question (I) arises from the doubt that, since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that, since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information on how their objects are in themselves. One way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of the human mind. Semantically, scientific realism assumes that theoretical claims about reality have truth values and thus should be construed literally. Epistemologically, scientific realism holds that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded rather far beyond the realism versus antirealism debate. Structural realism represents a middle-ground position between the two, according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot be known. There are two possible versions of SR: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR). On ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that when it comes to the unobservables theorized about within fundamental physics, relations exist but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory's power to make accurate predictions. But a theory's power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply from the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various and subtly different versions of the ESR and OSR stances, which show a sort of progression, through eliminativist OSR to moderate OSR, of gradual increase in the ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to asserting that these relations between unobserved entities exist.

Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism

Procedia PDF Downloads 117
858 Meeting the Health Needs of Adolescents and Young Adults: Developing and Evaluating an Electronic Questionnaire and Health Report Form for the Health Assessment at Youth Health Clinics – A Mixed Methods Project

Authors: P. V. Lostelius, M. Mattebo, E. Thors Adolfsson, A. Söderlund, Å. Revenäs

Abstract:

Adolescents are vulnerable in healthcare settings. Early detection of poor health in young people is important to support a good quality of life and adult social functioning. Youth Health Clinics (YHCs) in Sweden provide healthcare for young people aged 13-25 years. Using an overall mixed methods approach, the project's main objective was to develop and evaluate an electronic health system, including a health questionnaire, a case report form, and an evaluation questionnaire, to assess young people's health risks at an early stage and to increase health and quality of life. In total, 72 young people aged 16-23 years, eleven healthcare professionals, and eight researchers participated in the three project studies. Interviews with fifteen young people showed that an electronic health questionnaire should include questions about physical, mental, and sexual health and about social support, and specifically about self-harm and suicide risk. The young people said that the questionnaire should be appealing, based on young people's needs, and user-friendly. It was important that young people felt safe when responding to the questions, both physically and electronically. They also found that it had the potential to support the face-to-face meeting between young people and healthcare professionals. The researchers developed the electronic health report system through structured development of the electronic health questionnaire and construction of a case report form to present the results of the health questions, along with an electronic evaluation questionnaire; an information technology company finalized the development by digitalizing the system. Four young people, three healthcare professionals, and seven researchers evaluated usability through interviews and a usability questionnaire. The electronic health questionnaire was found usable for YHCs but needed some clarifications. Essentially, the system succeeded in capturing the overall health of young people; it should be able to keep the interest of young people and has the potential to contribute to health assessment planning and to young people's self-reflection when sharing vulnerable feelings with healthcare professionals. Ahead of effect studies, a feasibility study was performed, collecting electronic questionnaire data from 54 young people and interview data from eight healthcare professionals, to assess the feasibility of the electronic evaluation questionnaire, the case report form, and the planned recruitment method. Merging the results, the research group found the evaluation questionnaire and the health report feasible for future research. However, the COVID-19 pandemic, commitment challenges, and drop-outs affected the recruitment of young people. Also, some healthcare professionals felt insecure about using computers and electronic devices and worried that their workload would increase. This project contributes knowledge about the development and use of electronic health tools for young people. Before implementation, clinical routines for using the health report system need to be considered.

Keywords: adolescent health, developmental studies, electronic health questionnaire, mixed methods research

Procedia PDF Downloads 106
857 The Professionalization of Teachers in the Context of the Development of a Future-Oriented Technical and Vocational Education and Training System in Egypt

Authors: Sherin Ahmed El-Badry Sadek

Abstract:

This research scientifically examines what contribution the professionalization of teachers can make to the development of a future-oriented vocational education and training system in Egypt. To this end, a needs assessment of the Egyptian vocational training system, with its central actors and prevailing structures, forms the foundation of the study, theoretically underpinned by the attempt to resolve, to some extent, the tension between Luhmann's systems theory approach and the actor-centered theory of professional teacher competence. The vocational education system, in particular, must be adaptable and flexible due to rapidly changing qualification requirements. In view of the pace of technological progress and the associated market changes, vocational training is no longer to be understood only as an educational tool aimed at those who achieve poorer academic performance or are not motivated to take up a degree. Rather, it is to be understood as a cornerstone of the development of society, and international experience shows that it is the core of lifelong learning. But to what extent have education systems been able to react to these changes in their political, social, and technological environments? And how effective and sustainable are these changes actually? The vocational training system, in particular, has a strong impact on other social systems, which is why the parameters with the greatest leverage must be identified and adapted. Even if systems and structures are highly relevant, teachers must not hide behind them; instead, they must strive to develop further and to learn constantly. Despite numerous initiatives and programs to reform vocational training in Egypt, including the EU-funded Technical and Vocational Education and Training (TVET) reform, phases I and II, the fit of skilled workers to the needs of the labor market is still insufficient. Surveys show that the majority of employers are very dissatisfied with the graduates that the vocational training system produces. The data were collected through guideline-based interviews with experts from the education system and relevant neighboring systems, which allowed the reconstruction of central in-depth structures, as well as patterns of action and interpretation, which were subsequently fed into a matrix of recommendations for action. These recommendations are addressed to different decision-makers and stakeholders and are intended to serve as an impetus for the sustainable improvement of the Egyptian vocational training system. The research findings show that education, and vocational training in particular, is a political field characterized by a high degree of complexity and embedded in a barely manageable, highly branched landscape of structures and actors. At the same time, the vocational training system is not only determined by endogenous factors but is also increasingly shaped by the dynamics of its environment and the neighboring social subsystems, with a mutual dependency relationship becoming apparent. These interactions must be taken into account in all decisions, even if the prioritization of measures, and thus a clear sequence and process orientation, is of great urgency.

Keywords: competence orientation, educational policies, education systems, expert interviews, globalization, organizational development, professionalization, systems theory, teacher training, TVET system, vocational training

Procedia PDF Downloads 150
856 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?

Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire

Abstract:

The starting point of this research is a contemporary capitalist paradox: there is a real scarcity of long-term investment despite the boom in potential long-term investors. This gap represents a major challenge: there are important needs for long-term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, and information and communication networks. Moreover, the recent financial and sovereign debt crises, which respectively reduced the ability of banking intermediaries and governments to provide long-term financing, raise the questions of which actors are able to provide long-term financing, by what methods, and through which forms of intermediation. The issue of long-term financing is deemed very important by the EU Commission, which issued a 2013 Green Paper (GP) on the long-term financing of the EU economy. Among other topics, the paper discusses the impact of recent regulatory reforms on long-term investment, both in terms of accounting (in particular, fair value) and of prudential standards for banks. For banks, prudential and accounting standards are crucial. Fair value is well adapted to the trading book in a short-term view, but this method hardly suits a medium- and long-term portfolio. Banks' ability to finance the economy and long-term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to prudential standards, as the regulatory requirements of Basel III use accounting figures, with prudential filters, to define capital needs and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can constrain long-term investing. The balance between financial stability and the need to stimulate long-term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be 'appropriately calibrated' and 'progressively implemented' so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to question to what extent the main regulatory requirements incite or constrain banks to finance long-term projects. To that end, we study the 292 responses received by the EU Commission during the public consultation. We analyze these contributions, focusing on questions related to fair value accounting and prudential norms, and conduct a two-stage content analysis of the responses: first a qualitative coding to identify the respondents' arguments, and subsequently a quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the positions that a large panel of European stakeholders hold on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. The analysis also allows us to identify a short-term bias in banking regulation.

Keywords: Basel III, fair value, securitization, long-term investment, banks, insurers

Procedia PDF Downloads 289
855 Artificial Cells Capable of Communication by Using Polymer Hydrogel

Authors: Qi Liu, Jiqin Yao, Xiaohu Zhou, Bo Zheng

Abstract:

The first artificial cell was produced by Thomas Chang in the 1950s when he was trying to mimic red blood cells. Since then, many different types of artificial cells have been constructed through one of two approaches: a so-called bottom-up approach, which aims to create a cell from scratch, and a top-down approach, in which genes are sequentially knocked out from organisms until only the minimal genome required for sustaining life remains. In this project, a bottom-up approach was used to build a new cell-free expression system that mimics an artificial cell capable of protein expression and of communicating with other cells. Artificial cells constructed from the bottom-up approach are usually lipid vesicles, polymersomes, hydrogels, or aqueous droplets containing the nucleic acids and the transcription-translation machinery. However, lipid-vesicle-based artificial cells capable of communication present several issues for cell communication research: (1) lipid vesicles normally lose important functions, such as protein expression, within a few hours; (2) the lipid membrane allows the permeation of only small molecules and limits the types of molecules that can be sensed and released to the surrounding environment for chemical communication; (3) lipid vesicles are prone to rupture due to imbalances in osmotic pressure. To address these issues, hydrogel-based artificial cells were constructed in this work. To construct the artificial cell, a polyacrylamide hydrogel was functionalized with an Acrylate PEG Succinimidyl Carboxymethyl Ester (ACLT-PEG2000-SCM) moiety on the polymer backbone. Proteinaceous factors can then be immobilized on the polymer backbone by the reaction between the primary amines of proteins and the N-hydroxysuccinimide esters (NHS esters) of ACLT-PEG2000-SCM; the plasmid template and ribosomes were encapsulated inside the hydrogel particles. Because the artificial cell can continuously express protein given a supply of nutrients and energy, artificial cell-artificial cell and artificial cell-natural cell communication could be achieved by combining the artificial cell vector with designed plasmids. The plasmids were designed with reference to the quorum sensing (QS) system of bacteria, which relies largely on cognate acyl-homoserine lactone (AHL)/transcription pairs. In one communication pair, the 'sender' is the artificial or natural cell that produces the AHL signal molecule by synthesizing the corresponding signal synthase, which catalyzes the conversion of S-adenosyl-L-methionine (SAM) into AHL, while the 'receiver' is the artificial or natural cell that senses the quorum sensing signaling molecule from the sender and in turn expresses the gene of interest. In the experiment, GFP was first immobilized inside the hydrogel particle to prove that the functionalized hydrogel particles could be used for protein binding. After that, successful communication between artificial cells, and between artificial and natural cells, was demonstrated; in each case the signal could be observed by recording the increase in fluorescence. The hydrogel-based artificial cell designed in this work can help in studying complex communication systems in bacteria, and it can be further developed for therapeutic applications.
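
The sender/receiver logic can be summarized in a toy kinetic model (our illustration only; every rate constant below is hypothetical and not taken from the paper):

```python
# Toy sketch: a "sender" produces AHL at a constant rate; a "receiver"
# expresses GFP under a Hill-type response to AHL, mimicking the
# quorum-sensing communication pair described above.
import numpy as np
from scipy.integrate import solve_ivp

K_SYN, K_DEG = 0.5, 0.1    # AHL synthesis [uM/h] and degradation [1/h]
K_HALF, N_HILL = 1.0, 2.0  # half-maximal AHL level [uM], Hill coefficient
K_GFP, D_GFP = 2.0, 0.05   # max GFP production [a.u./h], dilution [1/h]

def circuit(t, y):
    ahl, gfp = y
    d_ahl = K_SYN - K_DEG * ahl
    d_gfp = K_GFP * ahl**N_HILL / (K_HALF**N_HILL + ahl**N_HILL) - D_GFP * gfp
    return [d_ahl, d_gfp]

sol = solve_ivp(circuit, (0.0, 48.0), [0.0, 0.0], dense_output=True)
for ti in np.linspace(0.0, 48.0, 7):
    ahl, gfp = sol.sol(ti)
    print(f"t={ti:5.1f} h  AHL={ahl:5.2f} uM  GFP={gfp:6.2f} a.u.")
```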

Keywords: artificial cell, cell-free system, gene circuit, synthetic biology

Procedia PDF Downloads 150
854 The Artificial Intelligence-Driven Social Work

Authors: Avi Shrivastava

Abstract:

Our world continues to grapple with many social issues. Economic growth and scientific advancement have not eradicated poverty, homelessness, discrimination and bias, gender inequality, health issues, mental illness, addiction, and other social problems. So how do we improve the human condition in a world driven by advanced technology? The answer is simple: we will have to leverage technology to address some of the most important social challenges of the day. AI, or artificial intelligence, has emerged as a critical tool in the battle against issues that deprive marginalized and disadvantaged groups of the benefits that society offers. Social work professionals can transform lives by harnessing it. The lack of reliable data is one of the reasons why many social work projects fail. Social work professionals continue to rely on expensive and time-consuming primary data collection methods, such as observation, surveys, questionnaires, and interviews, instead of tapping into AI-based technology to generate useful, real-time data and the necessary insights. By leveraging AI's data-mining ability, we can gain a deeper understanding of how to solve complex social problems and change people's lives; we can do the right work for the right people at the right time. For example, AI can enable social work professionals to focus their humanitarian efforts on some of the world's poorest regions, where there is extreme poverty. An interdisciplinary team of Stanford scientists (Marshall Burke, Stefano Ermon, David Lobell, Michael Xie, and Neal Jean) used AI to spot global poverty zones; identifying such zones is a key step in the fight against poverty. The scientists combined daytime and nighttime satellite imagery with machine learning algorithms to predict poverty in Nigeria, Uganda, Tanzania, Rwanda, and Malawi. In an article published by Stanford News, 'Stanford researchers use dark of night and machine learning', Ermon explained that they provided the machine-learning system, an application of AI, with the high-resolution satellite images and asked it to predict poverty in the African region: 'The system essentially learned how to solve the problem by comparing those two sets of images [daytime and nighttime].' This is one example of how AI can be used by social work professionals to reach the regions that need their aid the most. AI can also help identify sources of inequality and conflict, and thereby reduce inequalities, according to a 2020 study in Nature, 'The role of artificial intelligence in achieving the Sustainable Development Goals'. The study also notes that AI can help achieve 79 percent of the United Nations (UN) Sustainable Development Goals (SDGs). AI is impacting our everyday lives in multiple amazing ways, yet some people do not know much about it, and those unfamiliar with the technology may be reluctant to use it to solve social issues. So, before we talk more about the use of AI to accomplish social work objectives, let us put the spotlight on how AI and social work can complement each other.

Keywords: social work, artificial intelligence, AI based social work, machine learning, technology

Procedia PDF Downloads 101
853 Influence of Cryo-Grinding on Particle Size Distribution of Proso Millet Bran Fraction

Authors: Maja Benkovic, Dubravka Novotni, Bojana Voucko, Duska Curic, Damir Jezek, Nikolina Cukelj

Abstract:

Cryo-grinding is an ultra-fine grinding method used in the pharmaceutical industry, in the production of herbs and spices, and in the production and handling of cereals, owing to its ability to produce powders with small particle sizes that maintain a favorable bioactive profile. The aim of this study was to determine the particle size distributions of the proso millet (Panicum miliaceum) bran fraction ground at cryogenic temperature (using liquid nitrogen (LN₂) cooling, T = -196 °C) in comparison to non-cooled grinding. Proso millet bran is primarily used as animal feed but has potential food applications, either as a substrate for the extraction of bioactive compounds or as a raw material in the bakery industry; for both applications, finer bran particle sizes could be beneficial. Thus, millet bran was ground for 2, 4, 8, and 12 minutes in a ball mill (CryoMill, Retsch GmbH, Haan, Germany) under three grinding modes: (I) without cooling, (II) at cryogenic temperature, and (III) at cryogenic temperature with 1 minute of intermediate cryo-cooling after every 2 minutes of grinding, as is usually applied when samples require longer grinding times. The sample was placed in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm). The oscillation frequency in all three modes was 30 Hz. Particle size distributions of the bran were determined by laser diffraction (Mastersizer 2000) using the Scirocco 2000 dry dispersion unit (Malvern Instruments, Malvern, UK). Three main effects of the grinding set-up were visible in the results. Firstly, grinding time in all three modes had a significant effect on all particle size parameters: d(0.1), d(0.5), d(0.9), D[3,2], D[4,3], span, and specific surface area. Longer grinding times resulted in lower values of these parameters; e.g., the average d(0.5) of the sample (229.57±1.46 µm) dropped to 51.29±1.28 µm after 2 minutes of grinding without LN₂, and further to 43.00±1.33 µm after 4 minutes of grinding without LN₂. The only exception was the sample ground for 12 minutes without cooling, where an increase in particle diameters occurred (d(0.5)=62.85±2.20 µm), probably due to particles adhering to one another and forming larger clusters. Secondly, samples ground with LN₂ cooling exhibited smaller diameters than non-cooled samples. For example, after 8 minutes of non-cooled grinding, d(0.5)=46.97±1.05 µm was achieved, while LN₂ cooling enabled the collection of particles with average sizes of d(0.5)=18.57±0.18 µm. Thirdly, the application of the intermediate cryo-cooling step resulted in particle diameters (d(0.5)=15.83±0.36 µm after 12 minutes of grinding) similar to cryo-milling without this step (d(0.5)=16.33±2.09 µm after 12 minutes of grinding). This indicates that intermediate cooling is not necessary for the current application, which consequently reduces the consumption of LN₂. These results point to the potential benefits of grinding millet bran at cryogenic temperatures. Further research will show whether the smaller particle sizes achieved, compared to non-cooled grinding, result in increased bioavailability of bioactive compounds, as well as improved protein digestibility and solubility of the dietary fibers of the proso millet bran fraction.
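
For readers unfamiliar with the distribution metrics reported here, a minimal sketch of how they are obtained from a binned size distribution follows (toy numbers, not the measured data):

```python
# Minimal sketch: d(0.1)/d(0.5)/d(0.9), span, and the moment means
# D[3,2] (Sauter) and D[4,3] (De Brouckere) from binned particle counts.
import numpy as np

d = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])  # bin mid-sizes [um]
n = np.array([50, 120, 300, 220, 80, 10], float)     # counts (hypothetical)

D32 = (n * d**3).sum() / (n * d**2).sum()  # Sauter mean diameter
D43 = (n * d**4).sum() / (n * d**3).sum()  # De Brouckere mean diameter

# Volume-weighted cumulative distribution, as laser diffraction reports it.
vol = n * d**3
cum = np.cumsum(vol) / vol.sum()
d10, d50, d90 = np.interp([0.1, 0.5, 0.9], cum, d)
span = (d90 - d10) / d50

print(f"D[3,2]={D32:.1f} um  D[4,3]={D43:.1f} um")
print(f"d(0.1)={d10:.1f}  d(0.5)={d50:.1f}  d(0.9)={d90:.1f}  span={span:.2f}")
```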

Keywords: ball mill, cryo-milling, particle size distribution, proso millet (Panicum miliaceum) bran

Procedia PDF Downloads 144
852 Expressing Locality in Learning English: A Study of English Textbooks for Junior High School Year VII-IX in Indonesia Context

Authors: Agnes Siwi Purwaning Tyas, Dewi Cahya Ambarwati

Abstract:

This paper concerns language learning as both habit formation and a constructive process, one that can also exercise an oppressive power in constructing learners. As a locus of discussion, the investigation problematizes the transfer of the English language to Indonesian junior high school students through the English textbooks 'Real Time: An Interactive English Course for Junior High School Students Year VII-IX'. English has long functioned as a global language, and non-native speakers are expected to master it if they wish to become internationally recognized individuals. Generally, English teachers teach the language in accordance with the nature of language learning, in which they are trained and expected to teach within the culture of the target language. This opens the door to a soft cultural penetration of foreign ideology through language transmission. In the context of Indonesia, learning English as an international language is considered dilemmatic. Most English textbooks in Indonesia incorporate cultural elements of the target language, which to some extent may challenge sensitivity towards local cultural values. On the other hand, local teachers demand more English textbooks for junior high school students that can facilitate the cultural dissemination of both local and global values and promote learners' cultural traits of both cultures, to avoid misunderstanding and confusion. Such textbooks also aim to support language learning as a bidirectional process rather than an instrument of oppression. However, sensitizing and localizing this foreign language is not sufficient to restrain its soft infiltration. In due course, domination persists, establishing English as an authoritative language and positioning the locality as 'the other'. This critical premise has led to a discursive analysis of how the cultural elements of the target language are presented in the textbooks and whether the local characteristics of Indonesia are able to gradually reduce the degree of the foreign oppressive ideology. The three textbooks studied were written by a non-Indonesian author, edited by two Indonesian editors, and published by a local commercial publisher, PT Erlangga. The analytical elaboration examines the cultural characteristics in the form of names, terminologies, places, objects and imageries (not the linguistic aspect) of both cultural domains, English and Indonesian. Comparisons and categorizations were made to identify the cultural traits of each language and scrutinize the contextual analysis. In the analysis, 128 foreign elements and 27 local elements were found in the textbook for grade VII, 132 foreign elements and 23 local elements in the textbook for grade VIII, and 144 foreign elements and 35 local elements in the grade IX textbook, demonstrating the unequal distribution of both cultures. Even though the ideal pedagogical approach to English learning moves in a different direction by means of inserting local elements, the learners are continuously exposed to the culture of the target language and pressed to internalize concepts and values under its influence, which tends to marginalize their native culture.

Keywords: bidirectional process, English, local culture, oppression

Procedia PDF Downloads 265
851 Accuracy of Fitbit Charge 4 for Measuring Heart Rate in Parkinson’s Patients During Intense Exercise

Authors: Giulia Colonna, Jocelyn Hoye, Bart de Laat, Gelsina Stanley, Jose Key, Alaaddin Ibrahimy, Sule Tinaz, Evan D. Morris

Abstract:

Parkinson's disease (PD) is the second most common neurodegenerative disease and affects approximately 1% of the world's population. Increasing evidence suggests that aerobic physical exercise can be beneficial in mitigating both motor and non-motor symptoms of the disease. In a recent pilot study of the role of exercise in PD, we sought to confirm exercise intensity by monitoring heart rate (HR). For this purpose, we asked participants to wear a chest strap heart rate monitor (Polar Electro Oy, Kempele), but the device sometimes proved uncomfortable. Looking forward to larger clinical trials, it would be convenient to employ a more comfortable and user-friendly device. The Fitbit Charge 4 (Fitbit Inc), a wrist-worn heart rate monitor, is a potentially comfortable, user-friendly alternative. The Polar H10 has been used in large trials, and for our purposes we treated it as the gold standard for beat-to-beat period (R-R interval) assessment. Previous literature has shown that the Fitbit Charge 4 has accuracy comparable to the Polar H10 in healthy subjects. It has yet to be determined whether the Fitbit is as accurate as the Polar H10 in subjects with PD, or in clinical populations generally. Goal: To compare the Fitbit Charge 4 to the Polar H10 for monitoring HR in PD subjects engaging in an intensive exercise program. Methods: A total of 596 exercise sessions from 11 subjects (6 males) were recorded simultaneously by both devices. Subjects with early-stage PD (Hoehn & Yahr <=2) were enrolled in a 6-month exercise training program designed for PD patients. Subjects participated in 3 one-hour exercise sessions per week and wore both the Fitbit and the Polar H10 during each session. Sessions included rest, warm-up, intensive exercise, and cool-down periods. We calculated the bias of the Fitbit HR under rest (5 min) and intensive exercise (20 min) by comparing the mean HR during each period to the respective mean measured by the Polar (HRFitbit - HRPolar). We also measured the sensitivity and specificity of the Fitbit for detecting HRs that exceed the threshold for intensive exercise, defined as 70% of an individual's theoretical maximum HR. Different types of correlation between the two devices were investigated. Results: The mean bias was 1.68 bpm at rest and 6.29 bpm during high-intensity exercise, with an overestimation by the Fitbit in both conditions. The mean bias of the Fitbit across both rest and intensive exercise periods was 3.98 bpm. The sensitivity of the device in identifying high-intensity exercise sessions was 97.14%. The correlation between the two devices was non-linear, suggesting a tendency of the Fitbit to saturate at high HR values. Conclusion: The performance of the Fitbit Charge 4 is comparable to that of the Polar H10 for assessing exercise intensity in a cohort of PD subjects. The device should be considered a reasonable replacement for the more cumbersome chest strap technology in future similar studies of clinical populations.
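
The bias, sensitivity and specificity calculations described in the Methods can be sketched as follows (a minimal illustration with hypothetical per-session heart rates; the "220 minus age" maximum-HR formula is one common convention, not necessarily the one used in the study):

```python
import numpy as np

# Hypothetical per-session mean heart rates (bpm) from both devices.
hr_fitbit = np.array([72.0, 138.0, 145.0, 91.0, 150.0])
hr_polar  = np.array([70.0, 133.0, 141.0, 90.0, 142.0])
age = 62                                   # hypothetical subject age
hr_max = 220 - age                         # one common theoretical maximum HR
threshold = 0.70 * hr_max                  # intensive-exercise cutoff

bias = np.mean(hr_fitbit - hr_polar)       # HRFitbit - HRPolar

intense_true = hr_polar >= threshold       # Polar H10 as gold standard
intense_pred = hr_fitbit >= threshold
tp = np.sum(intense_pred & intense_true)   # true positives
fn = np.sum(~intense_pred & intense_true)  # missed intensive sessions
tn = np.sum(~intense_pred & ~intense_true)
fp = np.sum(intense_pred & ~intense_true)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"bias={bias:.2f} bpm  sensitivity={sensitivity:.2%}  "
      f"specificity={specificity:.2%}")
```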

Keywords: fitbit, heart rate measurements, parkinson’s disease, wrist-wearable devices

Procedia PDF Downloads 106
850 Business Strategy, Crisis and Digitalization

Authors: Flora Xu, Marta Fernandez Olmos

Abstract:

This article offers a critical assessment and comprehensive understanding of business strategy in the post-COVID-19 scenario. This study aims to elucidate how companies are responding to the unique challenges posed by the pandemic and how these measures are shaping the future of the business environment. The pandemic has exposed the fragility and inflexibility of the global supply chain, and procurement and production strategies should be reconsidered. Companies should increase the diversity of suppliers and the flexibility of the supply chain, and some are considering relocating production to local markets. This can increase local employment and reduce international transportation disruptions and customs issues. By shortening the distance between production and market, companies can respond more quickly to changes in demand and unforeseen events. The demand for remote work and online solutions will increase the adoption of digital technology and accelerate the digital transformation of many organizations. Marketing and communication strategies need to adapt to a constantly changing environment. Business resilience strategy was emphasized as a key component of the response to COVID-19. Companies are seeking to strengthen their risk management capabilities and develop business continuity plans to cope with future unexpected disruptions. The pandemic has reconfigured human resource practices and changed the way companies manage their employees. Remote work has become the norm, and companies focus on managing workers' health and well-being, as well as flexible work policies, to ensure continuity of operations and support for employees during crises. This change in human resource practice has a lasting impact on how companies approach talent and labor management in the post-COVID-19 world. The pandemic has prompted a significant review of business strategies as companies adapt to constantly changing environments and seek to ensure their sustainability and profitability in times of crisis. This strategic reassessment has led to product diversification, the exploration of international markets, and adaptation to the changing market. Companies have responded to the unprecedented challenges brought by COVID-19, which has promoted innovation efforts in key areas and sharpened the focus of today's business strategy on sustainability and corporate social responsibility. Formulating and implementing business strategies in uncertain times remains an important challenge: making quick and agile decisions in turbulent environments, managing risk, and adapting to constantly changing market conditions. COVID-19 highlights the importance of strategic planning and informed decision-making in a business environment characterized by uncertainty and complexity. In short, the pandemic has reconfigured the way companies handle business strategy and emphasized the necessity of preparing for future challenges in a business world marked by uncertainty and complexity.

Keywords: business strategy, crisis, digitalization, uncertainty

Procedia PDF Downloads 17
849 Academic Knowledge Transfer Units in the Western Balkans: Building Service Capacity and Shaping the Business Model

Authors: Andrea Bikfalvi, Josep Llach, Ferran Lazaro, Bojan Jovanovski

Abstract:

Due to the continuous need to foster university-business cooperation in both developed and developing countries, some higher education institutions face the challenge of designing, piloting, operating, and consolidating knowledge and technology transfer units. University-business cooperation is at different maturity stages worldwide: some higher education institutions excel in these practices, many others could be qualified as intermediate, and some are situated at the very beginning of their knowledge transfer adventure. These latter face the imminent necessity of formally creating a technology transfer unit and drawing up its roadmap. The complexity of this operation is due to the various aspects that need to be aligned and coordinated, including major changes in mission, vision, structure, priorities, and operations. Qualitative in approach, this study presents 5 case studies of higher education institutions located in the Western Balkans (2 in Albania, 2 in Bosnia and Herzegovina, 1 in Montenegro), all fully immersed in the entrepreneurial journey of creating their knowledge and technology transfer unit. The empirical evidence was developed in a pan-European project called KnowHub (reconnecting universities and enterprises to unleash regional innovation and entrepreneurial activity), which is being implemented in three countries and has resulted in at least 15 pilot cooperation agreements between academia and business. Based on a peer-mentoring approach involving the more experienced and mature technology transfer models of European partners located in Spain, Finland, and Austria, a series of initial lessons learned are already available. The findings show that each unit developed its own tailor-made approach to engage with internal and external stakeholders and to offer value to academic staff, students, and business partners. The technology underpinning KnowHub services and institutional commitment are found to be key success factors. Although specific strategies and plans differ, they are based on a general strategy jointly developed from common tools and methods of strategic planning and business modelling. The main output consists of good practices for designing, piloting, and initially operating units that aim to fully valorise the knowledge and expertise available in academia. Policymakers can also find valuable hints on key aspects considered vital for initial operations. The value of this contribution is its focus on the intersection of three perspectives (service orientation, organisational innovation, business model), since previous research has relied on a single topic or dual approaches, most frequently in the business context and less frequently in higher education.

Keywords: business model, capacity building, entrepreneurial education, knowledge transfer

Procedia PDF Downloads 139
848 Analytical, Numerical, and Experimental Research Approaches to Influence of Vibrations on Hydroelastic Processes in Centrifugal Pumps

Authors: Dinara F. Gaynutdinova, Vladimir Ya Modorsky, Nikolay A. Shevelev

Abstract:

The problem under research is that of unpredictable modes occurring in a two-stage centrifugal hydraulic pump as a result of hydraulic processes caused by vibrations of structural components. Numerical, analytical and experimental approaches are considered. A hypothesis was developed that the problem of unpredictable pressure decrease at the second stage of centrifugal pumps is caused by cavitation effects occurring under vibration. To date, the problem has been studied both experimentally and theoretically. The theoretical study was conducted numerically and analytically. Hydroelastic processes in the dynamic "liquid – deformed structure" system were numerically modelled and analysed. Using the ANSYS CFX engineering analysis package and the computing capacity of a supercomputer, the dependence of cavitation parameters on vibration parameters was established. A domain of influence of vibration amplitudes and frequencies on the concentration of cavitation bubbles was formulated. The obtained numerical solution was verified using the CFM program package developed at PNRPU. The package is based on a system of differential equations in hyperbolic and elliptic partial derivatives, solved by one of the finite-difference method variants, the particle-in-cell method, which defines the solution algorithm. The numerical solution was also verified analytically by calculating model problems with known analytical solutions: in-pipe piston movement and impact on a cantilever rod end face. An infrastructure consisting of an experimental installation for studying fast hydrodynamic processes and a supercomputer connected by a high-speed network was created to verify the obtained numerical solutions. Physical experiments included the measurement, recording, processing and analysis of data using a National Instruments signal measurement system and LabVIEW software. During the physical experiments, the end face of the model chamber oscillated and thus loaded the hydraulic volume. The loading frequency varied from 0 to 5 kHz. The length of the operating chamber varied from 0.4 to 1.0 m. Additional loads weighed from 2 to 10 kg. The liquid column varied from 0.4 to 1 m in height. The liquid pressure history was registered. The experiment showed the dependence of the forced system oscillation amplitude on the loading frequency at various values of the operating chamber's geometrical dimensions, the liquid column height and the structure weight. Maximum pressure oscillation amplitudes (in the basic variant) were observed at loading frequencies of approximately 1.5 kHz. These results match the analytical and numerical solutions in ANSYS and CFM.
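
For context on the ~1.5 kHz amplitude peak, the sketch below (our back-of-envelope illustration, not part of the study's ANSYS or CFM models) estimates the acoustic resonance frequencies of a water column for the chamber lengths used, assuming a column closed at one end and a nominal speed of sound in water:

```python
# Assumed nominal speed of sound in water, m/s.
C_WATER = 1480.0

def quarter_wave_modes(length_m, n_modes=3):
    """Resonances of a column closed at one end: f_n = (2n-1)*c/(4L)."""
    return [(2 * n - 1) * C_WATER / (4.0 * length_m)
            for n in range(1, n_modes + 1)]

for L in (0.4, 0.7, 1.0):                       # chamber lengths from the study
    freqs = ", ".join(f"{f:.0f} Hz" for f in quarter_wave_modes(L))
    print(f"L = {L:.1f} m: {freqs}")
```

Under these assumptions, the second mode of a mid-length (~0.7 m) column falls near 1.6 kHz, broadly consistent with the reported peak; this is only an order-of-magnitude check, not the study's model.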

Keywords: computing experiment, hydroelasticity, physical experiment, vibration

Procedia PDF Downloads 243
847 Gathering Space after Disaster: Understanding the Communicative and Collective Dimensions of Resilience through Field Research across Time in Hurricane Impacted Regions of the United States

Authors: Jack L. Harris, Marya L. Doerfel, Hyunsook Youn, Minkyung Kim, Kautuki Sunil Jariwala

Abstract:

Organizational resilience refers to the ability to sustain business or general work functioning despite wide-scale interruptions. We focus on organizations and businesses as pillars of their communities, and on how they attempt to sustain work when a natural disaster impacts their surrounding regions and economies. While it may be more common to think of resilience as a trait possessed by an organization, an emerging area of research recognizes that for organizations and businesses, resilience is a set of processes constituted through communication, social networks, and organizing. Indeed, five processes (robustness, rapidity, resourcefulness, redundancy, and external availability through social media) have been identified as critical to organizational resilience. These organizing mechanisms involve multi-level coordination, where individuals intersect with groups, organizations, and communities. Because such interactions are often networks of people and organizations coordinating material resources, information, and support, they necessarily require some way to coordinate despite being displaced. Little is known, however, about whether physical and digital spaces can substitute for one another. We are thus guided by the question: is digital space sufficient when disaster creates a scarcity of physical space? This study presents a cross-case comparison based on field research from four different regions of the United States that were impacted by Hurricanes Katrina (2005), Sandy (2012), Maria (2017), and Harvey (2017). These four cases are used to extend the science of resilience by examining multi-level processes enacted by individuals, communities, and organizations that together contribute to the resilience of disaster-struck organizations, businesses, and their communities. Using field research about organizations and businesses impacted by the four hurricanes, we coded data from interviews, participant observations, field notes, and document analysis drawn from New Orleans (post-Katrina), coastal New Jersey (post-Sandy), Houston, Texas (post-Harvey), and the lower keys of Florida (post-Maria). This paper identifies an additional organizing mechanism, networked gathering spaces, where citizens and organizations alike coordinate and facilitate information sharing, material resource distribution, and social support. Findings show that digital space alone is not a sufficient substitute for effectively sustaining organizational resilience during a disaster. Because the data are qualitative, we expand on this finding with specific ways in which organizations, and the people who lead them, worked around the problem of scarce space. We propose that gatherings after disaster are a sixth mechanism that contributes to organizational resilience.

Keywords: communication, coordination, disaster management, information and communication technologies, interorganizational relationships, resilience, work

Procedia PDF Downloads 171
846 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules

Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury

Abstract:

Tuning the photophysical properties of guest dyes through host-guest interactions involving macrocyclic hosts has been an attractive research area for the past few decades, as the resulting changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging and dye laser applications. Excited state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by some specific dyes, and it is quite amenable to tuning by the presence of different macrocyclic hosts. The present study explores the interesting effects of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of Chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. In the excited state, however, the excited N* form undergoes ESIPT along its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band due to the N form, but two emission bands due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye binds to SCXn hosts. In spite of its lower binding affinity, however, the inhibition is more profound with the SCX6 host than with the SCX4 host. For the CD-CZ system, the prototautomerization process is hindered by the presence of βCD but remains unaffected in the presence of γCD. The reduction of the prototautomerization of CZ by the SCXn and βCD hosts is unusual, because the T* form is less dipolar than N*; binding of CZ within relatively hydrophobic host cavities should therefore have enhanced the prototautomerization process. At the same time, given the similar chemical nature of the two CD hosts, their effects on the prototautomerization of CZ should also have been similar. The atypical effects of the studied hosts on the prototautomerization of CZ are suggested to arise from partial inclusion or external binding of CZ to the hosts. As a result, there is a strong possibility of intermolecular H-bonding between the CZ dye and the functional groups present at the portals of the SCXn and βCD hosts. Formation of these intermolecular H-bonds weakens the pre-existing intramolecular H-bonding network within the CZ molecule, and this consequently reduces the prototautomerization of the dye. Our results suggest that, rather than the binding affinity between dye and host, it is the orientation of CZ in the SCXn-CZ complexes and the binding stoichiometry in the CD-CZ complexes that play the predominant role in influencing the prototautomeric equilibrium of CZ. For the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained through geometry optimization of the complexes correlate nicely with the experimental results. Geometry optimization reveals the formation of βCD-CZ complexes with 1:1 stoichiometry and γCD-CZ complexes with 1:1, 1:2 and 2:2 stoichiometries, in good accordance with the effects observed for the βCD and γCD hosts on the ESIPT process of the CZ dye.
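
The kind of host-guest binding analysis behind such stoichiometry and affinity claims can be illustrated with a minimal sketch (ours, not the authors' procedure; concentrations and intensities are hypothetical) that fits a 1:1 binding constant K to fluorescence titration data:

```python
import numpy as np
from scipy.optimize import curve_fit

def intensity_1to1(host_conc, K, I_free, I_bound, dye_conc=2e-6):
    """Observed intensity for 1:1 binding at fixed total dye concentration,
    using the exact quadratic mass-balance solution for the bound fraction."""
    H, D = host_conc, dye_conc
    b = D + H + 1.0 / K
    bound = (b - np.sqrt(b**2 - 4.0 * D * H)) / (2.0 * D)
    return I_free + (I_bound - I_free) * bound

host = np.array([0, 1e-5, 3e-5, 1e-4, 3e-4, 1e-3])   # [host], M (hypothetical)
I_obs = np.array([100, 112, 130, 158, 178, 188.0])   # hypothetical intensities
(K, I_free, I_bound), _ = curve_fit(intensity_1to1, host, I_obs,
                                    p0=(1e4, 100.0, 200.0))
print(f"K = {K:.3g} M^-1")
```

Higher-order stoichiometries (1:2, 2:2), such as those reported for γCD-CZ, require correspondingly extended binding models rather than this single-site form.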

Keywords: intermolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies

Procedia PDF Downloads 119
845 Swedish–Nigerian Extrusion Research: Channel for Traditional Grain Value Addition

Authors: Kalep Filli, Sophia Wassén, Annika Krona, Mats Stading

Abstract:

The food security challenge posed by the growing population of Sub-Saharan Africa centers on its agricultural transformation, as about 70% of the population is directly involved in farming. Research input can create economic opportunities, reduce malnutrition and poverty, and generate faster, fairer growth. Africa discards $4 billion worth of grain annually due to pre- and post-harvest losses. Grains and tubers play a central role in the region's food supply, but their production has generally lagged behind because there has been no robust scientific input to meet the challenge. African grains are still chronically underutilized, to the detriment of the well-being of the people of Africa and elsewhere. The major reason for their underutilization is that they are under-researched. Any commitment by the scientific community to intervene needs creative solutions focused on innovative approaches that will support economic growth. To overcome this hurdle, co-creation activities and initiatives are necessary. An example of such initiatives has been launched by Modibbo Adama University of Technology, Yola, Nigeria and RISE (the Research Institutes of Sweden), Gothenburg, Sweden: an exchange of research expertise under the 'Traditional Grain Network' programme is in place, creating a channel for value addition to agricultural commodities in the region. Process technologies such as extrusion offer the possibility of creating products in the food and feed sectors with better storage stability, added value, lower transportation cost and new markets. The Swedish-Nigerian initiative has focused on the development of high-protein pasta. Dry microscopy of the pasta samples shows a continuous structural framework of the protein and starch matrix. The water absorption index (WAI) results showed that water was absorbed steadily and followed the master curve pattern, with WAI values ranging between 250 and 300%. In all aspects, the water absorption history was within a narrow range for all eight samples. The total cooking time for the eight samples ranged between 5 and 6 minutes, with dry sample diameters ranging between 1.26 and 1.35 mm. The water solubility index (WSI) ranged from 6.03 to 6.50%, a narrow range; cooking loss, which WSI measures, is considered one of the main parameters in the assessment of pasta quality. The protein contents of the samples ranged between 17.33 and 18.60%. The firmness of the cooked pasta ranged from 0.28 to 0.86 N; the results show that increasing the ratio of cowpea flour and the level of pregelatinized cowpea tends to increase the firmness of the pasta. The breaking strength, an index of the toughness of the dry pasta, ranged from 12.9 to 16.5 MPa.
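
The WAI and WSI figures quoted above follow the standard centrifugation-based definitions; a minimal sketch of the calculation (with hypothetical weights, assuming an Anderson-type protocol rather than the authors' exact method) is:

```python
def wai_wsi(dry_sample_g, sediment_gel_g, supernatant_solids_g):
    """Water absorption index (%) and water solubility index (%):
    gel retained per dry solid, and dissolved solids per dry solid."""
    wai = sediment_gel_g / dry_sample_g * 100.0
    wsi = supernatant_solids_g / dry_sample_g * 100.0
    return wai, wsi

# Hypothetical weights after hydration and centrifugation of a ground sample.
wai, wsi = wai_wsi(dry_sample_g=2.5, sediment_gel_g=7.0,
                   supernatant_solids_g=0.16)
print(f"WAI = {wai:.0f}%  WSI = {wsi:.2f}%")   # e.g. 280% and 6.40%
```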

Keywords: cowpea, extrusion, gluten free, high protein, pasta, sorghum

Procedia PDF Downloads 193
844 China Pakistan Economic Corridor: An Unfolding Fiasco in World Economy

Authors: Debarpita Pande

Abstract:

On 22 May 2013, Chinese Premier Li Keqiang, on his visit to Pakistan, tabled a proposal for connecting Kashgar in China's Xinjiang Uygur Autonomous Region with the south-western Pakistani seaport of Gwadar via the China Pakistan Economic Corridor (hereinafter referred to as CPEC). The project, popularly associated with 'One Belt One Road', will encompass a connectivity component including a 3000-kilometre road, railways and an oil pipeline from Kashgar to Gwadar port, along with an international airport and a deep sea port. Superficially, this may look like a 'game changer' for Pakistan and other countries of South Asia, but this article, using the doctrinal method of research, will unearth some serious flaws in it, which may change the entire economic system of this region, heavily affect the socio-economic conditions of South Asia, and further complicate the region's geopolitical situation, disturbing world economic stability. The paper opens with a logical analysis of the socio-economic issues arising out of this project, with an emphasis on its impact on the Pakistani and Indian economies due to Chinese dominance, serious tension in international relations, security issues, the arms race, and political and provincial concerns. The research paper further studies the impact of the huge burden of Chinese loans towards this project, as Pakistan already suffers from persistent debts in the face of declining foreign currency reserves; moreover, the sovereignty of Pakistan will be at stake, as the entire economy of the country risks being held hostage by China. The author compares this situation with the fallout from projects in Sri Lanka, Tajikistan, and several countries of Africa, all of which are now facing huge debt risks brought by Chinese investments. The entire economic balance will be muddled by the increase in Pakistan's demand for raw materials and the resulting imports from China, which will lead to exorbitant price hikes and limited availability. CPEC will also create Chinese dominance over the international movement of goods between the Atlantic and Pacific oceans, jeopardising the economic balance of South Asia along with Middle Eastern trade hubs like Dubai. Moreover, the paper analyses the impact of CPEC in the context of international unrest and the arms race between Pakistan and India, as well as between India and China, due to border disputes and Chinese surveillance. The paper also examines the global change in the economic dynamics of international trade that CPEC will create in the light of the U.S.-China relationship. The article thus reflects the grave consequences of CPEC for the international economy, security and bilateral relations, which surpass its positive impacts. The author lastly suggests more transparency and proper diplomatic planning in the execution of this mega project, which could become a cause of economic complexity in international trade in the near future.

Keywords: China, CPEC, international trade, Pakistan

Procedia PDF Downloads 173
843 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology for minimal latency in tackling the voltage control problem in a distributed manner. This paper proposes a multi-agent-based distributed voltage control. In this method, a flat agent architecture is used; the agents involved in the control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining voltages within statutory limits, as close as possible to nominal. The total loss cost is the sum of the network loss cost, the DG curtailment costs, and a voltage damage cost (implemented as a penalty function). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and loss cost against a varying voltage limit band; the method provides the optimal limits, closer to the nominal value, with minimum total loss cost. To achieve the objective of voltage control, the whole network is divided into multiple control regions, each downstream from a controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. First, a token is generated by the OLTCA at each time step and passes from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or if the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is used as a last resort. For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using Matlab®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration the loss reduction is less significant. Another observation is that the new, stricter limits calculated by the cost optimization move towards the statutory limits of ±10% of nominal with increasing DG penetration: for 25, 45 and 65% penetration, the calculated limits are ±5, ±6.25 and ±8.75%, respectively. The observed results show that the novel voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly, but with higher losses. In contrast, case 2 reduces network losses gradually over time through the proposed iterative loss cost optimization by the OLTCA.
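
The control scheme lends itself to a compact sketch (our reading of the abstract, with hypothetical node names, band and penalty coefficients): token passing to locate a voltage violation, plus the penalty-function total cost described above:

```python
V_NOM = 1.0                       # nominal voltage, p.u.

def voltage_damage(v, band=0.05, k=1000.0):
    """Quadratic penalty outside the allowed band around nominal voltage."""
    dev = max(0.0, abs(v - V_NOM) - band)
    return k * dev**2

def find_violation(node_voltages, band=0.05):
    """Token passes node to node; returns the first node outside the band."""
    for node, v in node_voltages.items():
        if abs(v - V_NOM) > band:
            return node
    return None

def total_cost(loss_cost, curtailment_cost, node_voltages, band=0.05):
    """Total loss cost = network losses + DG curtailment + voltage damage."""
    damage = sum(voltage_damage(v, band) for v in node_voltages.values())
    return loss_cost + curtailment_cost + damage

voltages = {"n1": 1.01, "n2": 1.04, "n3": 1.08}   # hypothetical p.u. values
print(find_violation(voltages))                    # -> 'n3'
print(total_cost(12.0, 3.5, voltages))             # cost for a 5% band
```

In the full scheme, this total cost would be re-evaluated for progressively stricter bands to find the limit with minimum total loss cost.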

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 310
842 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China

Authors: Jiujie Cai, Fengxia Li, Haibo Wang

Abstract:

After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology is widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field as the research target. The characteristic parameters of vertical rock samples with abundant bedding were determined through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between pay zone and bedding layer, and fracturing parameters (such as injection rate, fracturing fluid viscosity, and number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surfaces between layers, the vertical stress difference, and the fracturing parameters (such as injection rate, fluid volume and viscosity). The results indicate that, for a vertical stress difference of 3 MPa and taking the effect of the weak bonding surfaces between layers into account, the fracture height can break through into the upper interlayer when the overlying bedding layer is 6-9 m thick. The fracture stays within the pay zone when the overlying interlayer is thicker than 13 m. The difference in fluid volume distribution between clusters can exceed 20% when the stress difference between clusters in the stage exceeds 2 MPa, and fracture clusters in high-stress zones cannot initiate when the stress difference in the stage exceeds 5 MPa. Simulated fracture heights are much greater if the effect of the weak bonding surfaces between layers is not included. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage can promote fracture height propagation through the layers, while optimizing the perforation positions and reducing the number of perforations can promote uniform fracture growth. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results are in good agreement with micro-seismic monitoring results of hydraulic fracturing in Well 1HF.
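
As a first-order illustration of the stress-barrier reasoning above (ours, not the paper's boundary element model), a crude containment screen compares the net fracture pressure against the barrier's stress contrast plus any extra resistance from the bonding surface; all values are hypothetical:

```python
def breaks_through(p_net_mpa, stress_contrast_mpa, bond_strength_mpa=0.0):
    """Crude screen: height grows into the barrier when net pressure
    exceeds the stress contrast plus the extra resistance attributed
    to the bonding surface between layers."""
    return p_net_mpa > stress_contrast_mpa + bond_strength_mpa

# Barrier with a 3 MPa stress contrast and a weak bonding surface.
for p_net in (2.0, 3.5, 5.0):
    grows = breaks_through(p_net, stress_contrast_mpa=3.0,
                           bond_strength_mpa=0.5)
    print(f"p_net = {p_net} MPa -> breaks through: {grows}")
```

A full height calculation, as in the paper, couples this balance to fracture mechanics and fluid flow; this screen only conveys why a weak bonding surface lowers the simulated fracture height.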

Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment

Procedia PDF Downloads 125
841 Atypical Intoxication Due to Fluoxetine Abuse with Symptoms of Amnesia

Authors: Ayse Gul Bilen

Abstract:

Selective serotonin reuptake inhibitors (SSRIs) are commonly prescribed antidepressants used clinically for the treatment of anxiety disorders, obsessive-compulsive disorder (OCD), panic disorders and eating disorders. The first SSRI, fluoxetine (sold under the brand names Prozac and Sarafem, among others), had an adverse effect profile better than any other antidepressant available when it was introduced, because of its selectivity for serotonin reuptake. SSRIs have been considered almost free of side effects and have become widely prescribed; however, questions about their safety and tolerability have emerged with continued use. Most SSRI side effects are dose-related and can be attributed to serotonergic effects, such as nausea. Continuous use might trigger adverse effects such as hyponatremia, tremor, nausea, weight gain, sleep disturbance and sexual dysfunction. Moderate toxicity can be safely observed in the hospital for 24 hours, and mild cases can be safely discharged from the emergency department after 6 to 8 hours of observation if asymptomatic and, in cases of intentional overdose, once cleared by Psychiatry. Although fluoxetine is relatively safe in overdose, it might still be cardiotoxic and inhibit platelet secretion, aggregation, and plug formation. Clinical cases of seizures, cardiac conduction abnormalities, and even fatalities associated with fluoxetine ingestion have been reported. While the medical literature strongly suggests that most fluoxetine overdoses are benign, emergency physicians need to remain cognizant that intentional, high-dose fluoxetine ingestion may induce seizures and can even be fatal due to cardiac arrhythmia. Our case is a 35-year-old female patient who was brought to the ER with confusion, amnesia and disorientation to time and place after being found by police wandering the streets in a confused state; the police notified 112. On laboratory examination, no pathological findings were noted except sinus tachycardia on the EKG and elevated levels of aspartate transaminase (AST) and alanine transaminase (ALT). Diffusion MRI and computed tomography (CT) of the brain were normal. On physical and sexual examination, no signs of abuse or trauma were found. Test results for narcotics, stimulants and alcohol were negative as well. The dysrhythmia required admission to the intensive care unit (ICU), and the patient regained consciousness after 24 hours. It was later discovered from her history that she had been taking fluoxetine for post-traumatic stress disorder (PTSD) for 6 months and that she had attempted suicide by taking 3 boxes of fluoxetine after the loss of a parent. She was then transferred to the psychiatric clinic. Our study aims to highlight the need to consider drug toxicity, and in particular the abuse of selective serotonin reuptake inhibitors (SSRIs), which are widely prescribed due to presumed safety and tolerability, in the diagnosis of patients presenting to the emergency room (ER).

Keywords: abuse, amnesia, fluoxetine, intoxication, SSRI

Procedia PDF Downloads 195
840 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature death worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful for simulating the physical and chemical processes that govern the dispersion and reactions of chemical species in the atmosphere. However, modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, an updated LC is an important parameter to consider in atmospheric models, since it accounts for changes to the Earth's surface due to natural and anthropic actions and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: (i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and (ii) the CLC (Corine Land Cover) data together with specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of LC on air quality over Europe and, as a case study, Portugal, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km and 1 km horizontal resolution). In the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared to the CLC data (5000 m x 5000 m). Regarding air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, particularly during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, better model performance was achieved at rural stations: moderate correlation (0.4 – 0.7), BIAS (10 – 21 µg.m-3) and RMSE (20 – 30 µg.m-3), where the higher average ozone concentrations were also estimated. Comparing both simulations, small differences, grounded in the Leaf Area Index and air temperature values, were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization, combined with other detailed model inputs such as the emission inventory, to improve air quality assessment.
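
The validation statistics quoted here (correlation, BIAS, RMSE) can be reproduced with a short sketch (hypothetical ozone series; the actual validation used measurements from the Portuguese monitoring network):

```python
import numpy as np

# Hypothetical hourly ozone concentrations, ug/m3.
obs = np.array([62.0, 75.0, 88.0, 95.0, 70.0, 58.0])   # measured O3
mod = np.array([70.0, 85.0, 99.0, 110.0, 84.0, 66.0])  # WRF-Chem O3

bias = np.mean(mod - obs)                        # mean model-observation offset
rmse = np.sqrt(np.mean((mod - obs) ** 2))        # root mean square error
r = np.corrcoef(mod, obs)[0, 1]                  # Pearson correlation
print(f"BIAS={bias:.1f} ug/m3  RMSE={rmse:.1f} ug/m3  r={r:.2f}")
```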

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment

Procedia PDF Downloads 152