844 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method
Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili
Abstract:
The demands of communications and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the combination of free-space power combining with beam scanning without a phase shifter. Modeling an active integrated antenna means coupling the electromagnetic model with the transport model, a coupling that becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The current review focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MoM-GEC). The Method of Moments is among the most common and powerful numerical techniques used to resolve electromagnetic problems; in this class of techniques, MoM is the dominant approach for solving the Maxwell and transport integral equations of an active integrated antenna. In this situation, the equivalent circuit is introduced to develop an integral-method formulation based on transposing the field problem into a generalised equivalent circuit that is simpler to treat. The Method of Generalised Equivalent Circuits (MGEC) was suggested in order to represent integral equations by circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structure, describing the discontinuity and its environment. The aim of our developed method is to investigate antenna parameters such as the input impedance, the current density distribution, and the electric field distribution.
In this work, we propose a global EM model of the GaAs MESFET transistor using an integral method. We begin by describing the modeled structure, which allows an equivalent EM scheme translating the electromagnetic equations under consideration to be defined. Secondly, projecting these equations onto common-type test functions leads to a linear matrix equation whose unknowns are the amplitudes of the current density. Solving this equation provides the input impedance, the current density distribution, and the electric field distribution. From the electromagnetic calculations, we were able to present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study mapping the variation of the current evaluated by the MoM-GEC. The essential improvement of our method is reduced computing time and memory requirements, providing a sufficient global model of the MESFET transistor.
Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method
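The projection step described above, testing the operator equation against a set of test functions to obtain a linear matrix equation for the current amplitudes, can be sketched generically. The following is a minimal Galerkin illustration with a hypothetical 1-D sine basis and a trivial stand-in operator, not the paper's waveguide formulation:

```python
import numpy as np

def mom_galerkin(apply_operator, test_funcs, excitation, x):
    """Galerkin Method of Moments sketch: project L(f) = e onto test
    functions, then solve Z a = v for the amplitude vector a."""
    dx = x[1] - x[0]                      # uniform grid spacing
    n = len(test_funcs)
    z = np.empty((n, n))
    v = np.empty(n)
    for i, ti in enumerate(test_funcs):
        v[i] = np.sum(ti(x) * excitation(x)) * dx           # <t_i, e>
        for j, tj in enumerate(test_funcs):
            # entry <t_i, L t_j>; apply_operator stands in for the EM operator
            z[i, j] = np.sum(ti(x) * apply_operator(tj)(x)) * dx
    return np.linalg.solve(z, v)

# Toy check: identity operator and excitation equal to the first basis
# mode, so the recovered amplitude vector should be close to (1, 0, 0).
x = np.linspace(0.0, np.pi, 4001)
basis = [lambda x, k=k: np.sin(k * x) for k in (1, 2, 3)]
amps = mom_galerkin(lambda f: f, basis, basis[0], x)
```

In the actual formulation the operator would encode the waveguide Green's function and the amplitudes would yield the current density and input impedance.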
Procedia PDF Downloads 197
843 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress put on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected close to the loads. Yet low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the regulatory competitive advantage of fossil-fuel-based technologies. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as PS is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter and arbitraged profiles at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must therefore include a smart charge-recovery algorithm that ensures enough energy is present in the battery in case it is needed, without generating new peaks by charging the unit. Three categories of PS algorithms are introduced in detail.
First, algorithms using a constant threshold or power rate for charge recovery; then algorithms using the State of Charge (SOC) as a decision variable; and finally, algorithms using a load forecast (the impact of whose accuracy is discussed) to generate PS. A set of performance metrics was defined in order to quantitatively evaluate their operation with regard to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that a constant charging threshold or power rate is far from optimal: a single fixed value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
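As an illustration of the threshold and SOC-based logic described above, here is a minimal single-step control sketch (parameter names and values are hypothetical, not the paper's validated model). The battery discharges to clamp grid import at a threshold, and recovers charge within the remaining headroom at a rate tapered down as the SOC rises, so that recharging itself does not create new peaks:

```python
def peak_shaving_step(load_kw, soc_kwh, capacity_kwh,
                      threshold_kw, max_rate_kw, dt_h=1 / 60.0):
    """One 1-minute control step: returns (grid_import_kw, new_soc_kwh)."""
    if load_kw > threshold_kw:
        # Shave: discharge, limited by the power rating and stored energy.
        discharge = min(load_kw - threshold_kw, max_rate_kw, soc_kwh / dt_h)
        return load_kw - discharge, soc_kwh - discharge * dt_h
    # Recover: charge within the threshold headroom, tapered by SOC,
    # and never beyond the remaining capacity.
    taper = 1.0 - soc_kwh / capacity_kwh
    charge = min(threshold_kw - load_kw, max_rate_kw * taper,
                 (capacity_kwh - soc_kwh) / dt_h)
    return load_kw + charge, soc_kwh + charge * dt_h

# A 5 kW spike against a 3 kW threshold is clamped to 3 kW:
grid, soc = peak_shaving_step(5.0, 2.0, 4.0, threshold_kw=3.0, max_rate_kw=2.0)
```

A full simulation would simply fold this step over a year-long 1-minute load profile and score the resulting grid-import series with the peak-reduction and self-consumption metrics.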
Procedia PDF Downloads 116
842 Bioleaching of Precious Metals from an Oil-fired Ash Using Organic Acids Produced by Aspergillus niger in Shake Flasks and a Bioreactor
Authors: Payam Rasoulnia, Seyyed Mohammad Mousavi
Abstract:
Heavy-fuel-oil-fired power plants produce huge amounts of ash as solid waste, which seriously needs to be managed and processed. Recycling the precious metals V and Ni from these oil-fired ashes, which are considered secondary sources for metals recovery, is not only of great economic importance for industry but also noteworthy from the environmental point of view. Vanadium is an important metal that is mainly used in the steel industry because of its physical properties of hardness, tensile strength, and fatigue resistance. It is also utilized in oxidation catalysts, titanium-aluminium alloys, and vanadium redox batteries. In the present study, bioleaching of vanadium and nickel from an oil-fired ash sample was conducted using the fungus Aspergillus niger. The experiments were carried out using the spent-medium bioleaching method in both Erlenmeyer flasks and a bubble column bioreactor, in order to compare the two. In spent-medium bioleaching, the solid waste is not in direct contact with the fungus; consequently, fungal growth is not retarded and maximum organic acid production is achieved. In this method, the metals are leached by the biogenic organic acids present in the medium. In the shake flask experiments the fungus was cultured for 15 days, when maximum production of organic acids was observed, while in the bubble column bioreactor experiments a 7-day fermentation period was applied. The amounts of organic acids produced were measured using high-performance liquid chromatography (HPLC), and the results showed that, depending on the fermentation period and the scale of the experiments, the fungus produced different major lixiviants. In the flask tests, citric acid was the main organic acid produced by the fungus, and the other organic acids, including gluconic, oxalic, and malic acids, were excreted in much lower concentrations, while in the bioreactor oxalic acid was the main lixiviant and was produced in considerable quantity.
In the Erlenmeyer flasks, during 15 days of fermentation of Aspergillus niger, 8080 ppm citric acid and 1170 ppm oxalic acid were produced, while in the bubble column bioreactor, over 7 days of fungal growth, 17185 ppm oxalic acid and 1040 ppm citric acid were secreted. The leaching tests using the spent media obtained from both fermentation experiments were performed under the same conditions: a leaching duration of 7 days, a leaching temperature of 60 °C, and a pulp density of up to 3% (w/v). The results revealed that in the Erlenmeyer flask experiments 97% of the V and 50% of the Ni were extracted, while with the spent medium produced in the bubble column bioreactor, V and Ni recoveries of 100% and 33%, respectively, were achieved. These recovery yields indicate that at both scales almost all the vanadium can be recovered, while nickel recovery was lower. The nickel recovery yield obtained with the bioreactor spent medium was lower than that obtained in the flask experiments, which could be due to precipitation of part of the Ni in the presence of the high levels of oxalic acid in that spent medium.
Keywords: Aspergillus niger, bubble column bioreactor, oil-fired ash, spent-medium bioleaching
Procedia PDF Downloads 227
841 Relationship between Glycated Hemoglobin in Adolescents with Type 1 Diabetes Mellitus and Parental Anxiety and Depression
Authors: Evija Silina, Maris Taube, Maksims Zolovs
Abstract:
Background: Type 1 diabetes mellitus (T1D) is the most common chronic endocrine pathology in children. The management of type 1 diabetes requires a strict diet, physical activity, lifelong insulin therapy, and proper self-monitoring of blood glucose; it is usually complicated and may therefore result in a variety of psychosocial problems for children, adolescents, and their families. Metabolic control of the disease is determined by glycated haemoglobin (HbA1c), the main criterion for diabetes compensation. A correlation between anxiety and depression levels and glycaemic control was observed in many previous studies; it is assumed that anxiety and depression symptoms negatively affect glycaemic control. Parental psychological distress was associated with higher child self-reported stress and depressive symptoms and had negative effects on diabetes management. Objective: The main objective of this paper is to evaluate the relationship between parental mental health conditions (depression and anxiety) and the metabolic control of their adolescents with T1D. Methods: This cross-sectional study recruited adolescents with T1D (N=251) and their parents (N=251). The respondents completed questionnaires: the 7-item Generalized Anxiety Disorder (GAD-7) scale measured anxiety level, and the Patient Health Questionnaire-9 (PHQ-9) measured depressive symptoms. Glycaemic control of patients was assessed using the most recent glycated haemoglobin (HbA1c) values. GLM mediation analysis was performed to determine the potential mediating effect of the parents' mental health conditions (depression and anxiety) on the relationship between the mental health conditions (depression and anxiety) of a child and the level of glycated haemoglobin (HbA1c). To test the significance of the mediated effect (ME) for non-normally distributed data, bootstrapping procedures (10,000 bootstrapped samples) were used.
Results: 502 respondents were eligible for screening to detect anxiety and depression symptoms. Mediation analysis was performed to assess the mediating role of parent GAD-7 in the linkage between the dependent variable (HbA1c) and the independent variables (child GAD-7 and child PHQ-9). The results revealed that the total effect of child GAD-7 (B = 0.479, z = 4.30, p < 0.001) on HbA1c was significant, but the total effect of child PHQ-9 (B = 0.166, z = 1.49, p = 0.135) was not. With the inclusion of the mediating variable (parent GAD-7), the effect of child GAD-7 on HbA1c became insignificant (B = 0.113, z = 0.98, p = 0.326), as did the effect of child PHQ-9 on HbA1c (B = 0.068, z = 0.74, p = 0.458). The indirect effect of child GAD-7 on HbA1c through parent GAD-7 was significant (B = 0.366, z = 4.31, p < 0.001), as was the indirect effect of child PHQ-9 on HbA1c through parent GAD-7 (B = 0.098, z = 2.56, p = 0.010). This indicates that the relationship between the dependent variable (HbA1c) and the independent variables (child GAD-7 and child PHQ-9) is fully mediated by parent GAD-7. Conclusion: The main result suggests that glycated haemoglobin in adolescents with type 1 diabetes is related to adolescents' mental health via parents' anxiety. This means that parents' anxiety plays a more significant role in the level of glycated haemoglobin in adolescents than depression and anxiety in the adolescents themselves.
Keywords: type 1 diabetes, adolescents, parental diabetes-specific mental health conditions, glycated haemoglobin, anxiety, depression
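The bootstrapped indirect-effect logic used in the analysis above can be sketched in a few lines. This is a generic percentile-bootstrap estimate of the indirect effect a·b on synthetic data (all variable names and effect sizes are illustrative, not the study's data), not the authors' GLM implementation:

```python
import numpy as np

def ols_slope(x, y, covar=None):
    """Slope of y on x (optionally controlling for covar) via least squares."""
    cols = [np.ones_like(x), x] if covar is None else [np.ones_like(x), x, covar]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta[1]

def bootstrap_mediation(x, m, y, n_boot=2000, seed=0):
    """Indirect effect a*b of x on y through m, with a percentile bootstrap CI."""
    rng = np.random.default_rng(seed)
    n = len(x)
    a = ols_slope(x, m)            # predictor -> mediator
    b = ols_slope(m, y, covar=x)   # mediator -> outcome, controlling predictor
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)   # resample cases with replacement
        boots.append(ols_slope(x[idx], m[idx]) *
                     ols_slope(m[idx], y[idx], covar=x[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return a * b, (lo, hi)

# Synthetic illustration: child anxiety -> parent anxiety -> HbA1c
rng = np.random.default_rng(1)
child_gad = rng.normal(size=300)
parent_gad = 0.6 * child_gad + rng.normal(size=300)          # mediator
hba1c = 0.5 * parent_gad + rng.normal(scale=0.5, size=300)   # outcome
ie, ci = bootstrap_mediation(child_gad, parent_gad, hba1c)
```

If the bootstrap confidence interval for a·b excludes zero, the indirect path is deemed significant, which is the criterion the abstract reports for the parent-anxiety pathway.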
Procedia PDF Downloads 77
840 Rehabilitation of Orthotropic Steel Deck Bridges Using a Modified Ortho-Composite Deck System
Authors: Mozhdeh Shirinzadeh, Richard Stroetmann
Abstract:
An orthotropic steel deck bridge consists of a deck plate, longitudinal stiffeners under the deck plate, cross beams, and the main longitudinal girders. Due to their several advantages, Orthotropic Steel Deck (OSD) systems have been utilized in many bridges worldwide. The significant feature of this structural system is its high load-bearing capacity combined with a relatively low dead weight. In addition, cost efficiency and the ability of rapid field erection have made the orthotropic steel deck a popular bridge type worldwide. However, OSD bridges are highly susceptible to fatigue damage; the large number of welded joints can be regarded as the main weakness of this system. This problem is particularly evident in bridges built before 1994, when fatigue design criteria had not yet been introduced in the bridge design codes. Recently, an Orthotropic-Composite Slab (OCS) for road bridges has been experimentally and numerically evaluated and developed at Technische Universität Dresden as part of AIF-FOSTA research project P1265. The results of the project have provided a solid foundation for the design and analysis of orthotropic-composite decks with dowel strips as a durable alternative to conventional steel or reinforced concrete decks. In continuation, using the achievements of that project, the application of a modified ortho-composite deck to an existing typical OSD bridge is investigated. Composite action is obtained by using rows of dowel strips in a clothoid (CL) shape. With regard to the Eurocode criteria for the different fatigue detail categories of an OSD bridge, the effect of the proposed modification approach is assessed. Moreover, a numerical parametric study is carried out using finite element software to determine the impact of different variables, such as the size and arrangement of the dowel strips, the application of transverse or longitudinal rows of dowel strips, and local wheel loads.
For verification of the simulation technique, experimental results from a segment of an OCS deck tested in project P1265 are used. Fatigue assessment is performed based on the latest draft of Eurocode 1993-2 (2024) for the most probable detail categories (hot spots) reported in previous statistical studies. An analytical comparison is then provided between the typical orthotropic steel deck and the modified ortho-composite deck bridge in terms of fatigue issues and durability. The load-bearing capacity of the bridge, the critical deflections, and the composite behavior are also evaluated and compared. The results give a comprehensive overview of the efficiency of the rehabilitation method, considering the required design service life of the bridge. Moreover, the proposed approach is assessed with regard to the construction method, details, and practical aspects, as well as from the economic point of view.
Keywords: composite action, fatigue, finite element method, steel deck, bridge
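Fatigue assessment of detail categories of the kind mentioned above is commonly based on S-N curves and a Palmgren-Miner damage sum. The sketch below is that generic textbook check, with a single-slope S-N curve anchored at the detail category (the stress range endured for 2 million cycles); it is not the specific Eurocode 1993-2 procedure, which uses two-slope curves with constant-amplitude and cut-off limits:

```python
def miner_damage(stress_ranges_mpa, cycles, detail_category_mpa, m=3.0):
    """Palmgren-Miner damage sum D = sum(n_i / N_i) for a single-slope
    S-N curve N_i = 2e6 * (category / range)^m.  D >= 1 means the
    fatigue life of the detail is exhausted."""
    damage = 0.0
    for s, n in zip(stress_ranges_mpa, cycles):
        n_endurable = 2e6 * (detail_category_mpa / s) ** m
        damage += n / n_endurable
    return damage

# Example: a category-71 detail seeing 2e6 cycles at 71 MPa uses up its
# whole life; at half the stress range it accumulates only 1/8 of it.
d_full = miner_damage([71.0], [2e6], 71.0)
d_half = miner_damage([35.5], [2e6], 71.0)
```

The benefit of the ortho-composite retrofit then shows up as reduced stress ranges at the critical hot spots, which, through the exponent m, translates into a strongly reduced damage sum for the same traffic spectrum.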
Procedia PDF Downloads 81
839 LaeA/1-Velvet Interplay in Aspergillus and Trichoderma: Regulation of Secondary Metabolites and Cellulases
Authors: Razieh Karimi Aghcheh, Christian Kubicek, Joseph Strauss, Gerhard Braus
Abstract:
Filamentous fungi are of considerable economic and social significance for human health, nutrition, and white biotechnology. These organisms are dominant producers of a range of primary metabolites such as citric acid, microbial lipids (biodiesel), and highly unsaturated fatty acids (HUFAs). In particular, they also produce important but structurally complex secondary metabolites with enormous therapeutic applications in the pharmaceutical industry, for example cephalosporin, penicillin, taxol, zeranol, and ergot alkaloids. Fungal secondary metabolites that are significantly relevant to human health include not only antibiotics but also, e.g., lovastatin, a well-known antihypercholesterolemic agent produced by Aspergillus terreus, and aflatoxin, a carcinogen produced by A. flavus. In addition to their roles in human health and agriculture, some fungi are industrially and commercially important: species of the ascomycete genus Hypocrea (teleomorph of Trichoderma) have been demonstrated to be efficient producers of highly active cellulolytic enzymes. This trait makes them effective in disrupting and depolymerizing lignocellulosic materials and thus applicable tools in biotechnological areas as diverse as clothes-washing detergents, animal feed, and pulp and fuel production. Fungal LaeA/LAE1 (Loss of aflR Expression A) homologs and their gene products act at the interface between secondary metabolism, cellulase production, and development. Lack of the corresponding genes results in significant physiological changes, including loss of secondary metabolite and lignocellulose-degrading enzyme production. At the molecular level, the encoded proteins are presumably methyltransferases or demethylases which act directly or indirectly at heterochromatin and interact with velvet-domain proteins. Velvet proteins bind to DNA and affect the expression of secondary metabolite (SM) genes and cellulases.
The dynamic interplay between LaeA/LAE1, velvet proteins, and additional interaction partners is the key to understanding the coordination of the metabolic and morphological functions of fungi and is required for biotechnological control of the formation of desired bioactive products. Aspergilli and Trichoderma represent different biotechnologically significant species with significant differences in the LaeA/LAE1-velvet protein machinery and its target proteins. We therefore performed a comparative study of the interaction partners of this machinery and of the dynamics of the various protein-protein interactions using robust proteomic and mass spectrometry techniques. This enhances our knowledge of the fungal coordination of secondary metabolism, cellulase production, and development, and thereby will certainly improve recombinant fungal strain construction for the production of industrial secondary metabolites or lignocellulose-hydrolytic enzymes.
Keywords: cellulases, LaeA/1, proteomics, secondary metabolites
Procedia PDF Downloads 270
838 Viscoelastic Behavior of Human Bone Tissue under Nanoindentation Tests
Authors: Anna Makuch, Grzegorz Kokot, Konstanty Skalski, Jakub Banczorowski
Abstract:
Cancellous bone is a porous composite with a hierarchical structure and anisotropic properties. Biological tissue is considered a viscoelastic material, but many studies based on nanoindentation have focused on its elasticity and microhardness. However, the response of many organic materials depends not only on the load magnitude but also on its duration and time course. The Depth Sensing Indentation (DSI) technique has been used to examine creep in polymers, metals, and composites. In indentation tests on biological samples, mechanical properties are most frequently determined for animal tissues (of an ox, a monkey, a pig, a rat, a mouse, a bovine); reports of studies of bone viscoelastic properties at the microstructural level are rare. Various rheological models have been used to describe the viscoelastic behaviour of bone identified in the indentation process (e.g. the Burgers model, a linear model, the two-dashpot Kelvin model, and the Maxwell-Voigt model). The goal of the study was to determine the influence of the creep effect on the mechanical properties of human cancellous bone in indentation tests. The aim of this research was also the assessment of the material properties of bone structures, having in mind the energy aspects of the curve (indenter load versus depth) obtained in the loading/unloading cycle. It was considered how different holding times affected the results within trabecular bone. As a result, indentation creep (CIT), hardness (HM, HIT, HV), and elasticity are obtained. Human trabecular bone samples (n=21; mean age 63±15 yrs) from femoral heads replaced during hip alloplasty were removed and drained of alcohol 1 h before the experiment. The indentation process was conducted using a CSM Microhardness Tester equipped with a Vickers indenter. Each sample was indented 35 times (7 times for each of 5 different hold times: t1=0.1 s, t2=1 s, t3=10 s, t4=100 s and t5=1000 s). The indenter was advanced at a rate of 10 mN/s to 500 mN.
The Oliver-Pharr method was used in the calculation process. The increase of hold time is associated with a decrease of the hardness parameters (HIT(t1)=418±34 MPa, HIT(t2)=390±50 MPa, HIT(t3)=313±54 MPa, HIT(t4)=305±54 MPa, HIT(t5)=276±90 MPa) and of the elasticity (EIT(t1)=7.7±1.2 GPa, EIT(t2)=8.0±1.5 GPa, EIT(t3)=7.0±0.9 GPa, EIT(t4)=7.2±0.9 GPa, EIT(t5)=6.2±1.8 GPa), as well as with an increase of the elastic (Welastic(t1)=4.11×10⁻⁷±4.2×10⁻⁸ Nm, Welastic(t2)=4.12×10⁻⁷±6.4×10⁻⁸ Nm, Welastic(t3)=4.71×10⁻⁷±6.0×10⁻⁹ Nm, Welastic(t4)=4.33×10⁻⁷±5.5×10⁻⁹ Nm, Welastic(t5)=5.11×10⁻⁷±7.4×10⁻⁸ Nm) and inelastic (Winelastic(t1)=1.05×10⁻⁶±1.2×10⁻⁷ Nm, Winelastic(t2)=1.07×10⁻⁶±7.6×10⁻⁸ Nm, Winelastic(t3)=1.26×10⁻⁶±1.9×10⁻⁷ Nm, Winelastic(t4)=1.56×10⁻⁶±1.9×10⁻⁷ Nm, Winelastic(t5)=1.67×10⁻⁶±2.6×10⁻⁷ Nm) work of the material. The indentation creep increased logarithmically (R²=0.901) with increasing hold time: CIT(t1)=0.08±0.01%, CIT(t2)=0.7±0.1%, CIT(t3)=3.7±0.3%, CIT(t4)=12.2±1.5%, CIT(t5)=13.5±3.8%. A pronounced impact of the creep effect on the mechanical properties of human cancellous bone was observed in the experimental studies. While the elastic-inelastic description, and thus the Oliver-Pharr method of data analysis, may apply in a few limited cases, most biological tissues do not exhibit elastic-inelastic indentation responses. The viscoelastic properties of tissues may play a significant role in remodelling. This aspect is still under analysis and numerical simulation. Acknowledgements: The presented results are part of a research project funded by the National Science Centre (NCN), Poland, no. 2014/15/B/ST7/03244.
Keywords: bone, creep, indentation, mechanical properties
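The Oliver-Pharr extraction of hardness and modulus from the unloading branch can be sketched as follows. This is the standard textbook form, assuming an ideal Vickers-type area function A = 24.5·h_c², an epsilon factor of 0.75, and nominal diamond-indenter constants; the input numbers in the example are made up for illustration and are not the measured data above:

```python
import math

def oliver_pharr(p_max, stiffness, h_max, nu_sample=0.3,
                 e_indenter=1141e9, nu_indenter=0.07, eps=0.75):
    """Indentation hardness H_IT and modulus E_IT (SI units, Pa) from the
    peak load P (N), unloading stiffness S = dP/dh (N/m), and maximum
    depth h_max (m), via the Oliver-Pharr contact-depth construction."""
    h_c = h_max - eps * p_max / stiffness         # contact depth
    area = 24.5 * h_c ** 2                         # ideal Vickers area function
    h_it = p_max / area                            # indentation hardness
    e_r = stiffness * math.sqrt(math.pi) / (2.0 * math.sqrt(area))
    # remove the indenter compliance from the reduced modulus
    e_it = (1 - nu_sample ** 2) / (1 / e_r - (1 - nu_indenter ** 2) / e_indenter)
    return h_it, e_it

# Illustrative numbers only (0.5 N peak load, plausible bone-like scales):
h_it, e_it = oliver_pharr(p_max=0.5, stiffness=2.0e5, h_max=8.0e-6)
```

As the abstract notes, this construction assumes an elastic-inelastic response; for long hold times the creep of bone violates that assumption, which is exactly why the hold-time dependence of H_IT and E_IT reported above matters.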
Procedia PDF Downloads 171
837 Attitudes of Gratitude: An Analysis of 30 Cancer Patient Narratives Published by Leading U.S. Cancer Care Centers
Authors: Maria L. McLeod
Abstract:
This study examines the ways in which cancer patient narratives are portrayed and framed on the websites of three leading U.S. cancer care centers: The University of Texas MD Anderson Cancer Center in Houston, Memorial Sloan Kettering Cancer Center in New York, and Seattle Cancer Care Alliance. Thirty patient stories, ten from each cancer center website blog, were analyzed using qualitative and quantitative textual analysis of unstructured data, documenting repeated use of specific metaphors and tropes while charting common themes and other elements of story structure and content. Patient narratives were coded using grounded theory as the basis for conducting emergent qualitative research. As part of a systematic, inductive approach to collecting and analyzing data, recurrent and unique themes were examined and compared in terms of positive and negative framing, patient agency, and institutional praise. All three of these cancer care centers are teaching hospitals with university affiliations that emphasize an evidence-based scientific approach to treatment utilizing the latest research and cutting-edge techniques and technology. Thus, the use of anecdotal evidence presented in patient narratives could be perceived as in conflict with this evidence-based model, as the patient stories are not an accurate representation of scientific outcomes related to developing cancer, cancer recurrence, or cancer outcomes. The representative patient narratives tend to exclude or downplay adverse responses to treatment, survival rates, integrative and/or complementary cancer treatments, cancer prevention and causes, and barriers to treatment, such as the limitations of insurance plans, the costs of treatment, and/or other issues related to access, potentially contributing to false narratives and inaccurate notions of cancer prevention, cancer care treatment, and the potential for a cure.
Both quantitative and qualitative findings demonstrate that the cancer patient stories featured on the blog sites of the nation's top cancer care centers deemphasize patient agency and instead emphasize deference and gratitude toward the institutions where the featured patients received treatment. Along these lines, language choices reflect positive framing of the cancer experience. Accompanying portrait photos of healthy-appearing subjects, as well as positively framed headlines, subheads, and pull quotes, function similarly, reflecting hopeful, transformative experiences and outcomes over hardship and suffering. Although the patient narratives include real, factual scientific details and descriptions of actual events, the stories lack references to the more negative realities of cancer diagnosis and treatment. Instead, they emphasize the triumph of survival, by which the cancer care center, in the savior/hero role, enables the patient's success, represented as a cathartic medical journey.
Keywords: cancer framing, cancer stories, medical gaze, patient narratives
Procedia PDF Downloads 160
836 A Multilingual App for Studying Children's Developing Values: Developing a New Arabic Translation of the Picture-based Values Survey and Comparison of Palestinian and Jewish Children in Israel
Authors: Aysheh Maslamani, Ella Daniel, Anna Dӧring, Iyas Nasser, Ariel Knafo-Noam
Abstract:
Over 250 million people globally speak Arabic, one of the most widespread languages in the world, as their first language. Yet only a minuscule fraction of developmental research studies Middle Eastern children. As values are a core component of culture, understanding how values develop is key to understanding development across cultures. Indeed, with the advent of research on value development, particularly since the introduction of the Picture-Based Value Survey for Children (PBVS-C), interest in cross-cultural differences in children's values is increasing. As no such measure existed for Arab children, an Arabic PBVS-C was developed. The online application version of the PBVS-C can be administered on a computer, tablet, or even a smartphone to measure the 10 values whose presence has been repeatedly demonstrated across the world. The application has been developed simultaneously in Hebrew and Arabic and can easily be adapted to include additional languages. In this research, the development of the multilingual PBVS-C application, adapted for five-year-olds, is described. The translation process is discussed (including important decisions such as which dialect of Arabic, a diglossic language, is most suitable), as are adaptations to subgroups (e.g., Muslim, Druze, and Christian Arab children) and the use of recorded instructions, value-item captions, and touchscreens to enhance applicability with young children. Four hundred Palestinian and Israeli 5-12-year-old children reported their values using the app (50% in Arabic, 50% in Hebrew). Confirmatory multidimensional scaling (MDS) analyses revealed structural patterns that closely correspond to Schwartz's theoretical structure in both languages (e.g., universalism values correlated positively with benevolence and negatively with power, whereas tradition correlated negatively with hedonism and positively with conformity).
Replicating past findings, power values showed lower importance than benevolence values in both cultural groups, and there were gender differences: girls were higher in self-transcendence values and lower in self-enhancement values than boys. Cultural differences in value importance were explored and revealed that Palestinian children are significantly higher in tradition and achievement values than Israeli children, whereas Israeli children are significantly higher in benevolence, hedonism, self-direction, and stimulation values. Age differences in value coherence across the two groups were also studied. Exploring the cultural differences opens a window to understanding the basic motivations driving populations that have hardly been studied before. This study will contribute to developmental value research since it considers the role of critical variables such as culture and religion and tests value coherence across middle childhood. The findings will be discussed, along with the potential and limitations of the computerized PBVS-C for future values research.
Keywords: Arab-children, culture, multilingual-application, value-development
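The MDS step mentioned above, locating values in a plane so that positively correlated values sit close together and opposing values sit far apart (as in Schwartz's circumplex), can be sketched with classical (Torgerson) MDS. The three-value correlation matrix below is invented purely for illustration and is not the study's data:

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) MDS: embed items in n_dims dimensions
    from a symmetric dissimilarity matrix via double centering."""
    d2 = dissim ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ d2 @ j                      # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))

# Hypothetical correlations between three value scores, converted to
# dissimilarities as 1 - r: universalism and benevolence correlate
# positively, both correlate negatively with power (circumplex logic).
r = np.array([[1.0, 0.6, -0.5],
              [0.6, 1.0, -0.4],
              [-0.5, -0.4, 1.0]])   # universalism, benevolence, power
coords = classical_mds(1.0 - r)

d_ub = np.linalg.norm(coords[0] - coords[1])  # universalism-benevolence
d_up = np.linalg.norm(coords[0] - coords[2])  # universalism-power
```

In the resulting configuration, compatible values land near each other and opposing values land far apart, which is the structural pattern the confirmatory MDS checks against Schwartz's theoretical circle.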
Procedia PDF Downloads 114
835 Introduction of a New and Efficient Nematicide, Abamectin by Gyah Corporation, Iran, for Root-knot Nematodes Management Planning Programs
Authors: Shiva Mardani, Mehdi Nasr-Esfahani, Majid Olia, Hamid Molahosseini, Hamed Hassanzadeh Khankahdani
Abstract:
Plant-parasitic nematodes cause serious diseases in plants and effectively reduce food production in quality and quantity worldwide, with at least 17 nematode species in the three important major genera Meloidogyne, Heterodera, and Pratylenchus. Root-knot nematodes (RKN), Meloidogyne spp., with the dominant species Meloidogyne javanica, are considered important plant pathogens of agricultural products globally. Their host range includes vegetables, bedding plants, grasses, shrubs, numerous weeds, and trees, including forest trees. In this study, chemical management of the RKN M. javanica was carried out to investigate the efficacy of an Iranian Abamectin product [acaricide Abamectin (Vermectin® 2% EC, Gyah Corp., Iran)] versus the standard imported Abamectin available on the Iranian market [acaricide Abamectin (Vermectin® 1.8% EC, Cropstar Chemical Industry Co., Ltd.)], each at a rate of 8 L/ha, on tomato, Solanum lycopersicum L. (No. 29-41, Dutch company Siemens), as a test plant, with two controls (infested with RKN and without any chemical pesticide treatment; and sterile soil without any RKN or chemical pesticide treatment) in a greenhouse in Isfahan, Iran. The trials were repeated three times. The results indicated a highly significant reduction in the RKN population and an increase in biomass parameters, both at the 1% level of significance. Relatively similar results were obtained in all three experiments conducted on tomato root-knot nematodes. The Gyah Abamectin (51.6%) and external Abamectin (40.4%) treatments had, in that order, the greatest effect on reducing the number of larvae in the soil compared to the infected controls. Gyah Abamectin (44.1%) and then the external product (31.9%) had the greatest effect on reducing the number of larvae and eggs in the root, with reductions of 31.4% and 24.1% in the number of galls compared to the infected controls, respectively.
In order of efficacy, the Gyah-Abamectin (47.4%) and external Abamectin (31.1%) treatments had the highest effect on reducing the number of egg masses in the root compared to the infected controls, with no significant difference between Gyah-Abamectin and external Abamectin. The highest reproduction of larvae and eggs in the root was observed in the infected controls (75.5%) and the lowest in the healthy controls (0.0%). The highest reduction in larval and egg reproduction in the roots compared to the infected controls was observed with Gyah-Abamectin and the lowest with the external product. Gyah-Abamectin (37.6%) and external Abamectin (26.9%) had the highest effect on reducing larval and egg reproduction in the root compared to the infected controls, respectively. Regarding growth parameters, the lowest stem length was observed with external Abamectin (51.9 cm), not significantly different from Gyah-Abamectin and the healthy controls. The highest root fresh weight was recorded in the infected controls (19.81 g) and the lowest in the healthy ones (9.81 g); the highest root length in the healthy controls (22.4 cm), and the lowest in the infected controls and external Abamectin (12.6 and 11.9 cm), respectively. In conclusion, the results of these three tests on tomato plants revealed that Gyah-Abamectin 2% is competitive with external Abamectin 1.8% in the chemical management of root-knot nematodes on such crops and is a suitable alternative in this regard. Keywords: Solanum lycopersicum, vermectin, biomass, tomato
Procedia PDF Downloads 93
834 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity
Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido
Abstract:
Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of a medicinal plant is Ruta graveolens L. Ruta g. herb has long been famous for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta g. includes flavonoids (e.g., rutin, quercetin), coumarins (e.g., bergapten, umbelliferone), phenolic acids (e.g., rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the presence of these substances is highly dependent on environmental factors like temperature, humidity, and soil acidity; therefore, standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g., coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start or finish line. This makes Ruta graveolens a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of a new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in reversed-phase systems in conventional horizontal chambers can be disrupted by an excessive flux of the mobile phase onto the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of subsequent fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts its flow. The described effect produces unreliable, unrepeatable results, causing blurring and deformation of the substance zones.
In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer at a controlled velocity (by a moving pipette driven by a 3D positioning machine). The delivery rate of the solvent to the adsorbent layer is equal to or lower than that of conventional development; therefore, chromatograms can be developed at the optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess eluent solution on the surface of the adsorbent layer, so higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also enables convenient continuous gradient elution, practically without the so-called gradient delay. In this study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise-gradient reversed-phase thin-layer chromatography. The fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed. Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens
Procedia PDF Downloads 287
833 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89
Authors: A. Chatel, I. S. Torreguitart, T. Verstraete
Abstract:
The paper deals with a single-point optimization of the LS89 turbine using adjoint optimization, with the design variables defined within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by performing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and the pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered, and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed.
However, this assumption can hinder the creation of better parameterizations that would produce more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in entropy generation are achieved whilst keeping the exit flow angle fixed. The trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization in which the parameterization is created using the optimal knot positions is a more efficient strategy for reaching a better optimum than the multilevel optimization in which the positions of the knots are arbitrarily assumed. Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness
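The shape-preserving property of knot insertion that the approach relies on can be sketched in a short, self-contained way. The following Python sketch (illustrative only; the actual work uses a CAD kernel with algorithmic differentiation, and the degree, knot vector, and control points below are arbitrary examples) implements de Boor evaluation and Boehm's knot-insertion algorithm, showing that inserting a knot adds a control point while leaving the curve itself unchanged:

```python
from bisect import bisect_right

def de_boor(t, degree, knots, ctrl):
    """Evaluate a B-spline curve at parameter t (de Boor's algorithm)."""
    k = bisect_right(knots, t) - 1          # knot span: knots[k] <= t < knots[k+1]
    k = min(k, len(ctrl) - 1)               # clamp at the right end of a clamped curve
    d = [list(ctrl[j]) for j in range(k - degree, k + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            i = j + k - degree
            a = (t - knots[i]) / (knots[i + degree - r + 1] - knots[i])
            d[j] = [(1 - a) * x0 + a * x1 for x0, x1 in zip(d[j - 1], d[j])]
    return d[degree]

def insert_knot(u, degree, knots, ctrl):
    """Boehm's algorithm: insert knot u without changing the curve shape."""
    k = bisect_right(knots, u) - 1          # span containing u
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= k - degree:                 # unaffected leading control points
            new_ctrl.append(ctrl[i])
        elif i > k:                         # unaffected trailing points, index shifted
            new_ctrl.append(ctrl[i - 1])
        else:                               # blended points in the affected span
            a = (u - knots[i]) / (knots[i + degree] - knots[i])
            new_ctrl.append([(1 - a) * x0 + a * x1
                             for x0, x1 in zip(ctrl[i - 1], ctrl[i])])
    return knots[:k + 1] + [u] + knots[k + 1:], new_ctrl
```

In an optimization loop such as the one described, the position `u` of each inserted knot would itself be a variable tuned (here, by the differential evolutionary algorithm) to maximize parametric effectiveness.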
Procedia PDF Downloads 109
832 Impact of Insect-Feeding and Fire-Heating Wounding on Wood Properties of Lodgepole Pine
Authors: Estelle Arbellay, Lori D. Daniels, Shawn D. Mansfield, Alice S. Chang
Abstract:
Mountain pine beetle (MPB) outbreaks are currently devastating lodgepole pine forests in western North America, which are also widely disturbed by frequent wildfires. Both MPB and fire can leave scars on lodgepole pine trees, thereby diminishing their commercial value and possibly compromising their utilization in solid wood products. In order to fully exploit the affected resource, it is crucial to understand how wounding from these two disturbance agents impacts wood properties. Moreover, previous research on lodgepole pine has focused solely on sound wood and stained wood resulting from the MPB-transmitted blue-stain fungi. By means of a quantitative multi-proxy approach, we tested the hypotheses that (i) wounding (of either MPB or fire origin) caused significant changes in the wood properties of lodgepole pine and that (ii) MPB-induced wound effects could differ from those induced by fire in type and magnitude. Pith-to-bark strips were extracted from 30 MPB scars and 30 fire scars. Strips were cut immediately adjacent to the wound margin and encompassed 12 rings of normal wood formed prior to wounding and 12 rings of wound wood formed after wounding. Wood properties evaluated within this 24-year window included ring width, relative wood density, cellulose crystallinity, fibre dimensions, and carbon and nitrogen concentrations. Methods used to measure these proxies at a (sub-)annual resolution included X-ray densitometry, X-ray diffraction, fibre quality analysis, and elemental analysis. Results showed a substantial growth release in wound wood compared to normal wood, as both earlywood and latewood width increased over a decade following wounding. Wound wood was also shown to have a significantly different latewood density from normal wood 4 years after wounding. Latewood density decreased in MPB scars, while the opposite was true in fire scars. By contrast, earlywood density presented only minor variations following wounding.
Cellulose crystallinity decreased in wound wood compared to normal wood, being especially diminished in MPB scars the first year after wounding. Fibre dimensions also decreased following wounding. However, carbon and nitrogen concentrations did not substantially differ between wound wood and normal wood. Nevertheless, insect-feeding and fire-heating wounding were shown to significantly alter most wood properties of lodgepole pine, as demonstrated by the existence of several morphological anomalies in wound wood. MPB and fire generally elicited similar anomalies, with the major exception of latewood density. In addition to providing quantitative criteria for differentiating between biotic (MPB) and abiotic (fire) disturbances, this study provides the wood industry with fundamental information on the physiological response of lodgepole pine to wounding in order to evaluate the utilization of scarred trees in solid wood products. Keywords: elemental analysis, fibre quality analysis, lodgepole pine, wood properties, wounding, X-ray densitometry, X-ray diffraction
Procedia PDF Downloads 319
831 An Evaluation of a First Year Introductory Statistics Course at a University in Jamaica
Authors: Ayesha M. Facey
Abstract:
The evaluation sought to determine the factors associated with the high failure rate among students taking a first-year introductory statistics course. Using Tyler's objective-based model, the main objectives were: to assess the effectiveness of the lecturer's teaching strategies; to determine the proportion of students who attend lectures and tutorials frequently and the impact of infrequent attendance on performance; to determine how the assigned activities assisted students' understanding of the course content; to ascertain the issues faced by students in understanding the course material and obtain possible solutions to these challenges; and to determine whether the learning outcomes had been achieved, based on an assessment of the second in-course examination. A quantitative survey research strategy was employed, and the study population was students enrolled in semester one of the 2015/2016 academic year. A convenience sampling approach was employed, resulting in a sample of 98 students. Primary data were collected using self-administered questionnaires over a one-week period. Secondary data were obtained from the results of the second in-course examination. Data were entered and analyzed in SPSS version 22, and both univariate and bivariate analyses were conducted on the information obtained from the questionnaires. Univariate analyses provided a description of the sample through means, standard deviations, and percentages, while bivariate analyses were done using Spearman's rho correlation coefficient and chi-square tests. For the secondary data, an item analysis was performed to obtain the reliability of the examination questions, the difficulty index, and the discrimination index. The examination results also provided information on the students' weak areas and highlighted the learning outcomes that were not achieved.
Findings revealed that students were more likely to participate in lectures than tutorials and that attendance was high for both lectures and tutorials. There was a significant relationship between participation in lectures and performance on the examination. However, a high proportion of students had been absent from three or more tutorials, as well as lectures. A higher proportion of students indicated that they sometimes completed the assignments given in lectures, while they rarely completed tutorial worksheets. Students who were more likely to complete their assignments were significantly more likely to perform well on the examination. Additionally, students faced a number of challenges in understanding the course content, and the topics of probability, the binomial distribution, and the normal distribution were the most challenging. The item analysis also highlighted these topics as problem areas. Difficulty with mathematics and with application and analysis were the major challenges faced by students, and most students indicated that some of the challenges could be alleviated if additional examples were worked in lectures and they were given more time to solve questions. Analysis of the examination results showed that a number of learning outcomes were not achieved for several topics. Based on the findings, recommendations were made to adjust grade allocations, the delivery of lectures, and methods of assessment. Keywords: evaluation, item analysis, Tyler's objective-based model, university statistics
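The difficulty and discrimination indices used in the item analysis have simple standard definitions. The Python sketch below (with hypothetical 0/1 item scores, not the study's data) computes the difficulty index as the proportion of students answering an item correctly, and the discrimination index from the conventional upper and lower 27% score groups:

```python
def item_analysis(responses):
    """responses: one list of 0/1 item scores per student (hypothetical data).
    Returns (difficulty, discrimination) per item."""
    totals = [sum(r) for r in responses]
    # rank students by total score, best first (sorted is stable for ties)
    order = sorted(range(len(responses)), key=lambda i: totals[i], reverse=True)
    g = max(1, round(0.27 * len(responses)))      # conventional 27% group size
    upper, lower = order[:g], order[-g:]
    stats = []
    for j in range(len(responses[0])):
        p = sum(r[j] for r in responses) / len(responses)        # difficulty index
        d = (sum(responses[i][j] for i in upper)
             - sum(responses[i][j] for i in lower)) / g          # discrimination index
        stats.append((round(p, 2), round(d, 2)))
    return stats
```

An item everyone answers correctly gets difficulty 1.0 and discrimination 0.0 (it tells the examiner nothing), while an item answered correctly only by strong students gets a discrimination near 1.0.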
Procedia PDF Downloads 189
830 Development of Three-Dimensional Bio-Reactor Using Magnetic Field Stimulation to Enhance PC12 Cell Axonal Extension
Authors: Eiji Nakamachi, Ryota Sakiyama, Koji Yamamoto, Yusuke Morita, Hidetoshi Sakamoto
Abstract:
The regeneration of nerve networks injured by cerebrovascular accidents is difficult because of the poor regeneration capability of the central nervous system, composed of the brain and the spinal cord. Recently, new regeneration methods, such as transplantation of nerve cells and supply of nerve nutritional factors, have been proposed and examined. However, many problems remain, such as the canceration of engrafted cells, and an efficacious treatment for the central nervous system is strongly required. Blackman proposed an electromagnetic stimulation method to enhance axonal nerve extension. In this study, we design and fabricate a new three-dimensional (3D) bio-reactor that can apply a uniform AC magnetic field stimulation to PC12 cells in the extracellular environment to enhance axonal nerve extension and 3D nerve network generation. Simultaneously, we measure the morphology of PC12 cell bodies, axons, and dendrites with a multiphoton excitation fluorescence microscope (MPM) and evaluate the effectiveness of the uniform AC magnetic stimulation in enhancing axonal nerve extension. First, we designed and fabricated the uniform AC magnetic field stimulation bio-reactor. For the AC magnetic stimulation system, we used laminated silicon steel sheets, which have high magnetic permeability, for the yoke structure of the 3D chamber. Next, we adopted a pole piece structure and installed coils of similar specification on both sides of the yoke. We searched for an optimum pole piece structure using magnetic field finite element (FE) analyses and the response surface methodology. We confirmed by FE analysis that the optimum 3D chamber structure showed a uniform magnetic flux density in the PC12 cell culture area. We then fabricated the uniform AC magnetic field stimulation bio-reactor to the analytically determined specifications, such as the chamber size and electromagnetic conditions.
We confirmed that the measured magnetic field in the chamber showed good agreement with the FE results. Second, we fabricated a dish that was set inside the uniform AC magnetic field stimulation bio-reactor. PC12 cells were disseminated in collagen gel and could be 3D cultured in the dish. The collagen gel disk, 6 mm in diameter and 3 mm in height, was set on a membrane filter located 4 mm above the bottom of the dish, and the dish was filled with culture medium. Finally, we evaluated the effectiveness of the uniform AC magnetic field stimulation in enhancing nerve axonal extension. We confirmed a 6.8% increase in the average axonal extension length of PC12 cells under the uniform AC magnetic field stimulation after 7 days of culture in our bio-reactor, and a 24.7% increase in the maximum axonal extension length. Further, we confirmed a 60% increase in the number of dendrites of PC12 cells under the uniform AC magnetic field stimulation. These results confirm the availability of our uniform AC magnetic stimulation bio-reactor for nerve axonal extension and nerve network generation. Keywords: nerve regeneration, axonal extension, PC12 cell, magnetic field, three-dimensional bio-reactor
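As a first-order sanity check of a yoke-and-coil design like the one described (the study's actual dimensioning was done by FE analysis and response surface methodology), a lumped magnetic-circuit estimate of the air-gap flux density can be sketched. All numbers in the usage below are hypothetical placeholders, not the bio-reactor's specifications:

```python
from math import pi

MU0 = 4e-7 * pi  # vacuum permeability (H/m)

def gap_flux_density(n_turns, current_a, gap_m, core_len_m, mu_r):
    """Series magnetic circuit with uniform cross-section: because the
    core's relative permeability mu_r is large, nearly all of the
    magnetomotive force N*I drops across the air gap, giving
    B = MU0 * N * I / (gap + core_length / mu_r)."""
    return MU0 * n_turns * current_a / (gap_m + core_len_m / mu_r)
```

For example, 500 turns at 1 A across a 10 mm gap with a 0.3 m silicon-steel path (mu_r assumed around 4000) yields a few tens of millitesla, which illustrates why the gap length dominates the design.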
Procedia PDF Downloads 166
829 Determination Optimum Strike Price of FX Option Call Spread with USD/IDR Volatility and Garman–Kohlhagen Model Analysis
Authors: Bangkit Adhi Nugraha, Bambang Suripto
Abstract:
In September 2016, Bank Indonesia (BI) released regulation No. 18/18/PBI/2016, which permits bank clients to use the FX option call spread on USD/IDR. Basically, in this product the client buys an FX call option (paying a premium) and sells an FX call option (receiving a premium) to protect against currency depreciation while capping the potential upside, at a cheap premium cost. BI classifies this product as a structured product, i.e., a combination of at least two financial instruments, either derivative or non-derivative. The call spread is the first structured product against IDR permitted by BI since 2009, in response to increased demand from Indonesian firms for FX hedging through derivatives to protect against market risk on their foreign currency assets or liabilities. The share of hedging products in the Indonesian FX market increased from 35% in 2015 to 40% in 2016, the majority in swap products (FX forward, FX swap, cross-currency swap). Swap pricing is driven by the interest rate differential of the currency pair. The cost of a swap is about 7% for USD/IDR, with a one-year USD/IDR volatility of 13%. That cost level makes swap products seem expensive to hedging buyers. Because the call spread cost (around 1.5-3%) is cheaper than the swap, most Indonesian firms use NDF FX call spreads on USD/IDR offshore, with an outstanding amount of around 10 billion USD. The cheaper cost of the call spread is its main advantage for hedging buyers. The problem arises because the BI regulation requires the call spread buyer to perform dynamic hedging. That means that if the call spread buyer chooses strike price 1 and strike price 2, and the USD/IDR exchange rate surpasses strike price 2, then the call spread buyer must buy another call spread with strike price 1' (strike price 1' = strike price 2) and strike price 2' (strike price 2' > strike price 1').
This could double the premium cost of the call spread, or more, and defeat the hedging buyer's purpose of finding the cheapest hedging cost. It is therefore crucial for the buyer to choose the optimum strike prices before entering into the transaction. To help hedging buyers find the optimum strike prices and avoid expensive multiple premium costs, we examined ten years (2005-2015) of historical USD/IDR volatility data and compared it with the price movement of the USD/IDR call spread using the Garman–Kohlhagen model (the common formula for FX option pricing). We used statistical tools to analyze data correlations, understand the nature of call spread price movements over the ten years, and determine the factors affecting price movements. We selected ranges of strike prices and tenors and calculated the probability that dynamic hedging would be triggered and how much it would cost. We found that the USD/IDR currency pair is too uncertain, making dynamic hedging riskier and more expensive. We validated this result using one year of data, which showed a small RMS error. The study results can be used to understand the nature of the FX call spread and to determine optimum strike prices for a hedging plan. Keywords: FX call spread USD/IDR, USD/IDR volatility statistical analysis, Garman–Kohlhagen model on FX option USD/IDR, Bank Indonesia regulation No. 18/18/PBI/2016
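The Garman–Kohlhagen price of an FX call, and the resulting net premium of a call spread, can be sketched as follows (Python; the spot, strikes, and rates used in the example are illustrative placeholders, not the study's calibrated values):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gk_call(spot, strike, t, r_dom, r_for, vol):
    """Garman-Kohlhagen FX call price, spot quoted as domestic per unit foreign:
    C = S e^{-r_f T} N(d1) - K e^{-r_d T} N(d2)."""
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * exp(-r_for * t) * norm_cdf(d1) - strike * exp(-r_dom * t) * norm_cdf(d2)

def call_spread_premium(spot, k1, k2, t, r_dom, r_for, vol):
    """Buy the call at k1, sell the call at k2 (k1 < k2): net premium paid."""
    return gk_call(spot, k1, t, r_dom, r_for, vol) - gk_call(spot, k2, t, r_dom, r_for, vol)
```

Scanning `call_spread_premium` over candidate (k1, k2) pairs, together with an estimate of the probability that spot breaches k2 (which triggers the mandated follow-on spread), is the kind of computation the strike-selection analysis requires.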
Procedia PDF Downloads 376
828 Decorative Plant Motifs in Traditional Art and Craft Practices: Pedagogical Perspectives
Authors: Geetanjali Sachdev
Abstract:
This paper explores the decorative uses of plant motifs and symbols in traditional Indian art and craft practices in order to assess their pedagogical significance within the context of plant study in higher education in art and design. It examines existing scholarship on decoration and plants in Indian art and craft practices. The impulse to elaborate upon an existing form or surface is intrinsic to many traditional Indian art and craft traditions, in which a deeply ingrained love of decoration exists. Indian craftsmen use an array of motifs and embellishments to adorn surfaces across a range of practices, and decoration is widely seen in textiles, jewellery, temple sculptures, vehicular art, architecture, and various other art, craft, and design traditions. Ornamentation in Indian cultural traditions has been attributed to religious and spiritual influences in the lives of India's art and craft practitioners. Through adornment, surfaces and objects were ritually transformed to function both spiritually and physically. Decorative formations facilitate spiritual development and attune our minds to concepts that support contemplation. Within practices of ornamentation and adornment, there is extensive use of botanical motifs, as Indian art and craft practitioners have historically drawn on nature as a source of inspiration. This is due to the centrality of agriculture in the lives of Indian people, as well as of religion, in which plants play a key role in rituals and festivals. Plant representations thus abound in two-dimensional and three-dimensional surface designs and patterns, with motifs ranging from realistic, highly stylized, curvilinear forms to geometric and abstract symbols. Existing scholarship reveals that these botanical embellishments reference a wide range of plants, including native and non-indigenous plants as well as imaginary and mythical plants.
Structural components of plant anatomy, such as leaves, stems, branches and buds, and flowers, are part of the repertoire of design motifs used, as are plant forms indicating different stages of growth, such as flowering buds and flowers in full bloom. Symmetry is a characteristic feature, and within the decorative register of various practices, plants are part of border zones and bands, connecting corners and all-over patterns, used as singular motifs and floral sprays on panels, and as elements within ornamental scenes. The results of the research indicate that decoration as a mode of inquiry into plants can serve as a platform to learn about local and global biodiversity and plant anatomy and develop artistic modes of thinking symbolically, metaphorically, imaginatively, and relationally about the plant world. The conclusion is drawn that engaging with ornamental modes of plant representation in traditional Indian art and craft practices is pedagogically significant for two reasons. Decoration as a mode of engagement cultivates both botanical and artistic understandings of plants. It also links learners with the indigenous art and craft traditions of their own culture. Keywords: art and design pedagogy, decoration, plant motifs, traditional art and craft
Procedia PDF Downloads 84
827 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking
Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz
Abstract:
Anatomy classes are crucial to the general education of medical students, yet learning anatomy is quite challenging and requires memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos, and many important structures are forgotten after several years. More interactive teaching methods, such as virtual reality, augmented reality, gamification, and motion sensors, are becoming popular, since such methods ease learning and help retain the material over longer terms. In this study, we designed a virtual-reality-based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the level of teaching and learning optimization. The head is one of the most complicated structures of human anatomy, with thousands of tiny, unique structures, which makes head anatomy one of the most difficult parts to understand during class sessions. We therefore developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to integrate the power of VR and gamification. The project was developed in the Unity game engine with an HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and marking it in the 3D world coordinate system. 3D model marking requires access to each unique structure in the subsystems listed above, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, so we worked on dividing such parts into subparts, which is very time-consuming. In order to subdivide monolithic parts, one must use an external modeling tool.
However, such tools generally come with steep learning curves, and seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction. However, outer colliders that cover inner trigger colliders cause overlaps, and these colliders repel each other. The third option was raycasting; however, due to its view-based nature, raycasting has some inherent problems. As the model rotates, the view direction changes very frequently, and directional computations become even harder. This is why we finally settled on the local coordinate system. Taking the pivot point of the model (the back of the nose) into consideration, each sub-structure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to a world position and checking its relation to the corresponding structure's local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures. Keywords: anatomy, e-learning, virtual reality, 3D model marking
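The local-coordinate marking described above amounts to undoing the model's world transform before comparing the pointer position against stored per-structure coordinates. A minimal sketch of that logic follows (Python here for illustration; the actual project is in Unity, where the engine's inverse-transform utilities would do this, and the marker names and tolerance are hypothetical):

```python
from math import sqrt

def world_to_local(p, pivot, rot, scale=1.0):
    """Convert a world-space point to model-local coordinates.
    rot is the model's 3x3 rotation matrix (list of rows); since it is
    orthonormal, its transpose is its inverse: local = R^T (p - pivot) / s."""
    v = [p[i] - pivot[i] for i in range(3)]                      # undo translation
    local = [sum(rot[r][c] * v[r] for r in range(3)) for c in range(3)]  # apply R^T
    return [x / scale for x in local]                            # undo uniform scale

def nearest_structure(p_local, markers, tol):
    """markers: {structure name: stored local coordinate}.
    Return the structure within tol of the converted pointer, or None."""
    best, best_d = None, tol
    for name, m in markers.items():
        d = sqrt(sum((p_local[i] - m[i]) ** 2 for i in range(3)))
        if d < best_d:
            best, best_d = name, d
    return best
```

Because the stored coordinates are local, they remain valid however the model is rotated or moved in the scene, which is exactly what the raycasting approach failed to provide.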
Procedia PDF Downloads 99
826 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
Authors: Francesca Marsili
Abstract:
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods require an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, which represents an obstacle to the dissemination of probabilistic methods. The framework within which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and determination of action and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among them, one that merits particular attention in relation to the object of this study is Case-Based Reasoning (CBR).
In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modeling of variables because: 1. engineers already draw an estimate of material properties from the experience collected during the assessment of similar structures, or from similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR represents a technique that may help to achieve this. Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures
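The Bayesian updating step stored in each case can be sketched for the simplest conjugate situation: a normal prior on a material-property mean with known measurement scatter. The sketch and all numbers in the test are illustrative, not taken from a real assessment:

```python
def update_normal_mean(prior_mu, prior_sd, data, noise_sd):
    """Conjugate normal-normal update for the mean of a material property
    (e.g. a strength parameter), with known measurement scatter noise_sd.
    The posterior precision is the sum of the prior and data precisions,
    and the posterior mean is their precision-weighted average."""
    n = len(data)
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = n / noise_sd ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mu = post_var * (prior_prec * prior_mu + data_prec * (sum(data) / n))
    return post_mu, post_var ** 0.5
```

In the CBR setting described, the prior would be retrieved from the most similar stored case, so that the posterior pulled toward the (few, non-destructive) measurements available for the building under assessment already starts from an informed estimate.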
Procedia PDF Downloads 336
825 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change
Authors: Damian Islas
Abstract:
Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective or not. Question (I) arises from the doubt that, since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that, since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information on how their objects are in themselves. One way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of the human mind. Semantically, scientific realism assumes that theoretical claims about reality have truth values and should thus be construed literally. Epistemologically, scientific realism holds that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded far beyond the realism versus antirealism debate. Structural realism represents a middle-ground position between the two, according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot be known. There are two possible versions of SR: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR). 
On ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that, when it comes to the unobservables theorized about within fundamental physics, relations exist but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory’s power to make accurate predictions. But a theory’s power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply from the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various and subtly different versions of the ESR and OSR stances, which show a progression, from eliminativist OSR to moderate OSR, of gradual increase in the ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to asserting that these relations between unobserved entities exist.
Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism
Procedia PDF Downloads 117
824 Meeting the Health Needs of Adolescents and Young Adults: Developing and Evaluating an Electronic Questionnaire and Health Report Form, for the Health Assessment at Youth Health Clinics – A Mixed Methods Project
Authors: P. V. Lostelius, M. Mattebo, E. Thors Adolfsson, A. Söderlund, Å. Revenäs
Abstract:
Adolescents are vulnerable in healthcare settings. Early detection of poor health in young people is important to support a good quality of life and adult social functioning. Youth Health Clinics (YHCs) in Sweden provide healthcare for young people aged 13-25 years. Using an overall mixed methods approach, the project’s main objective was to develop and evaluate an electronic health system, including a health questionnaire, a case report form, and an evaluation questionnaire, to assess young people’s health risks at an early stage and to increase health and quality of life. In total, 72 young people aged 16-23 years, eleven healthcare professionals, and eight researchers participated in the three project studies. Results from interviews with fifteen young people indicated that an electronic health questionnaire should include questions about physical, mental, and sexual health and about social support. It should specifically include questions about self-harm and suicide risk. The young people said that the questionnaire should be appealing, based on young people’s needs, and user-friendly. It was important that young people felt safe when responding to the questions, both physically and electronically. They also found that it had the potential to support the face-to-face meeting between young people and healthcare professionals. The electronic health report system was developed by the researchers through a structured development of the electronic health questionnaire and the construction of a case report form to present the results of the health questions, along with an electronic evaluation questionnaire. An information technology company finalized the development by digitalizing the electronic health system. Four young people, three healthcare professionals, and seven researchers evaluated its usability using interviews and a usability questionnaire. The electronic health questionnaire was found usable for YHCs but needed some clarifications. 
Essentially, the system succeeded in capturing the overall health of young people; it should be able to keep the interest of young people and has the potential to contribute to health assessment planning and to young people’s self-reflection when sharing vulnerable feelings with healthcare professionals. Ahead of effect studies, a feasibility study was performed by collecting electronic questionnaire data from 54 young people and interview data from eight healthcare professionals to assess the feasibility of the electronic evaluation questionnaire, the case report form, and the planned recruitment method. When merging the results, the research group found that the evaluation questionnaire and the health report were feasible for future research. However, the COVID-19 pandemic, commitment challenges, and drop-outs affected the recruitment of young people. Also, some healthcare professionals felt insecure about using computers and electronic devices and worried that their workload would increase. This project contributes knowledge about the development and use of electronic health tools for young people. Before implementation, clinical routines for using the health report system need to be considered.
Keywords: adolescent health, developmental studies, electronic health questionnaire, mixed methods research
Procedia PDF Downloads 106
823 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?
Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire
Abstract:
The starting point of this research is a contemporary capitalist paradox: there is a real scarcity of long term investment despite the boom in potential long term investors. This gap represents a major challenge: there are important needs for long term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, and information and communication networks. Moreover, the recent financial and sovereign debt crises, which have respectively reduced the ability of banking intermediaries and governments to provide long term financing, raise the questions of which actors are able to provide long term financing, of their methods of financing, and of the most appropriate forms of intermediation. The issue of long term financing is deemed very important by the EU Commission, which issued a 2013 Green Paper (GP) on long-term financing of the EU economy. Among other topics, the paper discusses the impact of recent regulatory reforms on long-term investment, both in terms of accounting (in particular fair value) and of prudential standards for banks. For banks, prudential and accounting standards are crucial. Fair value is indeed well adapted to the trading book in a short term view, but this method hardly suits a medium or long term portfolio. Banks’ ability to finance the economy and long term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to prudential standards, as the regulatory requirements of Basel III use accounting figures with a prudential filter to define capital needs and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can represent regulatory constraints on long term investing. 
The balance between financial stability and the need to stimulate long term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be “appropriately calibrated” and “progressively implemented” so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to ask to what extent the main regulatory requirements incite or constrain banks to finance long term projects. To that purpose, we study the 292 responses received by the EU Commission during the public consultation. We analyze these contributions, focusing on particular questions related to fair value accounting and prudential norms. We conduct a two-stage content analysis of the responses. First, we proceed to qualitative coding to identify the arguments of respondents; subsequently, we run quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the positions that a large panel of European stakeholders hold on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. This analysis allows us to identify some short term bias in banking regulation.
Keywords: Basel III, fair value, securitization, long term investment, banks, insurers
Procedia PDF Downloads 289
822 Artificial Cells Capable of Communication by Using Polymer Hydrogel
Authors: Qi Liu, Jiqin Yao, Xiaohu Zhou, Bo Zheng
Abstract:
The first artificial cell was produced by Thomas Chang in the 1950s when he was trying to make a mimic of red blood cells. Since then, many different types of artificial cells have been constructed using one of two approaches: a so-called bottom-up approach, which aims to create a cell from scratch, and a top-down approach, in which genes are sequentially knocked out from organisms until only the minimal genome required for sustaining life remains. In this project, the bottom-up approach was used to build a new cell-free expression system mimicking an artificial cell that is capable of protein expression and of communicating with other cells. The artificial cells constructed with the bottom-up approach are usually lipid vesicles, polymersomes, hydrogels, or aqueous droplets containing the nucleic acids and the transcription-translation machinery. However, lipid-vesicle-based artificial cells capable of communication present several issues for cell communication research: (1) the lipid vesicles normally lose important functions such as protein expression within a few hours; (2) the lipid membrane allows the permeation of only small molecules and limits the types of molecules that can be sensed and released to the surrounding environment for chemical communication; (3) the lipid vesicles are prone to rupture due to imbalances in osmotic pressure. To address these issues, hydrogel-based artificial cells were constructed in this work. To construct the artificial cell, a polyacrylamide hydrogel was functionalized with an Acrylate PEG Succinimidyl Carboxymethyl Ester (ACLT-PEG2000-SCM) moiety on the polymer backbone. Proteinaceous factors can then be immobilized on the polymer backbone by the reaction between the primary amines of proteins and the N-hydroxysuccinimide esters (NHS esters) of ACLT-PEG2000-SCM; the plasmid template and ribosomes were encapsulated inside the hydrogel particles. 
Because the artificial cell can continuously express protein with a supply of nutrients and energy, artificial cell-artificial cell communication and artificial cell-natural cell communication could be achieved by combining the artificial cell vector with designed plasmids. The plasmids were designed with reference to the quorum sensing (QS) system of bacteria, which relies largely on cognate acyl-homoserine lactone (AHL) / transcription factor pairs. In one communication pair, the “sender” is the artificial cell or natural cell that produces the AHL signal molecule by synthesizing the corresponding signal synthase, which catalyzes the conversion of S-adenosyl-L-methionine (SAM) into AHL, while the “receiver” is the artificial cell or natural cell that senses the quorum sensing signaling molecule from the “sender” and in turn expresses the gene of interest. In the experiment, GFP was first immobilized inside the hydrogel particle to prove that the functionalized hydrogel particles could be used for protein binding. After that, successful artificial cell-artificial cell and artificial cell-natural cell communication was demonstrated by recording the increase in fluorescence signal. The hydrogel-based artificial cell designed in this work can help in studying the complex communication systems of bacteria, and it can be further developed for therapeutic applications.
Keywords: artificial cell, cell-free system, gene circuit, synthetic biology
Procedia PDF Downloads 150
821 The Artificial Intelligence Driven Social Work
Authors: Avi Shrivastava
Abstract:
Our world continues to grapple with many social issues. Economic growth and scientific advancements have not completely eradicated poverty, homelessness, discrimination and bias, gender inequality, health issues, mental illness, addiction, and other social issues. So how do we improve the human condition in a world driven by advanced technology? The answer is simple: we will have to leverage technology to address some of the most important social challenges of the day. AI, or artificial intelligence, has emerged as a critical tool in the battle against issues that deprive marginalized and disadvantaged groups of the benefits that society offers. Social work professionals can transform lives by harnessing it. The lack of reliable data is one of the reasons why many social work projects fail. Social work professionals continue to rely on expensive and time-consuming primary data collection methods, such as observation, surveys, questionnaires, and interviews, instead of tapping into AI-based technology to generate useful real-time data and the necessary insights. By leveraging AI’s data-mining ability, we can gain a deeper understanding of how to solve complex social problems and change people’s lives. We can do the right work for the right people at the right time. For example, AI can enable social work professionals to focus their humanitarian efforts on some of the world’s poorest regions, where there is extreme poverty. An interdisciplinary team of Stanford scientists, Marshall Burke, Stefano Ermon, David Lobell, Michael Xie, and Neal Jean, used AI to spot global poverty zones; identifying such zones is a key step in the fight against poverty. The scientists combined daytime and nighttime satellite imagery with machine learning algorithms to predict poverty in Nigeria, Uganda, Tanzania, Rwanda, and Malawi. 
In an article published by Stanford News, “Stanford researchers use dark of night and machine learning”, Ermon explained that they provided the machine-learning system, an application of AI, with high-resolution satellite images and asked it to predict poverty in the African region: “The system essentially learned how to solve the problem by comparing those two sets of images [daytime and nighttime].” This is one example of how AI can be used by social work professionals to reach the regions that need their aid the most. AI can also help identify sources of inequality and conflict, which could reduce inequalities, according to a Nature study titled “The role of artificial intelligence in achieving the Sustainable Development Goals”, published in 2020. The report also notes that AI can help achieve 79 percent of the United Nations’ (UN) Sustainable Development Goals (SDGs). AI is impacting our everyday lives in multiple amazing ways, yet some people do not know much about it. If someone is not familiar with this technology, they may be reluctant to use it to solve social issues. So, before we talk more about the use of AI to accomplish social work objectives, let’s put the spotlight on how AI and social work can complement each other.
Keywords: social work, artificial intelligence, AI based social work, machine learning, technology
Procedia PDF Downloads 101
820 Influence of Cryo-Grinding on Particle Size Distribution of Proso Millet Bran Fraction
Authors: Maja Benkovic, Dubravka Novotni, Bojana Voucko, Duska Curic, Damir Jezek, Nikolina Cukelj
Abstract:
Cryo-grinding is an ultra-fine grinding method used in the pharmaceutical industry, in the production of herbs and spices, and in the production and handling of cereals, owing to its ability to produce powders with small particle sizes that maintain a favorable bioactive profile. The aim of this study was to determine the particle size distributions of the proso millet (Panicum miliaceum) bran fraction ground at cryogenic temperature (using liquid nitrogen (LN₂) cooling, T = -196 °C), in comparison to non-cooled grinding. Proso millet bran is primarily used as animal feed but has potential in food applications, either as a substrate for the extraction of bioactive compounds or as a raw material in the bakery industry. For both applications, finer particle sizes of the bran could be beneficial. Thus, millet bran was ground for 2, 4, 8, and 12 minutes in a ball mill (CryoMill, Retsch GmbH, Haan, Germany) at three grinding modes: (I) without cooling, (II) at cryo-temperature, and (III) at cryo-temperature with a 1-minute intermediate cryo-cooling step after every 2 minutes of grinding, which is usually applied when samples require longer grinding times. The sample was placed in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm). The oscillation frequency in all three modes was 30 Hz. Particle size distributions of the bran were determined by a laser diffraction particle sizing method (Mastersizer 2000) using the Scirocco 2000 dry dispersion unit (Malvern Instruments, Malvern, UK). Three main effects of the grinding set-up were visible from the results. Firstly, grinding time in all three modes had a significant effect on all particle size parameters: d(0.1), d(0.5), d(0.9), D[3,2], D[4,3], span, and specific surface area. Longer grinding times resulted in lower values of the above-listed parameters, e.g. 
the average d(0.5) of the sample (229.57±1.46 µm) dropped to 51.29±1.28 µm after 2 minutes of grinding without LN₂, and further to 43.00±1.33 µm after 4 minutes of grinding without LN₂. The only exception was the sample ground for 12 minutes without cooling, where an increase in particle diameters occurred (d(0.5)=62.85±2.20 µm), probably due to particles adhering to one another and forming larger clusters. Secondly, samples ground with LN₂ cooling exhibited smaller diameters than non-cooled samples. For example, after 8 minutes of non-cooled grinding d(0.5)=46.97±1.05 µm was achieved, while LN₂ cooling enabled the collection of particles with average sizes of d(0.5)=18.57±0.18 µm. Thirdly, the application of the intermediate cryo-cooling step resulted in particle diameters (d(0.5)=15.83±0.36 µm, 12 min of grinding) similar to those from cryo-milling without this step (d(0.5)=16.33±2.09 µm, 12 min of grinding). This indicates that intermediate cooling is not necessary for the current application, which consequently reduces the consumption of LN₂. These results point out the potentially beneficial effects of grinding millet bran at cryo-temperatures. Further research will show whether the lower particle size achieved in comparison to non-cooled grinding could result in increased bioavailability of bioactive compounds, as well as improved protein digestibility and solubility of the dietary fibers of the proso millet bran fraction.
Keywords: ball mill, cryo-milling, particle size distribution, proso millet (Panicum miliaceum) bran
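The distribution width parameter listed above, the span, is derived directly from the three percentile diameters. A minimal sketch, with hypothetical diameters rather than the study's measured values:

```python
# Span of a laser-diffraction particle size distribution, computed from
# its percentile diameters: span = (d(0.9) - d(0.1)) / d(0.5).
# The diameters below are hypothetical, not the measured values reported above.

def span(d10, d50, d90):
    """Distribution width relative to the median diameter."""
    return (d90 - d10) / d50

# e.g. a distribution with d(0.1)=8 um, d(0.5)=18 um, d(0.9)=45 um:
print(span(8.0, 18.0, 45.0))  # narrower distributions give a smaller span
```

A drop in span after longer grinding therefore means the powder is not only finer but also more uniform.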
Procedia PDF Downloads 144
819 Accuracy of Fitbit Charge 4 for Measuring Heart Rate in Parkinson’s Patients During Intense Exercise
Authors: Giulia Colonna, Jocelyn Hoye, Bart de Laat, Gelsina Stanley, Jose Key, Alaaddin Ibrahimy, Sule Tinaz, Evan D. Morris
Abstract:
Parkinson’s disease (PD) is the second most common neurodegenerative disease and affects approximately 1% of the world’s population. Increasing evidence suggests that aerobic physical exercise can be beneficial in mitigating both motor and non-motor symptoms of the disease. In a recent pilot study of the role of exercise in PD, we sought to confirm exercise intensity by monitoring heart rate (HR). For this purpose, we asked participants to wear a chest-strap heart rate monitor (Polar Electro Oy, Kempele). The device sometimes proved uncomfortable. Looking forward to larger clinical trials, it would be convenient to employ a more comfortable and user-friendly device. The Fitbit Charge 4 (Fitbit Inc.) is a potentially comfortable, user-friendly alternative, since it is a wrist-worn heart rate monitor. The Polar H10 has been used in large trials, and for our purposes we treated it as the gold standard for beat-to-beat period (R-R interval) assessment. Previous literature has shown that the Fitbit Charge 4 has accuracy comparable to the Polar H10 in healthy subjects. It has yet to be determined whether the Fitbit is as accurate as the Polar H10 in subjects with PD, or in clinical populations generally. Goal: To compare the Fitbit Charge 4 to the Polar H10 for monitoring HR in PD subjects engaging in an intensive exercise program. Methods: A total of 596 exercise sessions from 11 subjects (6 males) were recorded simultaneously by both devices. Subjects with early-stage PD (Hoehn & Yahr <= 2) were enrolled in a 6-month exercise training program designed for PD patients. Subjects participated in three one-hour exercise sessions per week. They wore both the Fitbit and the Polar H10 during each session. Sessions included rest, warm-up, intensive exercise, and cool-down periods. We calculated the bias in the HR via Fitbit under rest (5 min) and intensive exercise (20 min) by comparing the mean HR during each of the periods to the respective means measured by the Polar (HRFitbit - HRPolar). 
We also measured the sensitivity and specificity of the Fitbit for detecting HRs that exceed the threshold for intensive exercise, defined as 70% of an individual’s theoretical maximum HR. Different types of correlation between the two devices were investigated. Results: The mean bias was 1.68 bpm at rest and 6.29 bpm during high intensity exercise, with an overestimation by the Fitbit in both conditions. The mean bias of the Fitbit across both rest and intensive exercise periods was 3.98 bpm. The sensitivity of the device in identifying high intensity exercise sessions was 97.14%. The correlation between the two devices was non-linear, suggesting a tendency of the Fitbit to saturate at high values of HR. Conclusion: The performance of the Fitbit Charge 4 is comparable to that of the Polar H10 for assessing exercise intensity in a cohort of PD subjects. The device should be considered a reasonable replacement for the more cumbersome chest strap technology in future similar studies of clinical populations.
Keywords: fitbit, heart rate measurements, parkinson’s disease, wrist-wearable devices
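The bias and sensitivity computations described above can be sketched as follows. This is an illustrative sketch only: the HR series, the age value, and the common age-based HRmax formula (220 minus age) are assumptions, not data or methods confirmed by the study.

```python
# Sketch of the device-comparison metrics: mean bias (device - reference)
# and sensitivity for detecting HR above 70% of theoretical maximum HR.
# The HR samples and the 220-age HRmax estimate below are illustrative.

def mean_bias(hr_test, hr_ref):
    """Mean difference between test-device and reference HR, in bpm."""
    return sum(t - r for t, r in zip(hr_test, hr_ref)) / len(hr_test)

def sensitivity(hr_test, hr_ref, hr_max):
    """Fraction of reference samples above the intensity threshold
    (70% of hr_max) that the test device also flags as above it."""
    thr = 0.7 * hr_max
    true_pos = sum(1 for t, r in zip(hr_test, hr_ref) if r > thr and t > thr)
    actual_pos = sum(1 for r in hr_ref if r > thr)
    return true_pos / actual_pos if actual_pos else float("nan")

polar = [88, 120, 135, 142, 150, 96]    # reference chest strap (bpm)
fitbit = [90, 124, 140, 150, 158, 97]   # wrist device (bpm)
hr_max = 220 - 65                        # common age-based estimate, age 65
print(mean_bias(fitbit, polar))          # positive = overestimation
print(sensitivity(fitbit, polar, hr_max))
```

A positive mean bias, as in the study's results, means the wrist device reads high relative to the chest strap.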
Procedia PDF Downloads 106
818 Academic Knowledge Transfer Units in the Western Balkans: Building Service Capacity and Shaping the Business Model
Authors: Andrea Bikfalvi, Josep Llach, Ferran Lazaro, Bojan Jovanovski
Abstract:
Due to the continuous need to foster university-business cooperation in both developed and developing countries, some higher education institutions face the challenge of designing, piloting, operating, and consolidating knowledge and technology transfer units. University-business cooperation is at different maturity stages worldwide: some higher education institutions excel in these practices, many others could be qualified as intermediate, and some are situated at the very beginning of their knowledge transfer adventure. The latter face the imminent necessity of formally creating a technology transfer unit and drawing up its roadmap. The complexity of this operation is due to the various aspects that need to align and coordinate, including a major change in mission, vision, structure, priorities, and operations. Qualitative in approach, this study presents five case studies of higher education institutions located in the Western Balkans (two in Albania, two in Bosnia and Herzegovina, one in Montenegro), all fully immersed in the entrepreneurial journey of creating their knowledge and technology transfer units. The empirical evidence is developed in a pan-European project, illustratively called KnowHub (reconnecting universities and enterprises to unleash regional innovation and entrepreneurial activity), which is being implemented in three countries and has resulted in at least 15 pilot cooperation agreements between academia and business. Based on a peer-mentoring approach involving the more experienced and more mature technology transfer models of European partners located in Spain, Finland, and Austria, a series of initial lessons learned are already available. The findings show that each unit developed its own tailor-made approach to engaging with internal and external stakeholders and offering value to academic staff, students, and business partners. 
The technology underpinning KnowHub services and institutional commitment are found to be key success factors. Although specific strategies and plans differ, they are based on a general strategy jointly developed using common tools and methods of strategic planning and business modelling. The main output consists of providing good practice for the design, piloting, and initial operations of units aiming to fully valorise the knowledge and expertise available in academia. Policymakers can also find valuable hints on key aspects considered vital for initial operations. The value of this contribution lies in its focus on the intersection of three perspectives (service orientation, organisational innovation, business model), since previous research has relied on a single topic or on dual approaches, most frequently in the business context and less frequently in higher education.
Keywords: business model, capacity building, entrepreneurial education, knowledge transfer
Procedia PDF Downloads 139
817 Analytical, Numerical, and Experimental Research Approaches to Influence of Vibrations on Hydroelastic Processes in Centrifugal Pumps
Authors: Dinara F. Gaynutdinova, Vladimir Ya Modorsky, Nikolay A. Shevelev
Abstract:
The problem under research is that of unpredictable modes occurring in a two-stage centrifugal hydraulic pump as a result of hydraulic processes caused by vibrations of structural components. Numerical, analytical, and experimental approaches are considered. A hypothesis was developed that the problem of unpredictable pressure decrease at the second stage of centrifugal pumps is caused by cavitation effects occurring upon vibration. To date, the problem has been studied both experimentally and theoretically. The theoretical study was conducted numerically and analytically. Hydroelastic processes in the dynamic “liquid - deformed structure” system were numerically modelled and analysed. Using the ANSYS CFX engineering analysis package and the computing capacity of a supercomputer, the cavitation parameters were established to depend on the vibration parameters. The domain over which vibration amplitudes and frequencies influence the concentration of cavitation bubbles was determined. The obtained numerical solution was verified using the CFM program package developed at PNRPU. The package is based on a system of differential equations in hyperbolic and elliptic partial derivatives, solved with one of the finite-difference method options, the particle-in-cell method, which defines the problem solution algorithm. The obtained numerical solution was also verified analytically by model problem calculations using the known analytical solutions for in-pipe piston movement and cantilever rod end-face impact. An infrastructure consisting of an experimental installation for research on fast hydrodynamic processes and a supercomputer, connected by a high-speed network, was created to verify the obtained numerical solutions. Physical experiments included the measurement, recording, processing, and analysis of data for fast processes using a National Instruments signal measurement system and LabVIEW software. 
The model chamber end face oscillated during the physical experiments and thus loaded the hydraulic volume. The loading frequency varied from 0 to 5 kHz. The length of the operating chamber varied from 0.4 to 1.0 m. Additional loads weighed from 2 to 10 kg. The liquid column varied from 0.4 to 1 m in height. The liquid pressure history was registered. The experiment showed the dependence of the forced system oscillation amplitude on the loading frequency at various values of the operating chamber’s geometrical dimensions, the liquid column height, and the structure weight. Maximum pressure oscillation amplitudes (in the basic variant) were discovered at loading frequencies of approximately 1.5 kHz. These results match the analytical and numerical solutions in ANSYS and CFM.
Keywords: computing experiment, hydroelasticity, physical experiment, vibration
Procedia PDF Downloads 243
816 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling
Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather
Abstract:
New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles in the range of 0.5 to 1.5 wt.%, forming a so-called nano-heat-transfer fluid apt for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios. This includes both water (5 to 95 °C) and molten nitrate salt (220 to 500 °C) at various volume fractions ranging between 1% and 5%. The dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δt, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles, which involve Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms are responsible for the predictive model of aggregation of nano-suspensions. An energy transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. 
The simulation results confirm the effectiveness of the technique. The values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the roles of Brownian motion and the DLVO forces (comprising a repulsive electric double-layer interaction and an attractive van der Waals interaction) and their influence on the degree of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behavior of the nanofluids at the various particle concentrations considered. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about the nanofluids' heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling
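The abstract's energy transport-based conductivity prediction is not reproduced here; as an illustrative baseline only, the classical Maxwell effective-medium model gives the dilute-limit conductivity enhancement of a nanoparticle suspension. The property values below are assumptions for the sketch, not data from the study.

```python
def maxwell_k_eff(k_f: float, k_p: float, phi: float) -> float:
    """Maxwell effective-medium thermal conductivity of a dilute
    suspension: k_f and k_p are the fluid and particle conductivities
    [W/(m K)], phi is the particle volume fraction."""
    num = k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)
    den = k_p + 2.0 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# Illustrative values: water (~0.6 W/(m K)) with 3 vol% Al2O3 (~30 W/(m K))
print(round(maxwell_k_eff(0.6, 30.0, 0.03), 4))  # ≈ 0.6524, a ~9% enhancement
```

Effective-medium models of this kind ignore aggregation, which is precisely why particle-resolved simulations such as the study's are needed to capture the thermal behavior of agglomerating suspensions.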
Procedia PDF Downloads 190
815 Gathering Space after Disaster: Understanding the Communicative and Collective Dimensions of Resilience through Field Research across Time in Hurricane Impacted Regions of the United States
Authors: Jack L. Harris, Marya L. Doerfel, Hyunsook Youn, Minkyung Kim, Kautuki Sunil Jariwala
Abstract:
Organizational resilience refers to the ability to sustain business or general work functioning despite wide-scale interruptions. We focus on organizations and businesses as pillars of their communities, and on how they attempt to sustain work when a natural disaster impacts their surrounding regions and economies. While it may be more common to think of resilience as a trait possessed by an organization, an emerging area of research recognizes that, for organizations and businesses, resilience is a set of processes constituted through communication, social networks, and organizing. Indeed, five processes (robustness, rapidity, resourcefulness, redundancy, and external availability through social media) have been identified as critical to organizational resilience. These organizing mechanisms involve multi-level coordination, where individuals intersect with groups, organizations, and communities. Because such interactions often take the form of networks of people and organizations coordinating material resources, information, and support, they necessarily require some way to coordinate despite displacement. Little is known, however, about whether physical and digital spaces can substitute for one another. We are thus guided by the question: is digital space sufficient when disaster creates a scarcity of physical space? This study presents a cross-case comparison based on field research from four regions of the United States that were impacted by Hurricanes Katrina (2005), Sandy (2012), Maria (2017), and Harvey (2017). These four cases are used to extend the science of resilience by examining multi-level processes enacted by individuals, communities, and organizations that, together, contribute to the resilience of disaster-struck organizations, businesses, and their communities. 
Using field research on organizations and businesses impacted by the four hurricanes, we code data from interviews, participant observations, field notes, and document analysis drawn from New Orleans (post-Katrina), coastal New Jersey (post-Sandy), Houston, Texas (post-Harvey), and the lower Keys of Florida (post-Maria). This paper identifies an additional organizing mechanism, networked gathering spaces, in which citizens and organizations alike coordinate and facilitate information sharing, material resource distribution, and social support. Findings show that digital space alone is not a sufficient substitute for effectively sustaining organizational resilience during a disaster. Because the data are qualitative, we expand on this finding with the specific ways in which organizations, and the people who lead them, worked around the problem of scarce space. We propose that gatherings after disaster are a sixth mechanism that contributes to organizational resilience.
Keywords: communication, coordination, disaster management, information and communication technologies, interorganizational relationships, resilience, work
Procedia PDF Downloads 171