Search results for: environmental control
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16555

1315 Analysis of Superconducting and Optical Properties in Atomic Layer Deposition and Sputtered Thin Films for Next-Generation Single-Photon Detectors

Authors: Nidhi Choudhary, Silke A. Peeters, Ciaran T. Lennon, Dmytro Besprozvannyy, Harm C. M. Knoops, Robert H. Hadfield

Abstract:

Superconducting Nanowire Single Photon Detectors (SNSPDs) have become leading devices in quantum optics and photonics, known for their exceptional efficiency in detecting single photons from ultraviolet to mid-infrared wavelengths with minimal dark counts, low noise, and reduced timing jitter. Recent advancements in materials science focus attention on refractory metal nitride thin films such as NbN and NbTiN to enhance the optical properties and superconducting performance of SNSPDs, opening the way for next-generation detectors. These films have been deposited by several different techniques, such as atomic layer deposition (ALD), PlasmaPro advanced plasma processing (ASP), and magnetron sputtering. The fabrication flexibility of these films enables precise control over morphology, crystallinity, stoichiometry and optical properties, which is crucial for optimising SNSPD performance. Hence, it is imperative to study the optical and superconducting properties of these materials across a wide range of wavelengths. This study provides a comprehensive analysis of the optical and superconducting properties of some important materials in this category (NbN, NbTiN) deposited by different methods. Using variable angle spectroscopic ellipsometry (VASE), we measured the refractive index, extinction coefficient, and absorption coefficient across a wide wavelength range (200-1700 nm) to enhance light confinement for optical communication devices. The critical temperature and sheet resistance were measured using a four-probe method in a custom-built, cryogen-free cooling system with a Sumitomo RDK-101D cold head and CNA-11C compressor. Our results indicate that ALD-deposited NbN shows a higher refractive index and extinction coefficient in the near-infrared region (~1500 nm) than sputtered NbN of the same thickness. Further, the optical properties of PlasmaPro ASP-deposited NbTiN were analysed at different substrate bias voltages and different thicknesses. Across the substrate bias range studied (0 V - 150 V), the maximum values of the refractive index and extinction coefficient were observed for substrate biasing of 50-80 V. The optical properties of sputtered NbN films were also investigated for different substrate temperatures during deposition (100 °C-500 °C): the higher the substrate temperature during deposition, the higher the refractive index and extinction coefficient. Among all our superconducting thin films, ALD-deposited NbN possesses the highest critical temperature (~12 K), compared to sputtered films (~8 K) and PlasmaPro ASP-deposited films (~5 K).
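The absorption coefficient mentioned above follows directly from the VASE-measured extinction coefficient via α = 4πk/λ. The snippet below is a minimal sketch of that conversion only; the wavelength grid and extinction values are hypothetical placeholders, not measurements from the paper.

```python
import numpy as np

# Minimal sketch: derive the absorption coefficient from VASE-style (n, k) data
# using alpha = 4 * pi * k / lambda. The wavelength grid and k values below are
# hypothetical placeholders, not measurements from the paper.
wavelength_nm = np.array([200.0, 600.0, 1000.0, 1550.0, 1700.0])
extinction_k = np.array([2.1, 2.4, 2.8, 3.1, 3.2])   # assumed example values

wavelength_m = wavelength_nm * 1e-9
alpha_per_m = 4.0 * np.pi * extinction_k / wavelength_m   # absorption coefficient [1/m]

for lam, a in zip(wavelength_nm, alpha_per_m):
    print(f"{lam:7.1f} nm  ->  alpha = {a:.3e} 1/m")
```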

Keywords: optical communication, thin films, superconductivity, atomic layer deposition (ALD), niobium nitride (NbN), niobium titanium nitride (NbTiN), SNSPD, superconducting detector, photon-counting.

Procedia PDF Downloads 17
1314 Detection Kit of Type 1 Diabetes Mellitus with Autoimmune Marker GAD65 (Glutamic Acid Decarboxylase)

Authors: Aulanni’am Aulanni’am

Abstract:

The incidence of Diabetes Mellitus (DM) is progressively increasing; it has become a serious problem in Indonesia and is a disease the government has prioritized to address. The longer a person suffers from diabetes, the more likely complications are to develop, particularly in patients whose disease is not well controlled. Therefore, early diagnosis needs to be carried out in the pre-clinical phase of the disease. In this pre-clinical phase, destruction of pancreatic beta cells is already occurring, beta cell function is declining, and the signs of the autoimmune reactions associated with beta cell destruction appear. Type 1 DM is a multifactorial disease triggered by genetic and environmental factors, which leads to the destruction of pancreatic beta cells. An early marker of "beta cell autoreactivity" is the synthesis of autoantibodies against the 65-kDa protein (GAD65), a molecule that can be detected early in the disease pathomechanism. The importance of early diagnosis of diabetic patients in the pre-disease phase is to determine progression towards the onset of pancreatic beta cell destruction and to take precautions. However, because the examination is very expensive (USD 150 per test), anti-GAD65 antibody testing cannot be carried out routinely in most, or even in all, laboratories in Indonesia. Therefore, production in Indonesia of a rapid test based on recombinant human GAD65 protein with the reverse-flow immunochromatography technique is believed to reduce costs and improve the quality of care of patients with diabetes in Indonesia. This rapid test product innovation is very simple and suitable for screening and routine testing of GAD65 autoantibodies. In the blood serum of patients with diabetes caused by autoimmunity, GAD65 autoantibodies are a major serologic marker for detecting the autoimmune reaction because of their stable concentration levels; GAD65 autoantibodies can be found up to 10 years before the clinical symptoms of diabetes. Early diagnosis therefore focuses on detecting GAD65 autoantibodies, given their high specificity and sensitivity. GAD65 autoantibodies circulating in the blood are a major indicator of the destruction of the islet cells of the pancreas. Research in collaboration with Biofarma has produced a GAD65 autoantibody-based rapid test; the product has had its soft launch and has been tested, showing a sensitivity of 100 percent and a specificity between 90 and 96% compared with the gold standard (an imported product based on the ELISA method).
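The validation figures above (sensitivity and specificity against an ELISA gold standard) come from a standard confusion-matrix calculation. The sketch below illustrates that calculation with hypothetical counts chosen only to land in the same range as the reported values; it is not the kit's actual validation data.

```python
# Minimal sketch of how sensitivity and specificity are computed against a gold
# standard (here, an ELISA reference). The counts below are hypothetical and only
# illustrate the calculation, not the actual validation data of the kit.
true_positive  = 50   # rapid test positive, ELISA positive
false_negative = 0    # rapid test negative, ELISA positive
true_negative  = 48   # rapid test negative, ELISA negative
false_positive = 2    # rapid test positive, ELISA negative

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)

print(f"sensitivity = {sensitivity:.1%}")   # 100.0% with these example counts
print(f"specificity = {specificity:.1%}")   # 96.0% with these example counts
```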

Keywords: diabetes mellitus, GAD65 autoantibodies, rapid test, sensitivity, specificity

Procedia PDF Downloads 266
1313 The Effect of Ambient Temperature on the Performance of the Simple and Modified Cycle Gas Turbine Plants

Authors: Ogbe E. E., Ossia C. V., Saturday E. G., Ezekwe M. C.

Abstract:

The disparity in power output between a simple and a modified gas turbine plant is noticeable when the gas turbine functions under local environmental conditions that deviate from the standard ISO specifications. Extensive research and literature have demonstrated a well-known inverse relationship between ambient temperature and the power output of a gas turbine plant. In this study, the Omotosho gas turbine plant was modified into three different configurations. The purpose of the modifications is to improve performance and reduce fuel consumption and emission rate. Aspen HYSYS software was used to simulate both the simple (Omotosho) and the three modified gas turbine plants. The input parameters considered include ambient temperature, air mass flow rate, fuel mass flow rate, water mass flow rate, turbine inlet temperature, compressor efficiency, and turbine efficiency, while the output parameters considered are thermal efficiency, specific fuel consumption, heat rate, emission rate, compressor power, turbine power and power output. The three modified gas turbine power plants incorporate an inlet air cooling system and a heat recovery steam generator. The variations between the modifications are due to additional components or enhancements alongside the incorporated inlet air cooling system and heat recovery steam generator: the first modification has an additional turbine, the second modification has an additional combustion chamber, and the third modification has an additional turbine and combustion chamber. This paper clearly shows the ambient temperature effects on both the simple and the three modified gas turbine plants. For every 10 K increase in ambient temperature, there is an approximate reduction in power output of 3977 kW, 4795 kW, 4681 kW, and 4793 kW for the simple gas turbine and the first, second, and third modifications, respectively. Also, for every 10 K increase in temperature, there is a thermal efficiency decrease of 1.22%, 1.45%, 1.43%, and 1.44% for the simple gas turbine and the first, second, and third modifications, respectively. A low ambient temperature will also help save fuel, which matters given the present high price of fuel in Nigeria: for every 10 K increase in temperature, specific fuel consumption increases by 0.0074 kg/kWh, 0.0051 kg/kWh, 0.0061 kg/kWh, and 0.0057 kg/kWh for the simple gas turbine and the first, second, and third modifications, respectively. These findings will aid in accurately evaluating local power generating plants, particularly in hotter regions, for installing gas turbine inlet air cooling (GTIAC) systems.
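The per-10 K sensitivities reported above lend themselves to a simple linear estimate of how each configuration degrades with ambient temperature. The sketch below encodes only those reported sensitivities; the reference temperature and baseline power output are assumed placeholders, not values from the Aspen HYSYS models.

```python
# Sketch of the linear ambient-temperature sensitivities reported in the abstract:
# power output drops and specific fuel consumption rises per 10 K of ambient
# temperature. Only the per-10-K sensitivities come from the reported results;
# the reference temperature and baseline power output are assumed placeholders.
SENSITIVITY = {
    # plant: (delta_power_kW_per_10K, delta_eff_pct_per_10K, delta_sfc_kg_per_kWh_per_10K)
    "simple":              (-3977, -1.22, 0.0074),
    "first modification":  (-4795, -1.45, 0.0051),
    "second modification": (-4681, -1.43, 0.0061),
    "third modification":  (-4793, -1.44, 0.0057),
}

def estimate(plant, t_ambient_K, t_ref_K=288.15, p_ref_kW=100_000.0):
    """Linear estimate of performance change relative to an assumed reference point."""
    d_power, d_eff, d_sfc = SENSITIVITY[plant]
    steps = (t_ambient_K - t_ref_K) / 10.0
    return {
        "power_kW": p_ref_kW + d_power * steps,
        "efficiency_change_pct": d_eff * steps,
        "sfc_change_kg_per_kWh": d_sfc * steps,
    }

print(estimate("simple", t_ambient_K=308.15))  # 20 K above the ISO reference
```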

Keywords: Aspen HYSYS software, Brayton Cycle, modified gas turbine, power plant, simple gas turbine, thermal efficiency.

Procedia PDF Downloads 24
1312 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction

Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso

Abstract:

The rampant and unintended spread of urban areas has increased the artificial component features in the land cover of the countryside and brought forth the urban heat island (UHI) effect. This paves the way to a wide range of negative influences on human health and the environment, commonly relating to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in understanding the interaction between ground surfaces and the local temperature. At the moment, the depiction of land surface temperature (LST) at the city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate as support for efficient mitigation of and adaptation to the surface urban heat island (SUHI). Thus, this study attempts to apply Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products in mapping out a quality automated LST model and a crop-level land cover classification at a local scale, through a theoretical and algorithm-based approach utilizing the principles of data analysis applied to a multi-dimensional image object model. The paper also aims to explore the relationship between the derived LST and the land cover classification. The results of the presented model showed the ability of comprehensive data analysis and GIS functionalities, with the integration of an object-based image analysis (OBIA) approach, to automate complex map production processes with considerable efficiency and high accuracy. The findings may potentially lead to expanded investigation of the temporal dynamics of the land surface UHI. It is worthwhile to note that studying the environmental significance of these interactions through the combined application of remote sensing, geographic information tools, mathematical morphology and data analysis can provide microclimate perception, awareness and improved decision-making for land use planning and characterization at the local and neighborhood scale. As a result, it can aid in facilitating problem identification and support mitigations and adaptations more efficiently.
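A common single-channel way to derive LST from Landsat 8 is to convert band-10 digital numbers to top-of-atmosphere radiance, then to brightness temperature, and finally to an emissivity-corrected surface temperature. The sketch below illustrates that generic chain, not necessarily the exact algorithm of this study; the calibration constants are the usual band-10 values from the Landsat 8 metadata file, and the emissivity is an assumed example.

```python
import numpy as np

# Illustrative single-channel LST chain for Landsat 8 band 10; not necessarily
# the algorithm of the paper. ML/AL/K1/K2 are the typical band-10 values from
# the Landsat 8 metadata file; the emissivity value is an assumed example.
ML, AL = 3.342e-4, 0.10          # radiance rescaling gain / offset
K1, K2 = 774.8853, 1321.0789     # band-10 thermal conversion constants

def land_surface_temperature(dn, emissivity=0.97):
    """Return LST in degrees Celsius from raw band-10 digital numbers."""
    radiance = ML * dn + AL                              # TOA spectral radiance
    bt_kelvin = K2 / np.log(K1 / radiance + 1.0)         # brightness temperature
    lam, rho = 10.895e-6, 1.438e-2                       # band wavelength [m], h*c/k_B [m K]
    lst_kelvin = bt_kelvin / (1.0 + (lam * bt_kelvin / rho) * np.log(emissivity))
    return lst_kelvin - 273.15

print(land_surface_temperature(np.array([24000, 26000, 28000])))
```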

Keywords: LiDAR, OBIA, remote sensing, local scale

Procedia PDF Downloads 280
1311 Multiple Intelligences to Improve Pronunciation

Authors: Jean Pierre Ribeiro Daquila

Abstract:

This paper aims to analyze the use of the Theory of Multiple Intelligences as a tool to facilitate students' learning. This theory, proposed by the American psychologist and educator Howard Gardner, was first established in 1983 and advocates that human beings possess eight intelligences, not only one, as was defended by psychologists prior to his theory. These intelligences are bodily-kinesthetic, musical, linguistic, logical-mathematical, spatial, interpersonal, intrapersonal, and naturalist. This paper will focus on bodily-kinesthetic intelligence. Spatial and bodily-kinesthetic intelligences are exercised by athletes, dancers, and others who use their bodies in ways that exceed normal abilities. These intelligences are closely related: a quarterback or a ballet dancer needs to have both an awareness of body motions and abilities as well as a sense of the space involved in the action. Nevertheless, there are many reasons which make classical ballet more integrated with other intelligences. Ballet dancers make it look effortless as they move across the stage, from the lifts to the toe points; therefore, there is acting both in the performance of the repertoire and in hiding the pain or physical stress. The ballet dancer has to have great mathematical intelligence to perform a fast allegro, for instance: each movement has to be executed in a specific millisecond. Flamenco dancers need to rely on their mathematical abilities as well, as the footwork requires the ability to make half, two, three, four or even six movements in just one beat. However, the precision of the arm movements is freer than in ballet; for this reason, ballet dancers need to be more holistically aware of their movements, and our experiment will therefore test whether this greater attention required of ballet dancers leads to better results in the training sessions when compared to flamenco dancers. An experiment will be carried out in this study by training a group of ballet dancers (minimum of four years of dancing experience – experimental group 1) and a group of flamenco dancers (minimum of four years of dancing experience – experimental group 2). Both experimental groups will be trained in two different domains – phonetics and chemistry – to examine whether there is a significant improvement in these areas compared to the control group (a group of regular students who will receive the same training through a traditional method). However, this paper will focus on phonetic training. Experimental group 1 will be trained with the aid of classical music plus bodily work. Experimental group 2 will be trained with flamenco rhythm and kinesthetic work. We would like to highlight that this study takes dance as an example of a possible area of strength; nonetheless, other types of arts can and should be used to support students, such as drama, creative writing, music and others. The main aim of this work is to suggest that other intelligences, in the case of this study bodily-kinesthetic, can be used to help improve pronunciation.

Keywords: multiple intelligences, pronunciation, effective pronunciation trainings, short drills, musical intelligence, bodily-kinesthetic intelligence

Procedia PDF Downloads 89
1310 Work-Related Shoulder Lesions and Labor Lawsuits in Brazil: Cross-Sectional Study on Worker Health Actions Developed by Employers

Authors: Reinaldo Biscaro, Luciano R. Ferreira, Leonardo C. Biscaro, Raphael C. Biscaro, Isabela S. Vasconcelos, Laura C. R. Ferreira, Cristiano M. Galhardi, Erica P. Baciuk

Abstract:

Introduction: The objective of the present study was to present the profile of workers with shoulder disorders related to labor lawsuits in Brazil. The study analyzed the association between the workers' health and the actions performed by the companies in relation to the injured professionals. The research method was a retrospective, cross-sectional, quantitative database analysis. Documents of labor lawsuits involving shoulder injury registered at the Regional Labor Court of the 15th Region (Campinas, São Paulo) and submitted to medical examination were evaluated for the period from 2012 to 2015. The data collected were age, gender, onset of symptoms, length of service, current occupation, type of shoulder injury, referred complaints, type of acromion, associated or related diseases, and company actions such as the CAT (workplace accident communication) and compliance with NR7 by the organization (Environmental Risk Prevention Program - PPRA and Medical Coordination Program in Occupational Health - PCMSO). Results: Of the 93 workers evaluated, there was a prevalence of men (58.1%), with a mean age of 42.6 years, and 54.8% were in the age group 35-49 years. Regarding length of service in the company, 66.7% had worked for more than 5 years. There was an association between gender and current occupational status (p < 0.005), with a predominance of women in household occupation (13 vs. 2) and a predominance of men who were unemployed and seeking work (24 vs. 10) or reintegrated to work by judicial decision (8 vs. 2). There was also a correlation between pain and functional limitation (p < 0.01). There was a positive association of PPRA with the complaint of functional limitation and a negative association with pain (p < 0.04). There was also a correlation between sedentary lifestyle and the presence of PCMSO and PPRA (p < 0.04), and with the absence of CAT in the companies (p < 0.001). It was concluded that the appearance or aggravation of osseous and articular shoulder pathologies in workers who have filed labor lawsuits seems to be associated with individual habits or inadequate labor practices. These data can help prevent the occurrence of these lesions by implementing local health promotion policies at work.
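The gender-by-occupational-status association reported above is the kind of result typically checked with a chi-square test of independence. The sketch below rebuilds an approximate 2x3 table from the counts quoted in the abstract (women vs. men for household work, job seeking, and judicial reintegration) purely as an illustration; it is not the study's full contingency analysis.

```python
from scipy.stats import chi2_contingency

# Illustrative chi-square test of independence on an approximate contingency
# table assembled from the counts quoted in the abstract.
table = [
    [13, 10, 2],   # women: household, seeking work, reintegrated
    [2, 24, 8],    # men:   household, seeking work, reintegrated
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```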

Keywords: work-related accidents, cross-sectional study, shoulder lesions, labor lawsuits

Procedia PDF Downloads 214
1309 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Issues of obtaining unbiased data on the status of pipeline systems for oil and oil product transportation become especially important when laying and operating pipelines under severe natural and climatic conditions. Essential attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operation conditions of the pipeline systems are regularly monitored and changes in permafrost soil and hydrological operation conditions are accounted for. One of the main reasons for emergency situations to appear is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aerial visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. The unified environment of a geographic information system (GIS) is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient simulation of main pipelines (both in 2D and 3D), analysis of situations, and selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences. The specifics of such systems include: multi-dimensional simulation of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. We see one of the most interesting possibilities for using the monitoring results in the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography data, and data from in-line inspection and instrument measurements. The resulting 3D model shall be the basis of an information system providing the means to store and process data of geotechnical observations with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating their consequences if they still occur.
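One elementary GIS building block behind such a monitoring system is relating field observations to a corridor around the pipeline axis. The sketch below shows that operation with hypothetical planar coordinates (e.g., metres in a projected CRS); it is an illustration of the concept, not part of the system described above.

```python
from shapely.geometry import LineString, Point

# Minimal sketch: flag monitoring observations that fall inside a buffer corridor
# around a pipeline axis. Coordinates are hypothetical planar values, not data
# from a real pipeline.
pipeline = LineString([(0, 0), (500, 120), (1200, 300)])
corridor = pipeline.buffer(50.0)          # 50 m monitoring corridor around the axis

observations = {
    "thermokarst_sag_01": Point(480, 150),
    "erosion_site_07": Point(900, 80),
}

for name, point in observations.items():
    inside = corridor.contains(point)
    distance_m = point.distance(pipeline)
    print(f"{name}: within corridor = {inside}, distance to axis = {distance_m:.1f} m")
```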

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 187
1308 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits from employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms like artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate estimates in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
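As an illustration of the first step described above, the sketch below fits a random forest to predict ln(PGA) from magnitude, distance, and Vs30. The synthetic data only mimic the general magnitude-scaling and distance-attenuation trends; they are not the dataset or model of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative data-driven ground-motion model: predict ln(PGA) from magnitude,
# distance, and site stiffness (Vs30) with a random forest. The synthetic data
# below only mimic generic trends; they are not the dataset used in the study.
rng = np.random.default_rng(0)
n = 2000
magnitude = rng.uniform(3.0, 7.5, n)
distance_km = rng.uniform(5.0, 200.0, n)
vs30 = rng.uniform(180.0, 760.0, n)

ln_pga = (1.2 * magnitude - 1.6 * np.log(distance_km)
          - 0.4 * np.log(vs30 / 500.0) - 4.0 + rng.normal(0.0, 0.5, n))

X = np.column_stack([magnitude, distance_km, vs30])
X_train, X_test, y_train, y_test = train_test_split(X, ln_pga, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```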

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 101
1307 Characterization of Phenolic Compounds from Carménère Wines during Aging with Oak Wood (Staves, Chips and Barrels)

Authors: E. Obreque-Slier, J. Laqui-Estaña, A. Peña-Neira, M. Medel-Marabolí

Abstract:

Wine is an important source of polyphenols. Red wines show important concentrations of nonflavonoid (gallic acid, ellagic acid, caffeic acid and coumaric acid) and flavonoid compounds [(+)-catechin, (-)-epicatechin, (+)-gallocatechin and (-)-epigallocatechin]. However, a significant variability in the quantitative and qualitative distribution of chemical constituents in wine has to be expected, depending on an array of important factors, such as the varietal differences of Vitis vinifera and cultural practices. It has been observed that Carménère grapes present a differential composition and evolution of phenolic compounds when compared to other varieties and specifically to Cabernet Sauvignon grapes. Likewise, among the cultural practices, aging in contact with oak wood is a highly relevant factor. The extraction of different polyphenolic compounds from oak wood into wine during its ageing process produces both qualitative and quantitative changes. Recently, many new techniques have been introduced in winemaking. One of these involves putting new pieces of wood (oak chips or inner staves) into inert containers. It offers some distinct and previously unavailable flavour advantages, as well as new options in wine handling. To the best of our knowledge, there is no information about the behaviour of Carménère wines (the emblematic Chilean cultivar) in contact with oak wood. In addition, the effect of aging time and wood product (barrels, chips or staves) on the phenolic composition of Carménère wines has not been studied. This study aims at characterizing the condensed and hydrolyzable tannins of Carménère wines during aging with staves, chips and barrels of French oak wood. The experimental design was completely randomized with two independent factors: aging time (0-12 months) and different formats of wood (barrel, chips and staves). The wines were characterized by spectrophotometric (total tannins and fractionation of proanthocyanidins into monomers, oligomers and polymers) and HPLC-DAD (ellagitannins) analyses. The wines in contact with the different oak wood products showed a similar content of total tannins during the study, while the control wine (without oak wood) presented a lower content of these compounds. In addition, it was observed that the polymeric proanthocyanidin fraction was the most abundant, while the monomeric fraction was the least abundant fraction in all treatments on both sampling dates. However, significant differences in each fraction were observed between wines in contact with barrels, chips, and staves on the two sampling dates. Finally, the wine from barrels presented the highest ellagitannin content from the fourth to the last sampling date. In conclusion, the use of alternative formats of oak wood affects the chemical composition of wines during aging, and these enological products are an interesting alternative to contribute tannins to wine.

Keywords: enological inputs, oak wood aging, polyphenols, red wine

Procedia PDF Downloads 154
1306 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine

Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski

Abstract:

Increasingly stringent emission standards for passenger cars require us to find alternative drives. The share of electric vehicles in sales of new cars increases every year. However, their performance and, above all, their range cannot yet be successfully compared to those of cars with a traditional internal combustion engine. Battery recharging lasts hours, which is hard to accept given the short time needed to refill a fuel tank. Therefore, ways to reduce the adverse features of cars equipped only with electric motors are being sought. One of the methods is a combination of an electric engine as the main source of power and a small internal combustion engine as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range and low emissions of toxic substances. For several years, leading automotive manufacturers like Mazda and Audi, together with top companies in the automotive industry, e.g., AVL, have developed electric drive systems capable of recharging themselves while driving, known as range extenders. The electricity generator is powered by a Wankel engine that had seemed to pass into history. This small, lightweight engine with a rotating piston and a very low vibration level turned out to be an excellent power source in such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages, like high fuel consumption, high emission of toxic substances, or the short lifetime typical of its traditional application. Operation of the engine at a constant rotational speed enables a significant increase in its lifetime, and its small external dimensions enable compact modules to drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on an engine dynamometer with an eddy current brake and the necessary measuring apparatus. The research object was the Aixro XR50 rotary engine with the electronic power supply developed at the Lublin University of Technology. The load torque of the engine was altered during the research by means of the eddy current brake, capable of applying any number of load cycles. The parameters recorded included speed and torque as well as the position of the throttle in the inlet system. Increasing and decreasing the load did not significantly change the engine speed, which means that the control algorithm parameters were correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.
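The abstract does not spell out the control law itself, so the following is only a hypothetical sketch of a simple PI speed governor of the kind that could hold a generator engine at a constant setpoint by trimming throttle position against load changes; the gains, the throttle-to-torque map, and the first-order engine response are all assumed values, not the controller developed in the study.

```python
# Hypothetical PI speed governor sketch: hold a constant rpm setpoint by trimming
# the throttle against load changes. Gains, limits, and the crude engine model
# below are assumed demo values.
def pi_speed_governor(rpm_setpoint, rpm_measured, integral, dt, kp=0.002, ki=0.001):
    error = rpm_setpoint - rpm_measured
    integral += error * dt
    throttle = kp * error + ki * integral
    throttle = min(max(throttle, 0.0), 1.0)   # clamp throttle to 0..100 %
    return throttle, integral

# Toy closed-loop test: a crude first-order engine model with a step load increase.
rpm, integral, dt = 5000.0, 0.0, 0.01
for step in range(2000):
    load_torque = 20.0 if step < 1000 else 30.0          # N*m, step change at t = 10 s
    throttle, integral = pi_speed_governor(5000.0, rpm, integral, dt)
    engine_torque = 60.0 * throttle                      # assumed throttle-to-torque map
    rpm += (engine_torque - load_torque) * dt * 50.0     # assumed inertia scaling
    if step % 500 == 0:
        print(f"t = {step*dt:5.1f} s  rpm = {rpm:7.1f}  throttle = {throttle:.2f}")
```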

Keywords: electric vehicle, power generator, range extender, Wankel engine

Procedia PDF Downloads 153
1305 Feasibility and Energy Efficiency Analysis of Chilled Water Radiant Cooling System of Office Apartment in Nigeria’s Tropical Climate City

Authors: Rasaq Adekunle Olabomi

Abstract:

More than 30% of global building energy consumption is attributed to heating, ventilation and air-conditioning (HVAC) due to increasing urbanization and the need for more personal comfort. While heating is predominant in temperate regions (especially during winter), comfort cooling is constantly needed in tropical regions such as Nigeria. This makes cooling a major contributor to the peak electrical load in the tropics. Meanwhile, the high solar energy availability in the tropical climate region presents high application potential for solar thermal cooling systems; moreover, the need for cooling mostly coincides with solar energy availability. In addition to their huge energy consumption, conventional (compressor-type) air-conditioning systems mostly use refrigerants that are regarded as environmentally unfriendly because of their ozone depletion potential; this has made alternative cooling systems popular at the present time. The better thermal capacity and lower pumping power requirement of chilled water compared with chilled air have also made chilled water a preferred option over chilled air cooling systems. Radiant floor chilled water cooling is considered particularly suitable for spaces such as meeting rooms, seminar halls, auditoriums, and airport arrival and departure halls, among others. This study analyzed the feasibility and energy efficiency of solar thermal chilled water for radiant floor cooling of an office apartment in a tropical-climate city in Nigeria, with a view to recommending its up-scaling. The analysis considered the weather parameters, including available solar irradiance (kWh/m2-day), as well as the technical details of the solar thermal cooling system, to determine the feasibility. Project cost, energy savings, emission reduction potential and the cost-to-benefit ratio are used to analyze the energy efficiency as well as the viability of the cooling system. The techno-economic analysis of the proposed system, carried out using RETScreen software, shows its viability, but a SWOT analysis of the policy and institutional framework to promote solar energy utilization for cooling systems identifies weaknesses such as poor infrastructure and inadequate local capacity for technological development as major challenges.
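The viability indicators mentioned above (payback, cost-to-benefit ratio, avoided emissions) reduce to a few simple ratios. The sketch below shows those calculations with entirely hypothetical figures; it does not reproduce the RETScreen results of the study.

```python
# Hypothetical cost-benefit sketch of the kind of indicators RETScreen reports
# (simple payback, benefit-cost ratio, avoided CO2). All figures below are
# assumed placeholders, not results from the study.
project_cost_usd = 45_000.0
annual_electricity_saved_kwh = 28_000.0       # cooling load shifted off the grid
electricity_price_usd_per_kwh = 0.10
grid_emission_factor_kg_per_kwh = 0.44        # assumed grid CO2 intensity
project_life_years = 20

annual_saving_usd = annual_electricity_saved_kwh * electricity_price_usd_per_kwh
simple_payback_years = project_cost_usd / annual_saving_usd
benefit_cost_ratio = (annual_saving_usd * project_life_years) / project_cost_usd
annual_co2_avoided_t = annual_electricity_saved_kwh * grid_emission_factor_kg_per_kwh / 1000.0

print(f"simple payback     : {simple_payback_years:.1f} years")
print(f"benefit-cost ratio : {benefit_cost_ratio:.2f}")
print(f"CO2 avoided        : {annual_co2_avoided_t:.1f} t/year")
```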

Keywords: cooling load, absorption cooling system, coefficient of performance, radiant floor, cost saving, emission reduction

Procedia PDF Downloads 15
1304 Public Participation in Political Transformation: From the Coup D’etat in 2014 to the Events Leading up to the Proposed Election in 2018 in Thailand

Authors: Pataramon Satalak, Sakrit Isariyanon, Teerapong Puripanik

Abstract:

This article uses the recent events in Thailand as a case study for examining why democratic transition is necessary during political upheaval to ensure that the people's power remains unaffected. After seizing power in May 2014, the military, backed by anti-government protestors, selected and established their own system to govern the country. They set up the National Council for Peace and Order (NCPO), which established a People's Assembly, aiming to reach a compromise between the conflicting opinions of former pro-government and anti-government protesters. It plans to achieve this through political reform before returning sovereign power to the people via an election in 2018. If a governmental authority is not representative of the people (e.g. a military government), it does not count as a legitimate government. During the last four years of military government, from May 2014 to January 2018, their rule of Thailand has been widely controversial, specifically regarding their commitment to democracy, human rights violations and their manipulation of the rule of law. Democratic legitimacy relies not only on established mechanisms for public participation (like referendums or elections) but also on public participation based on accessible and educational reform (often via NGOs) to ensure that the free and fair will of the people can be expressed. Through their actions over the last three years, the Thai military government has damaged both of these components, impacting future public participation in politics. The authors make some observations about the specific actions the military government has taken to erode the democratic legitimacy of future public participation: the increasing dominance of military courts over civil courts; civil society's limited involvement in political activities; the drafting of a new constitution, the attempt to muster support for it through referenda, and the consequences for delaying the organic law-making process; the structure of the legislative powers (the Senate and the members of parliament); and the control of people's basic freedoms of expression, movement and assembly in political activities. One clear consequence of the military government's specific actions over the last three years is the increased uncertainty amongst Thai people that their fundamental freedoms and political rights will be respected in the future. This will directly affect their participation in future democratic processes. The military government's actions (e.g. their response to the UN representatives) will also have influenced potential international engagement in Thai civil society to help educate disadvantaged people about their rights and their participation in the political arena. These actions challenge the democratic idea that there should be a checking and balancing of power between people and government. These examples provide evidence that a democratic transition is crucial during any process of political transformation.

Keywords: political transformation, public participation, Thailand coup d'etat 2014, election 2018

Procedia PDF Downloads 147
1303 Autism and Work, From the Perception of People Inserted in the Work

Authors: Nilson Rogério Da Silva, Ingrid Casagrande, Isabela Chicarelli Amaro Santos

Abstract:

Introduction: People with Autism Spectrum Disorder (ASD) may face difficulties in social inclusion in different segments of society, especially in entering and staying at work. In Brazil, although there is legislation that equates ASD with disability, the number of people with ASD at work is still low. The United Nations estimates that more than 80 percent of adults with autism are jobless. In Brazil, the scenario is even more nebulous because there is no control and tracking of accurate data on the number of individuals with autism and how many of them are in the labor market. Pereira and Goyos (2019) found that there is practically no scientific production about people with ASD in the labor market. Objective: To describe the experience of people with ASD in work, the facilitators and difficulties found in professional practice, and the strategies used to keep their jobs. Methodology: The research was approved by the Research Ethics Committee. The inclusion criteria were that the participant should agree to participate voluntarily, be over 18 years of age and have had some experience with the labor market; the exclusion criteria were being under 18 years of age and never having worked. Four people with a diagnosis of ASD, aged 22 to 32 years, participated in the research. For data collection, an interview script was used that addressed: 1) General characteristics of the participants; 2) Family support; 3) Schooling; 4) Insertion in the labor market; 5) Exercise of professional activity; 6) Future and autism; 7) Possible coping strategies. For the analysis of the data obtained, the interviews were fully transcribed and the technique of Content Analysis was applied. Results: The participants reported problems in different areas. In the school environment: difficulty in social relationships, bullying, and lack of adaptation of the school curriculum and classroom structure. At college: difficulty in following the activities, doing group work, meeting deadlines and networking. At work: little adaptation of the work environment, difficulty in establishing good professional bonds, difficulty in accepting changes in routine or operational processes, and difficulty in understanding veiled social rules. Discussion: The lack of knowledge about what disability is and who the disabled person is leads to misconceptions and negative attitudes regarding their ability to work; in this context, people with disabilities need to constantly prove that they are able to work, study and develop as human beings, which can be classified as ableism. The adaptations and the use of technologies to facilitate the performance of people with ASD, although guaranteed in national legislation, are not always available, highlighting the difficulties and prejudice. Final Considerations: The entry and permanence of people with ASD at work still constitute a challenge to be overcome, involving changes in society in general, in companies, families and government agencies.

Keywords: autism spectrum disorder (ASD), work, disability, autism

Procedia PDF Downloads 76
1302 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory's efficiency in the processing, testing, and analysis of specimens. Often pathologists have difficulty in pinpointing and anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to activate a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
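The paper plans a JavaScript single-page application; the snippet below is only a language-neutral sketch (written in Python for brevity) of the traffic-light rule described above, with assumed thresholds separating normal, slow, and critical flow.

```python
# Sketch of the traffic-light colour-coding rule described in the abstract.
# The thresholds separating normal, slow, and critical flow are assumed values.
def stage_status(specimens_waiting, pct_on_time):
    """Map a workflow stage's load and timeliness onto the traffic-light colours."""
    if specimens_waiting > 80 or pct_on_time < 60:
        return "red"      # critical flow: bottleneck needs immediate attention
    if specimens_waiting > 40 or pct_on_time < 85:
        return "orange"   # slow flow: stage at risk of running late
    return "green"        # normal flow

print(stage_status(specimens_waiting=25, pct_on_time=97))   # green
print(stage_status(specimens_waiting=55, pct_on_time=90))   # orange
print(stage_status(specimens_waiting=90, pct_on_time=50))   # red
```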

Keywords: laboratory-process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 283
1301 Influence of Laser Treatment on the Growth of Sprouts of Different Wheat Varieties

Authors: N. Bakradze, T. Dumbadze, N. Gagelidze, L. Amiranashvili, A. D. L. Batako

Abstract:

Cereals are considered a strategic product in human life, and demand for them is increasing with the growth of the world population. There is always a shortage of cereals in various areas of the globe. For example, Georgia's own production meets only 15-20% of the demand for grain, despite the fact that the country is considered one of the main centers of wheat origin. In Georgia, there are 14 types of wheat with more than 150 subspecies, including 40 subspecies of common wheat. Increasing wheat production is important for the country. One of the ways to solve the problem is to develop and implement new, environmentally and economically acceptable technologies. Such technologies include pre-sowing treatment of seed with a laser and with the associative nitrogen-fixing bacterium Azospirillum brasilense. Among the varieties grown in the region are Dika and Lomtagora, which are among the most common in Georgia. Dika is a frost-resistant wheat with a high ability to adapt to the environment, resistant to lodging, and is sown in the highlands. Dika's excellent properties are due to its strong immunity to fungal diseases; Dika grains are rich in protein and lysine. Lomtagora 126 is distinguished by its winter hardiness and drought resistance, and it has a great ability to germinate. Lomtagora is characterized by a strong root system and a high budding capacity. It is an early, lodging-resistant variety, easy to thresh and suitable for mechanized harvesting, with large red grains. The plant is moderately resistant to fungal diseases. This paper presents some preliminary experimental results in which a continuous CO2 laser at a power density of 25-40 W/cm2 was used to irradiate grains at a feed rate of 10-15 cm/sec. The treatment was carried out on grains of Triticum aestivum L. var. lutescens (local variety name Lomtagora 126) and Triticum carthlicum Nevski (local variety name Dika). The grains were also treated with an Azospirillum brasilense isolate (10^8-10^9 CFU/ml) obtained from the rhizosphere of wheat. It was observed that the germination of the wheat was not significantly influenced by either the laser or the bacterial treatment. In the case of the variety Lomtagora 126, irradiation at an angle of 90° slightly improved growth within 38 days of sowing, and irradiation at an angle of 90°+1 improved it by 23%. The treatment of seeds with Azospirillum brasilense in both the irradiated and non-irradiated variants led to an improvement in the growth of sprouts: in the case of treatment with Azospirillum alone, by 22%, and with joint treatment of seeds with Azospirillum and irradiation, by 29%. In the case of Dika wheat, irradiation alone led to an increase in growth of 8-9%, and the combined treatment of seeds with Azospirillum and irradiation of 10-15%, in comparison with the control. Thus, the combined treatment of the different wheat varieties provided the best effect on growth. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (SRNSFG) (Grant number CARYS 19-573).

Keywords: laser treatment, Azospirillum brasilense, seeds, wheat varieties, Lomtagora, Dika

Procedia PDF Downloads 141
1300 Shale Gas Accumulation of Over-Mature Cambrian Niutitang Formation Shale in Structure-Complicated Area, Southeastern Margin of Upper Yangtze, China

Authors: Chao Yang, Jinchuan Zhang, Yongqiang Xiong

Abstract:

The Lower Cambrian Niutitang Formation shale (NFS), deposited in a marine deep-shelf environment in the Southeast Upper Yangtze (SUY), possesses an excellent source rock basis for shale gas generation; however, it is currently challenged by being over-mature with strong tectonic deformation, leading to much uncertainty about its gas-bearing potential. With emphasis on the shale gas enrichment of the NFS, analyses were made based on the regional gas-bearing differences obtained from field gas-desorption testing of 18 geological survey wells across the study area. Results show that the NFS bears a low gas content of 0.2-2.5 m³/t, and the eastern region of the SUY is higher than the western region in gas content. Moreover, the methane fraction presents a similar regional differentiation, with the western region at less than 10 vol.% while the eastern region is generally more than 70 vol.%. Through the analysis of geological theory, the following conclusions are drawn. The depositional environment determines the gas-enriching zones: in the western region, the Dengying Formation underlying the NFS in unconformity contact was mainly platform-facies dolomite with caves and thereby bears poor gas-sealing ability, whereas the Laobao Formation underlying the NFS in the eastern region was a set of siliceous rocks of shelf-slope facies, which can effectively prevent the shale gas from escaping from the NFS. The tectonic conditions control the gas-enriching bands in the SUY, which is located in the fold zones formed by the thrust of the South China plate towards the Sichuan Basin. Compared with the western region located in the trough-like folds, the eastern region at the fold-thrust belts was uplifted early and deformed weakly, resulting in a relatively lower maturity level and relatively slight tectonic deformation of the NFS. Faults determine whether shale gas can accumulate at a large scale: four deep and large normal faults in the study area cut through the Niutitang Formation to the Sinian strata, directly causing a large spillover of natural gas in the adjacent areas. For the secondary faults developed within the shale formation, the reverse faults generally have a positive influence on shale gas accumulation, while the normal faults have the opposite influence. Overall, the shale gas enrichment targets of the NFS are the areas with a certain thickness of siliceous rocks at the base of the Niutitang Formation, near the margin of the paleo-uplift, with less developed faults. These findings provide direction for shale gas exploration in South China and also provide references for areas with similar geological conditions all over the world.

Keywords: over-mature marine shale, shale gas accumulation, structure-complicated area, Southeast Upper Yangtze

Procedia PDF Downloads 139
1299 Factors Affecting the Success of Premarital Screening Services in Middle Eastern Countries

Authors: Wafa Al Jabri

Abstract:

Background: In Middle Eastern countries (MECs), there is a high prevalence of genetic blood disorders (GBDs), particularly sickle cell disease and thalassemia. The GBDs are considered a major public health concern that places a huge burden on individuals, families, communities, and health care systems. The high rates of consanguineous marriage, along with the unacceptability of terminating at-risk pregnancies in MECs, reduce the possible solutions for controlling the high prevalence of GBDs. Since the early 1970s, most MECs have introduced premarital screening services (PSS) as a preventive measure to identify asymptomatic carriers of GBDs and to provide genetic counseling to help couples plan for healthy families; yet the success rate of PSS is very low. Purpose: This paper aims to highlight the factors that affect the success of PSS in MECs. Methods: An integrative review of articles located in CINAHL, PubMed, SCOPUS, and MedLine was carried out using the following terms: "premarital screening," "success," "effectiveness," and "genetic blood disorders". In addition, a hand search of reference lists and Google searches were conducted to find studies that did not appear in the primary database searches. Only studies conducted in MECs and published after 2010 were included. Studies not published in English were excluded. Results: Eighteen articles were included in the review. The results showed that PSS in most MECs was successful in achieving its objective of identifying high-risk marriages; however, the service failed to meet its ultimate goal of reducing the prevalence of GBDs. Various factors seem to hinder the success of PSS, including poor public awareness, late timing of the screening, culture and social stigma, lack of prenatal diagnosis services and therapeutic abortion, emotional factors, religious beliefs, and lack of genetic counseling services. However, poor public awareness, late timing of the screening, religious misbeliefs, and the lack of adequate counseling services were the most common barriers identified. Conclusion and Implications: The review helps provide a framework for an effective preventive measure to reduce the prevalence of GBDs in MECs. This framework focuses primarily on overcoming the identified barriers by providing effective health education programs in collaboration with religious leaders, offering the screening test to young adults at an earlier stage, and tailoring genetic counseling to consider people's values, beliefs, and preferences.

Keywords: premarital screening, middle east, genetic blood disorders, factors

Procedia PDF Downloads 80
1298 Reasons for Food Losses and Waste in Basic Production of Meat Sector in Poland

Authors: Sylwia Laba, Robert Laba, Krystian Szczepanski, Mikolaj Niedek, Anna Kaminska-Dworznicka

Abstract:

Meat and meat products are considered the food products with the most unfavorable effect on the environment, which requires rational management of these products and of the waste originating throughout the whole chain of manufacture, processing, transport, and trade of meat. From the economic and environmental viewpoints, it is important to limit food losses and food waste in the whole meat sector. The basic production link includes obtaining raw meat, i.e., animal breeding, management, and transport of animals to the slaughterhouse. Food is any substance or product intended to be consumed by humans. For the needs of the present studies, it was determined at which point the raw material is considered food: the moment when the animals are prepared for loading with the aim of being transported to a slaughterhouse and utilized for food purposes. The aim of the studies was to determine the reasons for loss generation in the basic production of the meat sector in Poland during the years 2017-2018. The studies on food losses and waste in basic production in the meat sector were carried out in two areas: red meat, i.e., pork and beef, and poultry meat. The studies of basic production were conducted in the period March-May 2019 across the whole country on a representative sample of 278 farms, including 102 in pork production, 55 in beef production, and 121 in poultry meat production. The surveys were carried out with questionnaires using the PAPI (Paper & Pen Personal Interview) method; the pollsters conducted direct questionnaire interviews. The results indicate that no losses were recorded during the preparation, loading, and transport of the animals to the slaughterhouse in 33% of the farms visited. On the farms where losses were indicated, crushing and suffocation occurring during the production of pigs, beef cattle and poultry were the main reasons for these losses, constituting ca. 40% of the reported reasons. The stress generated by loading and transport accounted for 16-17% of the reported causes of losses (depending on the season of the year). In the case of poultry production, an additional 10.7% of losses in 2017 were caused by inappropriate conditions of loading and transportation, and 11.8% in 2018. Diseases were one of the reasons for the losses in pork and beef production (7% of the losses). The losses and waste generated during livestock production and in meat processing and trade cannot be managed or recovered; they have to be disposed of. It is, therefore, important to prevent and minimize the losses throughout the whole production chain. It is possible to introduce appropriate measures, connected mainly with appropriate conditions and methods of animal loading and transport.

Keywords: food losses, food waste, livestock production, meat sector

Procedia PDF Downloads 140
1297 A Q-Methodology Approach for the Evaluation of Land Administration Mergers

Authors: Tsitsi Nyukurayi Muparari, Walter Timo De Vries, Jaap Zevenbergen

Abstract:

The nature of land administration accommodates diversity in terms of both spatial data handling activities and the expertise involved, which supposedly aims to satisfy the unpredictable demands for land data and the diverse demands of the customers arising from the land. However, it is known that strategic decisions on restructuring are in most cases resisted in favour of complex structures that strive to accommodate professional diversity and diverse roles in the field of land administration. Yet despite this widely accepted knowledge, there is scant theoretical knowledge concerning the psychological methodologies that can extract the deeper perceptions of the diverse spatial expertise in order to explain the invisible control arm behind the polarised reception of ideas of change. This paper evaluates Q methodology in the context of a cadastre and land registry merger (under one agency), using the Swedish cadastral system as a case study. More precisely, the aim of this paper is to evaluate the effectiveness of Q methodology for modelling the diverse psychological perceptions of spatial professionals involved in the widely contested decision of merging the cadastre and land registry components of land administration. The empirical approach prescribed by Q methodology starts with concourse development, followed by the design of the statements and the Q-sort instrument, selection of the participants, the Q-sorting exercise, factor extraction with PQMethod and, finally, narrative development by the logic of abduction. The paper uses 36 statements developed from the dominant competing values theory, which stands out for its reliability and validity, purposively selects 19 participants to do the Q-sorting exercise, proceeds with factor extraction from the diversity using the varimax and judgemental rotation provided by PQMethod, and effects the narrative construction using the logic of abduction. The findings from the diverse perceptions of cadastral professionals regarding the merger of the land registry and cadastre components in Sweden's mapping agency (Lantmäteriet) show that the focus is rather inclined towards perfecting the relationship between legal expertise and technical spatial expertise. There is much emphasis on tradition, loyalty and communication attributes, which concern the organisation's internal environment, rather than on the innovation and market attributes that reveal customer behaviour and the needs arising from changing humankind-land relationships. It can be concluded that Q methodology offers effective tools for pursuing a psychological approach to the evaluation and gradation of strategic change decisions by extracting the local perceptions of spatial expertise.
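As an illustration of the factor-extraction step described above, the sketch below reduces a participants-by-statements Q-sort matrix to a few varimax-rotated factors. The random matrix stands in for real Q-sorts, and PQMethod's centroid or principal-components extraction differs in detail, so treat this only as a sketch of the idea.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Sketch of Q-methodology factor extraction: participants are the variables,
# statements are the observations, loadings group participants into factors.
# The random Q-sorts below are placeholders for real data.
rng = np.random.default_rng(42)
n_participants, n_statements, n_factors = 19, 36, 3

# Fake Q-sorts on a forced quasi-normal scale from -4 to +4.
q_sorts = rng.integers(-4, 5, size=(n_participants, n_statements)).astype(float)

fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
fa.fit(q_sorts.T)                      # statements as rows, participants as variables

loadings = fa.components_.T            # participant loadings on each factor
for i, row in enumerate(loadings, start=1):
    dominant = int(np.argmax(np.abs(row))) + 1
    print(f"participant {i:2d}: loadings = {np.round(row, 2)}  -> factor {dominant}")
```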

Keywords: cadastre, factor extraction, land administration merger, land registry, q-methodology, rotation

Procedia PDF Downloads 189
1296 Exploring Coexisting Opportunity of Earthquake Risk and Urban Growth

Authors: Chang Hsueh-Sheng, Chen Tzu-Ling

Abstract:

Earthquakes are unpredictable natural disasters, and intensive earthquakes have caused serious impacts on social-economic systems and on environmental and social resilience, further increasing vulnerability. Earthquakes do not kill people; buildings do. Buildings located near earthquake-prone areas and constructed on poorer soils may suffer earthquake-induced ground damage. In addition, many existing buildings were built before improved seismic provisions were required in building codes, and inappropriate land use in densely populated areas can result in much more serious earthquake disasters. Indeed, not only do earthquake disasters seriously impact the urban environment, but urban growth may also increase vulnerability. Since the 1980s, 'cutting down risks and vulnerability' has been brought up in both urban planning and architecture; such a concept goes well beyond the retrofitting of seismic damage, seismic resistance, and better anti-seismic structures, and has become the key action in disaster mitigation. Land use planning and zoning are two critical non-structural measures for controlling physical development, yet it is difficult for zoning boards and governing bodies to restrict the development of questionable lands to uses compatible with the hazard without credible earthquake loss projections. Therefore, identifying potential earthquake exposure, vulnerable people and places, and urban development areas could provide strong supporting information for decision makers. Taiwan is located on the Pacific Ring of Fire, a seismically active zone, and some of the active faults lie close to densely populated and highly developed built environments in the cities. Therefore, this study attempts, from the perspective of carrying capacity, to draft a micro-zonation according to both a vulnerability index and an urban growth index, while considering the spatial variance of multiple factors via geographically weighted principal components analysis (GWPCA). The purpose of this study is to construct supporting information for decision makers on revising existing zoning in high-risk areas towards more compatible uses, and for the public on managing risks.
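
As a rough illustration of how geographically weighted principal components can capture spatial variance in multiple indicators, the sketch below computes, at each site, the eigendecomposition of a Gaussian-kernel-weighted covariance matrix. The coordinates, the four indicator variables, and the bandwidth are hypothetical placeholders; the abstract does not specify the study's actual GWPCA implementation or calibration.

```python
import numpy as np

def gwpca(coords, X, bandwidth):
    """Geographically weighted PCA: at each site, eigendecompose a Gaussian-kernel
    weighted covariance matrix of the standardized indicator variables."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    out = []
    for site in coords:
        d2 = np.sum((coords - site) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
        w /= w.sum()
        Xc = Xs - w @ Xs                           # locally weighted centering
        cov = (Xc * w[:, None]).T @ Xc             # locally weighted covariance
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]
        out.append((eigvals[order], eigvecs[:, order]))
    return out

# Hypothetical inputs: 200 grid cells with coordinates and 4 indicators
# (e.g., building age, population density, soil score, slope) -- placeholder data.
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(200, 2))
X = rng.normal(size=(200, 4))

local = gwpca(coords, X, bandwidth=2.0)
ev, _ = local[0]
print("local PC1 variance share at site 0:", round(float(ev[0] / ev.sum()), 3))
```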

Keywords: earthquake disaster, vulnerability, urban growth, carrying capacity, geographically weighted principal components analysis (GWPCA), bivariate spatial association statistic

Procedia PDF Downloads 254
1295 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES computes the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing the Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress because the resolution of the SFS dynamics is insufficient. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as the filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
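
To make the a priori testing idea concrete, the sketch below runs a one-dimensional deconvolution-style test: a synthetic velocity field is filtered with a Gaussian filter, the filter is inverted in spectral space (with a small regularization cutoff), and the reconstructed SFS stress is correlated against the exact one. The spectrum, filter width, and regularization threshold are illustrative assumptions; this is not the authors' DDM formulation or their filter set.

```python
import numpy as np

# A 1-D a priori test of a deconvolution-style SFS stress estimate, assuming a
# Gaussian filter with transfer function G(k) = exp(-k^2 * Delta^2 / 24).
N = 512
L = 2.0 * np.pi
k = np.fft.fftfreq(N, d=L / N) * 2.0 * np.pi

# Synthetic velocity field: random phases with a decaying ~k^(-5/3) energy spectrum.
rng = np.random.default_rng(2)
amp = np.zeros(N)
nz = k != 0
amp[nz] = np.abs(k[nz]) ** (-5.0 / 6.0)
u = np.real(np.fft.ifft(amp * np.exp(2j * np.pi * rng.random(N))))
u /= u.std()

Delta = 8.0 * (L / N)                              # filter width (plays the FGR role)
G = np.exp(-(k ** 2) * Delta ** 2 / 24.0)          # Gaussian filter transfer function

def filt(f):
    return np.real(np.fft.ifft(np.fft.fft(f) * G))

u_bar = filt(u)

# Direct deconvolution: invert the filter in spectral space, regularized where G ~ 0.
G_inv = np.where(G > 1e-6, 1.0 / G, 0.0)
u_star = np.real(np.fft.ifft(np.fft.fft(u_bar) * G_inv))

tau_true = filt(u * u) - u_bar * u_bar             # exact SFS stress
tau_ddm = filt(u_star * u_star) - u_bar * u_bar    # deconvolution-based estimate

print("a priori SFS stress correlation:", round(np.corrcoef(tau_true, tau_ddm)[0, 1], 3))
```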

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 72
1294 Leadership Styles and Adoption of Risk Governance in Insurance and Energy Industry: A Comparative Case Study

Authors: Ruchi Agarwal

Abstract:

In today's world, companies operate in dynamic, uncertain, and ambiguous business environments. Globally, more companies than ever are failing due to Environmental, Social and Governance (ESG) factors. Corporate governance and risk management are intertwined in nature and have been influenced for decades by internal and external factors. Three schools of thought have shaped risk governance: agency theory, contingency theory, and institutional theory. Agency theory argues that agents' interests conflict with those of principals and that information asymmetry creates problems. Contingency theory suggests that the adoption of risk management is influenced by internal and external factors, while institutional theory suggests that organizations legitimize risk management with regulators, competitors, and professional bodies. The conflicting objectives of these theories have created problems for executives in the adoption of risk governance. Many studies have discussed risk culture and the role of actors in risk governance, but few have examined the role of risk culture in the adoption of risk governance from a leadership style perspective. This study explores the adoption of risk governance in two contrasting industries, insurance and energy, to understand whether risk governance is influenced by internal and external factors or whether risk culture is shaped by leaders. We draw empirical evidence by comparing the cases of an Indian insurance company and a renewable energy-based firm in India. We interviewed more than 20 senior executives of the companies and collected annual reports, risk management policies, and more than 10 presentations and other reports from 2017 to 2024, visiting the companies several times for follow-up questions. The findings reveal that both companies used risk governance for the strategic renewal of the company. The insurance company used a transactional leadership style based on performance and reward to improve risk management, while the energy company relied rather on symbolic management to make debt restructuring meaningful for stakeholders. Overall, both companies turned from loss-making to profitable in a few years. This comparative study highlights the role of different leadership styles in the adoption of risk governance. The study is also distinct in that previous research has rarely examined risk governance in two contrasting industries with reference to leadership styles.

Keywords: leadership style, corporate governance, risk management, risk culture, strategic renewal

Procedia PDF Downloads 40
1293 The Neuropsychology of Obsessive Compulsion Disorder

Authors: Mia Bahar, Özlem Bozkurt

Abstract:

Obsessive-compulsive disorder (OCD) is a common, persistent, and long-lasting mental health condition in which a person experiences uncontrollable, recurrent thoughts ("obsessions") and/or activities ("compulsions") that they feel compelled to engage in repeatedly. Obsessive-compulsive disorder is both underdiagnosed and undertreated. It frequently manifests in a variety of medical settings and is persistent, expensive, and burdensome. Obsessive-compulsive neurosis was long believed to be a condition that offered valuable insight into the inner workings of the unconscious mind. Obsessive-compulsive disorder is now recognized as a prime example of a neuropsychiatric condition mediated by pathology in particular neural circuits and susceptible to particular pharmacotherapeutic and psychotherapeutic treatments. OCD usually has two components, one cognitive and the other behavioral, although either can occur alone. Obsessions are repetitive and intrusive thoughts that invade consciousness and are extremely hard to control or dismiss. People who have OCD often engage in rituals to reduce the anxiety associated with intrusive thoughts. Once the ritual is formed, the person may feel extreme relief and be free from anxiety until the thoughts of contamination intrude once again. This pattern is strengthened through negative reinforcement, because performing the ritual allows the person to avoid anxiety. Such thoughts are described as autogenous, meaning they seem to come from nowhere. These unwelcome thoughts are linked to actions through what can be described as thought-action fusion: the thought becomes equated with an action, such as the belief that if the ritual is not performed, something bad might happen, so the person performs the ritual to escape the intrusive thought. In almost all cases of OCD, the person's life is severely disrupted by compulsions and obsessions. Studies estimate a prevalence of 1.1%, making OCD a challenging issue with high comorbidity with depressive episodes, panic disorder, and specific phobias. Numerous CT investigations were the first to reveal brain anomalies in OCD, although the results were inconsistent. A few studies have focused on the orbitofrontal cortex (OFC), anterior cingulate gyrus (AC), and thalamus, structures also implicated in the pathophysiology of OCD by functional neuroimaging studies, but few have found consistent results. However, some studies have found abnormalities in the basal ganglia. There has also been discussion of whether OCD might be genetic. OCD has been linked to families in studies of familial aggregation, and findings from twin studies show that this relationship is partly influenced by genetic variables. Some research has shown that OCD is a heritable, polygenic condition that can result from de novo harmful mutations as well as from common and rare variants. Numerous studies have also presented solid evidence in favor of a significant additive genetic component to OCD risk, with distinct OCD symptom dimensions showing both shared and specific genetic risks.

Keywords: compulsions, obsessions, neuropsychiatric, genetic

Procedia PDF Downloads 63
1292 Study of the Impact of Quality Management System on Chinese Baby Dairy Product Industries

Authors: Qingxin Chen, Liben Jiang, Andrew Smith, Karim Hadjri

Abstract:

Since 2007, the Chinese food industry has experienced serious food contamination incidents in the baby dairy industry, especially milk powder contamination. One milk powder product was found to contain melamine, and a significant number (294,000) of babies were affected by kidney stones. Due to growing consumer concerns about food safety and protection, and high pressure from the central government, companies must take radical action to ensure food quality protection through the use of an appropriate quality management system. Although researchers have previously investigated the health and safety aspects of food industries and products, quality issues concerning food products in China have been largely overlooked, and issues associated with baby dairy products and their quality have not been discussed in depth. This paper investigates the impact of quality management systems on the Chinese baby dairy product industry. A literature review was carried out to analyse the use of quality management systems within the Chinese milk powder market. Moreover, quality concepts, relevant standards, laws, regulations, and specific issues (such as melamine and aflatoxin M1 contamination) have been analysed in detail. A qualitative research approach was employed: preliminary analysis was conducted by interview, and data analysis was based on interview responses from four selected Chinese baby dairy product companies. The literature review and data analysis reveal that many theories, models, concepts, and systems for quality management have been designed by practitioners. These standards and procedures should be followed in order to provide quality products to consumers, but their implementation is lacking in the Chinese baby dairy industry. Quality management systems have been applied by the selected companies, but their implementation still needs improvement. For instance, the companies have to take measures to improve their processes and procedures in line with the relevant standards, and the government needs to intervene more and take a greater supervisory role in the production process. In general, this research presents implications for the regulatory bodies, the Chinese government, and dairy food companies. Food safety laws exist in China, but they have not been widely observed by companies. Regulatory bodies must take a greater role in ensuring compliance with laws and regulations, and the Chinese government must also play a special role in urging companies to implement relevant quality control processes. The baby dairy companies not only have to accept interventions from the regulatory bodies and government, but also need to ensure that production, storage, distribution, and other processes follow the relevant rules and standards.

Keywords: baby dairy product, food quality, milk powder contamination, quality management system

Procedia PDF Downloads 469
1291 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading

Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger

Abstract:

Future power plants currently face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of the paper is, therefore, to experimentally investigate the creep and thermo-mechanical behavior of improved materials and their welded joints at component scale under near-to-service operating conditions, which are promising for application in highly efficient and flexible future power plants. These materials promise an increase in flexibility and a reduction in manufacturing costs by providing enhanced creep strength and, therefore, the possibility of wall thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints between conventional materials (the ferritic and martensitic steels T24 and T92) and nickel-based alloys (A617B and HR6W) by means of membrane test panels. The temperature and external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, the investigation focuses on the creep behavior under multiaxial stress loading of similar and dissimilar welded joints of high-temperature-resistant nickel-based alloys (A740H, A617B, and HR6W) by means of a thick-walled component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used to estimate the axial component load in order to induce a meaningful damage evolution without causing total component failure. Metallographic investigations after testing will support the understanding of the damage mechanisms and the influence of the thermo-mechanical load and multiaxiality on the microstructural changes and on the creep and TMF strength.
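
For orientation, creep loading of this kind is often described with a Norton power law for the secondary creep rate; the sketch below evaluates such a law under constant stress. The coefficient, stress exponent, stress level, and duration are placeholder values chosen only for illustration and are not material data for the alloys or component tests described here.

```python
import numpy as np

# Norton power-law creep, eps_dot = A * sigma**n, integrated over time at constant
# stress. All constants are hypothetical placeholders, not measured alloy data.
A = 1.0e-16        # Norton coefficient [1/(h * MPa^n)] (assumed)
n = 5.0            # stress exponent (assumed)
sigma = 120.0      # constant axial stress [MPa] (assumed)
hours = np.linspace(0.0, 10_000.0, 101)

eps_dot = A * sigma ** n          # secondary (steady-state) creep rate
eps = eps_dot * hours             # accumulated creep strain over time

print(f"steady-state creep rate: {eps_dot:.2e} 1/h")
print(f"creep strain after {hours[-1]:.0f} h: {eps[-1]:.3%}")
```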

Keywords: creep, creep-fatigue, component behaviour, welded joints, high temperature material behaviour, nickel alloys, high temperature resistant steels

Procedia PDF Downloads 114
1290 Diversified Farming and Agronomic Interventions Improve Soil Productivity, Soybean Yield and Biomass under Soil Acidity Stress

Authors: Imran, Murad Ali Rahat

Abstract:

One of the factors limiting crop production and nutrient availability is acidic stress. The most important constraint under acidic stress conditions is phosphorus deficiency, which results in stunted growth and reduced yield because of inefficient nutrient cycling. At the Agriculture Research Institute Mingora Swat, Pakistan, tests were carried out for the first time over two consecutive summer seasons, 2016 (year 1) and 2017 (year 2), with the goal of increasing crop productivity and nutrient availability under acidic stress. The experimental treatments comprised three organic sources (peach nano-black carbon, compost, and dry-based peach wastes), three phosphorus rates, and two beneficial microorganisms (Trichoderma and phosphorus solubilizing bacteria, PSB). The findings showed that, under acid stress, the peach organic sources had a significant impact on yield and yield components. Among the organic sources, the application of nano-black carbon produced the greatest thousand seed weight (164.6 g), while among the beneficial microbes, seed inoculation with PSB increased the thousand seed weight compared to Trichoderma soil application. The thousand seed weight was also significantly affected by the phosphorus rates: the treatment of 100 kg P ha-1 produced the highest thousand seed weight (167.3 g), followed by 75 kg P ha-1 (162.5 g). Compost amendments provided the highest seed yield (2,140 kg ha-1), comparable to the application of nano-black carbon (2,120 kg ha-1), while the lowest seed yield (1,808 kg ha-1) was observed with peach residues. Soil treatment with Trichoderma resulted in a higher seed yield (2,132 kg ha-1) than seed inoculation with PSB (1,913 kg ha-1). Applying phosphorus greatly increased soybean yield: the highest seed yield (2,364 kg ha-1) was obtained with 100 kg P ha-1, comparable to 75 kg P ha-1 (2,335 kg ha-1), while the lowest seed yield (1,569 kg ha-1) was obtained with 50 kg P ha-1. The average values showed that, compared to control plots (3.3 g kg-1), the peach organic sources produced the greatest soil organic carbon (SOC, 10.0 g kg-1). Treated plots had a maximum soil P of 19.7 mg kg-1, while plots under stress had a maximum soil P of 4.8 mg kg-1. Peach nano-black carbon yielded the highest soil P (21.6 mg kg-1), whereas peach compost resulted in the lowest. Among the beneficial microbes, PSB also improved soil P (21.1 mg kg-1) compared to Trichoderma (18.3 mg kg-1). Regarding the P treatments, the application of 100 kg P ha-1 produced significantly higher soil P values (26.8 mg kg-1), followed by 75 kg P ha-1 (18.3 mg kg-1), while 50 kg P ha-1 produced the lowest soil P values (14.1 mg kg-1). SOC was higher with peach nano-black carbon (13.7 g kg-1) than with peach wastes and compost, and soil-treated Trichoderma showed a greater SOC (11.1 g kg-1) than PSB (8.8 g kg-1). SOC was also higher at the higher P levels.

Keywords: acidic stress, Trichoderma, beneficial microbes, nano-black carbon, compost, peach residues, phosphorus, soybean

Procedia PDF Downloads 70
1289 Effects of Narghile Smoking in Tongue, Trachea and Lung

Authors: Sarah F. M. Pilati, Carolina S. Flausino, Guilherme F. Hoffmeister, Davi R. Tames, Telmo J. Mezadri

Abstract:

The effects of narghile smoking on the tissues of the oral cavity, trachea, and lung, and the associated inflammation, have lately been questioned. The objective of this study was to identify histopathological changes and the presence of inflammation by exposing mice to narghile smoke in a whole-body exposure study. The animals were divided into 4 groups of 5 animals each: one control group and groups exposed for 7, 15, and 30 days. The animals were exposed to conventional hookah smoke using Mizo brand unwashed tobacco (0.5%) and EcOco brand coconut fiber measuring 2 cm × 2 cm. Sessions lasted 30 minutes per day for 7, 15, or 30 days. The animals received 35 ml of tobacco smoke during two seconds of each minute, while during the remaining 58 seconds they breathed pure air. Afterwards, the mice were sacrificed and submitted to histological evaluation of tissue sections. In the tongue of the 7-day group, epithelial areas with acanthosis, hyperkeratosis, and epithelial projections were found, and more intense inflammation was observed in the deeper layers. All alteration processes increased significantly as the days of exposure increased. In the trachea of the 7-day group, there was a reduction in the thickness of the pseudostratified epithelium and a slight decrease in cilia, giving rise to a metaplasia process that was established by the 31-day sampling, when the epithelium became stratified. In the connective tissue, defense cells and newly formed vessels were observed, evidencing a chronic inflammatory process, which decreased over the course of the samples due to the deposition of collagen fibers, as seen in the 15- and 31-day groups. Among the lung structures, the study focused on the bronchioles and alveoli. From the 7-day group onwards, the intra-alveolar septum thickness increased, the alveolar space decreased, and an inflammatory infiltrate with mononuclear and defense cells and new vessel formation was observed, increasing the number of red blood cells in the region. The results showed a progressive increase in the signs of alteration as exposure continued, indicating that narghile smoking stimulates alterations mainly in the alveoli (where gas exchange occurs and which should not present alterations), drawing attention to the harmful and aggressive effect of narghile smoking. These data also highlight the harmful effect of smoking, since the presence of acanthosis, hyperkeratosis, epithelial projections, and inflammation evidences a cellular alteration process aimed at protecting the tongue tissue. Narghile smoking also stimulates both epithelial and inflammatory changes in the trachea, in addition to a process of metaplasia, which reinforces the harmful effect and the carcinogenic potential of narghile smoking.

Keywords: metaplasia, inflammation, pathological constriction, hyperkeratosis

Procedia PDF Downloads 171
1288 Extent of Fruit and Vegetable Waste at Wholesaler Stage of the Food Supply Chain in Western Australia

Authors: P. Ghosh, S. B. Sharma

Abstract:

The growing problem of food waste is causing unacceptable economic, environmental, and social impacts across the globe. In Australia, food waste is estimated at about AU$8 billion per year; however, information on the extent of wastage at different stages of the food value chain from farm to fork is very limited. This study aims to identify the causes for and extent of food waste at the wholesaler stage of the food value chain in the state of Western Australia. It also explores the approaches applied by wholesalers to reduce and utilize food waste. The study was carried out at the Perth city market in Canning Vale, the main wholesale distribution centre for fruits and vegetables in Western Australia. A survey questionnaire was prepared and shared with 51 wholesalers, who responded to 10 targeted questions on the quantity of produce (fruits and vegetables) received and supplied onwards, the reasons for waste generation, and the innovations applied or being considered to reduce and utilize food waste. Data were analysed using the Statistical Package for the Social Sciences (SPSS version 21). Among the wholesalers, 52% were primary wholesalers (buying produce directly from growers) and 48% were secondary wholesalers (buying produce in bulk from major wholesalers and supplying the local retail market, caterers, and customers with specific requirements). Average fruit and vegetable waste was 180 kg per week per primary wholesaler and 30 kg per week per secondary wholesaler. Based on this survey, the fruit and vegetable waste at the wholesaler stage was estimated at about 286 tonnes per year. The secondary wholesalers distributed pre-ordered commodities, which minimized the potential for waste. A non-parametric test (Mann-Whitney) was carried out to assess the contributions of wholesalers to waste generation; over 56% of secondary wholesalers generally had nothing to bin as waste. Pearson's correlation coefficient analysis showed a positive correlation (r = 0.425; P = 0.01) between the quantity of produce received and the waste generated. Low market demand was the predominant reason identified by the wholesalers for waste generation. About a third of the wholesalers suggested that high cosmetic standards for fruits and vegetables (appearance, shape, and size) should be relaxed to reduce waste. Donation of unutilized fruits and vegetables to charity was overwhelmingly (95%) considered one of the best options for utilizing discarded produce. The extent of waste at other stages of the fruit and vegetable supply chain is currently being studied.
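
For readers who prefer an open-source route to the same statistics, the sketch below shows how a Mann-Whitney U test and a Pearson correlation of the kind reported above could be computed with SciPy. The wholesaler numbers are synthetic placeholders; the study's actual questionnaire data and SPSS output are not reproduced here.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the questionnaire data (51 wholesalers); values are invented.
rng = np.random.default_rng(3)
received = rng.uniform(1_000.0, 20_000.0, size=51)        # kg of produce received per week
waste = 0.01 * received + rng.normal(0.0, 40.0, size=51)  # kg of waste generated per week
is_primary = np.r_[np.ones(27, bool), np.zeros(24, bool)] # ~52% primary, 48% secondary

# Mann-Whitney U: do primary and secondary wholesalers differ in waste generated?
u_stat, p_mw = stats.mannwhitneyu(waste[is_primary], waste[~is_primary],
                                  alternative="two-sided")

# Pearson correlation: quantity received vs. waste generated.
r, p_r = stats.pearsonr(received, waste)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.3f}")
print(f"Pearson r = {r:.3f}, p = {p_r:.3g}")
```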

Keywords: food waste, fruits and vegetables, supply chain, waste generation

Procedia PDF Downloads 308
1287 Multi-Size Continuous Particle Separation on a Dielectrophoresis-Based Microfluidics Chip

Authors: Arash Dalili, Hamed Tahmouressi, Mina Hoorfar

Abstract:

Advances in lab-on-a-chip (LOC) devices have enabled significant progress in the manipulation, separation, and isolation of particles and cells. Among the different active and passive particle manipulation methods, dielectrophoresis (DEP) has proven to be a versatile mechanism as it is label-free, cost-effective, simple to operate, and highly efficient, and it has been applied to a wide range of biological and environmental applications. A popular form of DEP device performs continuous manipulation of particles using co-planar slanted electrodes and a sheath flow that focuses the particles onto one side of the microchannel. When particles enter the DEP manipulation zone, the negative DEP (nDEP) force generated by the slanted electrodes deflects the particles laterally towards the opposite side of the microchannel. The lateral displacement of the particles depends on multiple parameters, including the geometry of the electrodes, the width, length, and height of the microchannel, the size of the particles, and the throughput. In this study, COMSOL Multiphysics® modeling along with experimental studies is used to investigate the effect of the aforementioned parameters. The electric field between the electrodes and the induced DEP force on the particles are modelled in COMSOL Multiphysics®. The simulation model is used to show the effect of the DEP force on the particles and how the geometry of the electrodes (the width of the electrodes and the gap between them) plays a role in the manipulation of polystyrene microparticles. The simulation results show that increasing the electrode width up to a certain limit, which depends on the height of the channel, increases the induced DEP force; decreasing the gap between the electrodes also leads to a stronger DEP force. Based on these results, criteria for the fabrication of the electrodes were established, and soft lithography was used to fabricate interdigitated slanted electrodes and microchannels. Experimental studies were run to find the effect of the flow rate, the geometrical parameters of the microchannel (length, width, and height), and the electrode angle on the displacement of 5 µm, 10 µm, and 15 µm polystyrene particles. An empirical equation is developed to predict the displacement of the particles under different conditions. It is shown that particle displacement is greater for longer and shallower channels, lower flow rates, and larger particles, whereas the effect of the electrode angle on the displacement was negligible. Based on the results, we have developed an optimum design (in terms of efficiency and throughput) for three-size separation of particles.
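
For context, the magnitude of the time-averaged DEP force on a spherical particle is commonly estimated as F = 2*pi*eps_m*r^3*Re[K(w)]*grad(|E_rms|^2), with K the Clausius-Mossotti factor. The sketch below evaluates this expression for the three particle sizes tested; the medium and particle conductivities, the applied frequency, and the field-gradient magnitude are assumed placeholder values, not outputs of the study's COMSOL model.

```python
import numpy as np

# Time-averaged DEP force on a spherical particle:
#   F = 2*pi*eps_m * r^3 * Re[K(w)] * grad(|E_rms|^2),
# with K the Clausius-Mossotti factor of the complex permittivities eps* = eps - j*sigma/w.
eps0 = 8.854e-12
eps_m = 78.0 * eps0        # aqueous medium permittivity
eps_p = 2.55 * eps0        # polystyrene permittivity
sigma_m = 1.0e-2           # medium conductivity [S/m] (assumed)
sigma_p = 1.0e-4           # particle conductivity [S/m] (assumed)
w = 2.0 * np.pi * 1.0e6    # applied angular frequency, 1 MHz (assumed)
grad_E2 = 1.0e13           # gradient of |E_rms|^2 [V^2/m^3] (placeholder magnitude)

def cm_factor(eps_p, sig_p, eps_m, sig_m, w):
    ep = eps_p - 1j * sig_p / w
    em = eps_m - 1j * sig_m / w
    return (ep - em) / (ep + 2.0 * em)

# With these assumed parameters Re[K] < 0, i.e. nDEP: particles are pushed away
# from the high-field regions near the electrodes.
K = cm_factor(eps_p, sigma_p, eps_m, sigma_m, w)
for d_um in (5.0, 10.0, 15.0):            # particle diameters tested in the study
    r = 0.5 * d_um * 1e-6                 # radius in metres
    F = 2.0 * np.pi * eps_m * r ** 3 * K.real * grad_E2
    print(f"d = {d_um:4.1f} um: Re[K] = {K.real:+.2f}, F_DEP = {F:+.2e} N")
```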

Keywords: COMSOL Multiphysics, dielectrophoresis, microfluidics, particle separation

Procedia PDF Downloads 179
1286 Integration of the Electro-Activation Technology for Soy Meal Valorization

Authors: Natela Gerliani, Mohammed Aider

Abstract:

Nowadays, interest in using sustainable technologies for protein extraction from underutilized oilseeds is growing. Currently, by-products of plant food processing, such as soybean meal, pose a major disposal problem for the oil industry. That is why the valorization of soybean meal is important for the oil industry, since it contains high-quality proteins and other valuable components. Generally, soybean meal is used in livestock and poultry feed but is rarely used in human food, even though its chemical composition can compensate for nutritional deficiencies and be used to balance protein in human food. Regarding the efficiency of soybean meal valorization, extraction is a key process for obtaining an enriched protein ingredient that can be incorporated into the food matrix. However, the extraction of food components such as proteins from oilseed by-products usually implies the use of organic and inorganic chemicals (e.g., acids, bases, TCA-acetone) with a significant environmental impact. In a context of sustainable production, the use of an electro-activation technology seems to be a good alternative. Indeed, the electro-activation technology requires only water, food grade salt, and electricity as main materials. Moreover, this innovative technology avoids the need for special equipment, worker safety training, and the transport and storage of hazardous materials. Electro-activation is a technology based on applied electrochemistry for the generation of acidic and alkaline solutions through the oxidation-reduction reactions that occur in the vicinity of the electrode/solution interfaces. It is an eco-friendly process that can be used to replace conventional acidic and alkaline extraction. In this research, electro-activation for protein extraction from soybean meal was carried out in an electro-activation reactor. This reactor consists of three compartments separated by cation and anion exchange membranes, which allow the creation of non-contacting acidic and basic solutions. Different current intensities (150 mA, 300 mA, and 450 mA) and treatment durations (10 min, 30 min, and 50 min) were tested. The results showed that the extracts obtained by the electro-activation method have good quality in comparison to conventional extracts. For instance, extractability obtained with the electro-activation method was 55%, whereas with the conventional method it was only 36%. Moreover, a maximum protein content of 48% in the extract was obtained with the electro-activation technology, compared to a maximum of 41% obtained by conventional extraction. Hence, the environmentally sustainable electro-activation technology seems to be a promising alternative that can replace conventional extraction technology.
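
Since the abstract reports the tested current intensities and treatment durations, a simple quantity that can be derived from them is the total electric charge passed per condition, Q = I x t, as sketched below; pairing every current with every duration is an assumption about the experimental design made for illustration only.

```python
# Total electric charge passed per condition, Q = I * t, for the current intensities
# and treatment durations stated in the abstract (full cross assumed for illustration).
currents_mA = (150, 300, 450)
durations_min = (10, 30, 50)

for i_mA in currents_mA:
    for t_min in durations_min:
        q_coulombs = (i_mA / 1000.0) * (t_min * 60.0)   # amperes x seconds
        print(f"I = {i_mA} mA, t = {t_min} min -> Q = {q_coulombs:.0f} C")
```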

Keywords: by-products, eco-friendly technology, electro-activation, soybean meal

Procedia PDF Downloads 223