Search results for: generating sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2190

90 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations

Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos

Abstract:

Accurate information about the terrain is extremely important in disaster management or conflict activities. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, AZs are zones where troops or supplies are dropped by parachute, and HLZs are areas where victims can be rescued. Digital image processing enables the automatic generation of an orthorectified mosaic and a true Digital Surface Model (DSM). This methodology provides the information fundamental to understanding the post-disaster terrain in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during roughly one hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images; the use of multispectral images and complementary attributes generated independently from them increases classification accuracy. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). Four distinct classes were considered for the classification: HLZ 1 – small (18 m x 18 m); HLZ 2 – medium (23 m x 23 m); HLZ 3 – large (28 m x 28 m); and AZ (100 m x 100 m). The Random Forest (RF) decision tree method was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees, each trained on a different random set of samples. The result of classification from each tree for each object is called a class vote, and the final classification is decided by a majority of class votes.
In this case, we used 200 trees for the execution of RF in the software WEKA 3.8; the classification result was visualized in QGIS Desktop 2.12.3. Through this methodology, it was possible to classify in the study area 6 areas as HLZ 1, 6 as HLZ 2, 4 as HLZ 3, and 2 as AZ. It should be noted that an area classified as AZ also satisfies the requirements of the other classes and may be used as an AZ or as a large (HLZ 3), medium (HLZ 2), or small (HLZ 1) helicopter landing zone. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller-area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations, since they can provide high-accuracy data at low cost and low risk, with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain to serve as an important decision-support tool.
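The majority-vote classification described above can be sketched in a few lines. This is an illustrative stand-in only: scikit-learn replaces WEKA 3.8, and the features (declivity, two PCA components) and labels are synthetic placeholders, not the study's image attributes.

```python
# Illustrative sketch: scikit-learn stands in for WEKA 3.8; features
# and labels are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 3))           # per-object attributes (synthetic)
y = rng.integers(0, 4, size=500)   # 0..3 stand for HLZ 1, HLZ 2, HLZ 3, AZ

# 200 de-correlated trees, each fit on a different bootstrap sample;
# the forest aggregates the per-tree "class votes" into one prediction
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)
pred = rf.predict(X[:5])           # majority-vote class for 5 objects
```

Each of the 200 trees casts one vote per object; `predict` returns the winning class.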

Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest

Procedia PDF Downloads 164
89 Family Firm Internationalization: Identification of Alternative Success Pathways

Authors: Sascha Kraus, Wolfgang Hora, Philipp Stieg, Thomas Niemand, Ferdinand Thies, Matthias Filser

Abstract:

In most countries, small and medium-sized enterprises (SMEs) are the backbone of the economy due to their impact on job creation, innovation, and wealth creation. Moreover, ongoing globalization makes it inevitable, even for SMEs that traditionally focused on their domestic markets, to internationalize their business activities in order to realize further growth and survive in international markets. Thus, internationalization has become one of the most common growth strategies for SMEs and has received increasing scholarly attention over the last two decades. On the downside, internationalization can also be regarded as the most complex strategy that a firm can undertake. Family firms in particular, often characterized by limited financial capital, a risk-averse nature, and limited growth aspirations, are likely to face greater challenges on the pathway to internationalization. Especially the triangulation of family, ownership, and management (so-called 'familiness') manifests in a unique behavior and decision-making process, often characterized by the importance given to noneconomic goals, that distinguishes a family firm from other businesses. Taking this into account, the concept of socio-emotional wealth (SEW) has evolved to describe the behavior of family firms. In order to investigate how different internal and external firm characteristics shape the internationalization success of family firms, we drew on a sample of 297 small and medium-sized family firms from Germany, Austria, Switzerland, and Liechtenstein. We include SEW as an essential family firm characteristic, add entrepreneurial orientation (EO) and absorptive capacity (AC) as the two major intra-organizational characteristics, and add collaboration intensity (CI) and relational knowledge (RK) as the two major external network characteristics.
Based on previous research, we assume that these characteristics are important for explaining the internationalization success of family firm SMEs. For the data analysis, we applied fuzzy-set Qualitative Comparative Analysis (fsQCA), an approach that identifies configurations of firm characteristics and is specifically suited to studying complex causal relationships where traditional regression techniques reach their limits. Results indicate that several combinations of these family firm characteristics can lead to international success, with no single characteristic permanently required. Instead, there are many roads family firms can walk down to achieve internationalization success. Consequently, our data indicate that family-owned SMEs are heterogeneous and that internationalization is a complex and dynamic process. Results further show that network-related characteristics occur in all configurations and thus represent an essential element in the internationalization process of family-owned SMEs. The contribution of our study is twofold, as we investigate different forms of international expansion for family firms and how to improve them. First, we broaden the understanding of the intersection between family firm and SME internationalization with respect to major intra-organizational and network-related variables. Second, from a practical perspective, we offer family firm owners a basis for building the internal capabilities needed to achieve international success.
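As a sketch of what fsQCA computes, the standard set-theoretic consistency and coverage measures (Ragin's formulas) can be written directly. The membership scores below are invented for illustration, not values from the 297-firm sample, and the condition label is a hypothetical example.

```python
# Ragin's standard fsQCA measures; the data are invented placeholders.
def consistency(X, Y):
    """Degree to which condition set X is a subset of outcome Y."""
    return sum(min(x, y) for x, y in zip(X, Y)) / sum(X)

def coverage(X, Y):
    """Share of outcome Y accounted for by condition X."""
    return sum(min(x, y) for x, y in zip(X, Y)) / sum(Y)

# Hypothetical per-firm fuzzy membership in a configuration such as
# "high EO AND high relational knowledge" (X) and in
# "internationalization success" (Y)
X = [0.8, 0.6, 0.9, 0.2, 0.4]
Y = [0.9, 0.7, 0.8, 0.3, 0.6]
print(round(consistency(X, Y), 3))  # 0.966: X is (near) sufficient for Y
```

A consistency close to 1 marks a configuration as a candidate "pathway" to the outcome; coverage then indicates how common that pathway is.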

Keywords: entrepreneurial orientation, family firm, fsQCA, internationalization, socio-emotional wealth

Procedia PDF Downloads 228
88 Evaluation of Nanoparticle Application to Control Formation Damage in Porous Media: Laboratory and Mathematical Modelling

Authors: Gabriel Malgaresi, Sara Borazjani, Hadi Madani, Pavel Bedrikovetsky

Abstract:

Suspension-colloidal flow in porous media occurs in numerous engineering fields, such as industrial water treatment, the disposal of industrial wastes into aquifers with the propagation of contaminants, and low-salinity water injection into petroleum reservoirs. The main effects are particle mobilization and capture by the porous rock, which can cause pore plugging and permeability reduction, known as formation damage. Various factors, such as fluid salinity, pH, temperature, and rock properties, affect particle detachment. Formation damage is particularly unfavorable near injection and production wells. One way to control formation damage is pre-treatment of the rock with nanoparticles: adsorption of nanoparticles on fines and rock surfaces alters the zeta potential of the surfaces and enhances the attachment force between the rock and fine particles. The main objective of this study is to develop a two-stage mathematical model for (1) flow and adsorption of nanoparticles on the rock during the pre-treatment stage and (2) fines migration and permeability reduction during water production after the pre-treatment. The model accounts for adsorption and desorption of nanoparticles, fines migration, and the kinetics of particle capture. The system of equations admits an exact solution; the non-self-similar wave-interaction problem was solved by the Method of Characteristics. The analytical model is new in two ways: first, it accounts for the specific boundary and initial conditions describing nanoparticle injection and production from the pre-treated porous media; second, it contains the effect of nanoparticle sorption hysteresis. The derived analytical model contains explicit formulae for the concentration fronts along with the pressure drop. The solution is used to determine the optimal injection concentration of nanoparticles to avoid formation damage. The mathematical model was validated via an innovative laboratory program.
The laboratory study includes two sets of core-flood experiments: (1) production of water without nanoparticle pre-treatment and (2) pre-treatment of a similar core with nanoparticles followed by water production. Positively charged alumina nanoparticles with an average particle size of 100 nm were used for the rock pre-treatment. The core was saturated with the nanoparticles and then flushed with low-salinity water; the pressure drop across the core and the outlet fines concentration were monitored and used for model validation. The analytical modeling showed a significant reduction in the outlet fines concentration and in formation damage, in good agreement with the core-flood data. The exact solution accurately describes fine-particle breakthrough and quantifies the positive effect of nanoparticles on formation damage. We show that the adsorbed nanoparticle concentration strongly affects the permeability of the porous media: for the laboratory case presented, the reduction of permeability after 1 PVI of production in the pre-treated scenario is 50% lower than in the reference case. The main outcome of this study is a validated mathematical model to evaluate the effect of nanoparticles on formation damage.

Keywords: nanoparticles, formation damage, permeability, fines migration

Procedia PDF Downloads 606
87 Blood Lipid Management: Combined Treatment with Hydrotherapy and Ozone Bubbles Bursting in Water

Authors: M. M. Wickramasinghe

Abstract:

Cholesterol and triglycerides are lipids essential mainly for maintaining the cellular structure of the human body. Cholesterol is also important for hormone production, vitamin D production, proper digestive function, and strengthening the immune system. Excess fat in the blood circulation, known as hyperlipidemia, becomes harmful, leading to arterial clogging and causing atherosclerosis. The aim of this research is to develop a treatment protocol to efficiently break down and maintain circulatory lipids by improving blood circulation, without strenuous physical exercise, while immersed in a tub of water. To achieve a strong exercise effect, this method generates powerful ozone bubbles that spin, collide, and burst in the water; the powerful emission of air into water transfers energy to the water molecules. The method involves a water- and air-based impact generated by pumping ozone at 46 L/s with a concentration of 0.03-0.05 ppt, in accordance with the safety standards of the Federal Institute for Drugs and Medical Devices (BfArM), Germany. The direct impact of the ozone bubbles on the muscular system and skin is the main target and is capable of increasing the heart rate while immersed in water. A total duration of 20 minutes is adequate to exert a strong exercise effect, improve blood circulation, and stimulate the nervous and endocrine systems. Unstable ozone breaks down into oxygen released at the surface of the water, giving additional benefits and supplying high-quality air rich in the oxygen required to maintain efficient metabolic function. A breathing technique was introduced to improve the efficiency of lung function and aid the air-exchange mechanism. The temperature of the water is maintained at 39 °C to 40 °C to support arterial dilation and enzyme function and to efficiently improve blood circulation to the vital organs.
The buoyancy of water and natural hydrostatic pressure relieve the load of the body weight and relax the mind and body. Sufficient hydration (3 L of water per day) is essential to transport nutrients and remove waste byproducts processed through the liver, kidneys, and skin. Proper nutritional intake is an added advantage that optimizes the efficiency of this method and aids a fast recovery. Within 20-30 days of daily treatment, reductions in triglycerides, low-density lipoproteins (LDL), and total cholesterol were observed in patients with abnormal lipid profiles; borderline patients were cleared within 10-15 days of treatment. This is a highly efficient system that provides many benefits and achieves a successful reduction of triglycerides, LDL, and total cholesterol within a short period of time. Supported by proper hydration and nutritional balance, this system of natural treatment maintains healthy blood lipid levels and reduces the risk of cerebral stroke, high blood pressure, and heart attack.

Keywords: atherosclerosis, cholesterol, hydrotherapy, hyperlipidemia, lipid management, ozone therapy, triglycerides

Procedia PDF Downloads 81
86 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scales requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high-spatial-resolution (5" ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe.
To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch's defensible space maps was combined with Black Swan's patented approach, which uses 39 other risk characteristics, into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
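The abstract does not name the spectral vegetation index FireWatch computes, but NDVI from red and near-infrared bands is the typical choice for multispectral fuel mapping. The sketch below, with invented reflectance values and an assumed threshold, shows the kind of per-pixel map such a system produces.

```python
# Hedged sketch: NDVI as an example vegetation index; reflectances and
# the 0.5 threshold are invented, not FireWatch's proprietary values.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards against 0/0

# Invented reflectances: column 0 = green vegetation, column 1 = dry/bare cover
red = np.array([[0.10, 0.30],
                [0.05, 0.25]])
nir = np.array([[0.50, 0.35],
                [0.45, 0.28]])
v = ndvi(nir, red)
dense_vegetation = v > 0.5   # candidate flammable-fuel pixels near structures
```

Healthy, dense vegetation reflects strongly in NIR and weakly in red, pushing NDVI toward 1, which is why index maps like this can flag fuels in defensible-space zones.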

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 97
85 Intensification of Wet Air Oxidation of Landfill Leachate Reverse Osmosis Concentrates

Authors: Emilie Gout, Mathias Monnot, Olivier Boutin, Pierre Vanloot, Philippe Moulin

Abstract:

Water is a precious resource, and treating industrial wastewater remains a considerable technical challenge of our century. The effluent considered in this study is landfill leachate treated by reverse osmosis (RO). Nowadays, in most developed countries, sanitary landfilling is the main method of dealing with municipal solid waste. Rainwater percolates through the solid waste, generating leachates composed mostly of organic and inorganic matter. As leachate ages, its composition varies, becoming more and more bio-refractory. RO is already used for landfill leachates, as it generates good-quality permeate. However, its main drawback is the production of highly polluted concentrates that cannot be discharged into the environment or reused, which is an important industrial issue. It is against this background that the study of coupling RO with wet air oxidation (WAO) was undertaken, to intensify and optimize the processes so as to meet current regulations for water discharge into the environment. WAO is widely studied for effluents containing bio-refractory compounds. Oxidation is a destruction reaction capable, when complete, of mineralizing the recalcitrant organic fraction of the pollution into carbon dioxide and water. The WAO process in subcritical conditions requires high energy consumption, but it can be autothermic within a certain range of chemical oxygen demand (COD) concentrations (10-100 g.L⁻¹). Appropriate COD concentrations are reached in landfill leachate RO concentrates. Therefore, the purpose of this work is to report the mineralization performance of WAO on RO concentrates. The coupling of RO and WAO has shown promising results in previous work on both synthetic and real effluents, in terms of total organic carbon (TOC) reduction by WAO and retention by RO. Non-catalytic WAO with air as the oxidizer was performed in a lab-scale stirred autoclave (1 L) on landfill leachate RO concentrates collected in different seasons at a sanitary landfill in southern France.
The yield of WAO depends on operating parameters such as total pressure, temperature, and time; the composition of the effluent is also an important aspect of process intensification. An experimental design methodology was used to minimize the number of experiments while finding the operating conditions achieving the best pollution reduction. The design led to a set of 18 experiments, and the responses used to assess process efficiency are pH, conductivity, turbidity, COD, TOC, and inorganic carbon. A 70% oxygen excess was chosen for all the experiments. The first experiments showed that COD and TOC abatements of at least 70% were obtained after 90 min at 300°C and 20 MPa, attesting to the possibility of treating RO leachate concentrates with WAO. In order to meet French regulations and validate process intensification with industrial effluents, continuous experiments in a bubble column are planned, and further analyses will be performed, such as biological oxygen demand and study of the gas composition. Meanwhile, other industrial effluents are being treated to compare RO-WAO performance. These effluents, coming from the pharmaceutical, petrochemical, and tertiary wastewater industries, present different specific pollutants that will provide a better understanding of the hybrid process and prove the intensification and feasibility of the process at an industrial scale. Acknowledgments: This work has been supported by the French National Research Agency (ANR) for the Project TEMPO under the reference number ANR-19-CE04-0002-01.
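The abatement figures quoted above are fractional reductions between feed and treated effluent. A minimal sketch, using invented concentrations inside the 10-100 g/L autothermic COD window rather than the study's measurements:

```python
# Hypothetical values; only the abatement formula reflects the abstract.
def abatement(c_in, c_out):
    """Fractional reduction of COD (or TOC) across the WAO step."""
    return (c_in - c_out) / c_in

cod_in, cod_out = 45.0, 12.0   # g/L, invented feed and treated concentrations
print(f"COD abatement: {abatement(cod_in, cod_out):.0%}")  # 73%, above the 70% mark
```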

Keywords: hybrid process, landfill leachates, process intensification, reverse osmosis, wet air oxidation

Procedia PDF Downloads 128
84 A Shift in Approach from Cereal Based Diet to Dietary Diversity in India: A Case Study of Aligarh District

Authors: Abha Gupta, Deepak K. Mishra

Abstract:

The food security debate in India has centered on the availability and accessibility of cereals, regarded as the only food group needed to check hunger and improve nutrition. The significance of fruits, vegetables, meat, and other food products has been largely neglected, despite the fact that they provide essential nutrients to the body. There is a need to shift the emphasis from a cereal-based approach to a more diverse diet, so that the aim of achieving food security may change from merely reducing hunger to ensuring overall health. This paper attempts to analyse how far dietary diversity has been achieved across different socio-economic groups in India. For this purpose, the paper sets out to determine (a) the percentage share of different food groups in total food expenditure and consumption by background characteristics, (b) the source of and preference for all food items, and (c) the diversity of diet across socio-economic groups. A cross-sectional survey covering 304 households, selected through proportional stratified random sampling, was conducted in six villages of Aligarh district, Uttar Pradesh, India. Information on the amount of food consumed, the source of consumption, and expenditure on food (74 food items grouped into 10 major food groups) was collected with a recall period of seven days. Per capita per day food consumption/expenditure was calculated by dividing household consumption/expenditure by household size and by seven. A food variety score was estimated by assigning 0 to food groups/items that had not been eaten and 1 to those that had been consumed by the household in the last seven days; summing across all food groups/items gives the food variety score. Diversity of diet was computed using the Herfindahl-Hirschman index. Findings of the paper show that the cereal, milk, and roots and tubers food groups contribute a major share of total consumption/expenditure.
Consumption of these food groups varies across socio-economic groups, whereas fruit, vegetable, meat, and other food consumption remains low and uniform. The dietary diversity estimates show a higher concentration of diet, driven by the higher consumption of cereals, milk, and root and tuber products, and dietary diversity varies slightly across background groups. Muslim, Scheduled Caste, small-farmer, lower-income, food-insecure, below-poverty-line, and labour families show a higher concentration of diet compared to their counterpart groups. These groups also evince a lower mean number of food items consumed in a week, owing to economic constraints and the resulting limited access to more expensive food items. The results advocate a shift from a cereal-based diet to dietary diversity, encompassing not only cereals and milk products but also nutrient-rich food items such as fruits, vegetables, meat, and other products. Integrating a dietary diversity approach into the country's food security programmes would help achieve nutrition security, as hidden hunger is widespread among the Indian population.
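The Herfindahl-Hirschman index used here is the sum of squared expenditure shares across food groups, so a cereal-dominated budget scores higher (more concentrated, hence less diverse). A sketch with invented shares, not the survey's figures:

```python
# Invented expenditure shares across six food groups, for illustration only.
def hhi(values):
    """Herfindahl-Hirschman index: sum of squared shares, from 1/n up to 1."""
    total = sum(values)
    return sum((v / total) ** 2 for v in values)

cereal_heavy = [60, 20, 5, 5, 5, 5]      # expenditure concentrated in cereals/milk
diverse_diet = [20, 20, 15, 15, 15, 15]  # expenditure spread across groups
print(round(hhi(cereal_heavy), 2), round(hhi(diverse_diet), 2))  # 0.41 0.17
```

A higher index marks a more concentrated, cereal-dominated diet, matching the paper's finding for the disadvantaged groups listed above.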

Keywords: dietary diversity, food security, India, socio-economic groups

Procedia PDF Downloads 332
83 Use of Computers and Peripherals in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar Mehrafarin, Reza Mehrafarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1,662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and instruments, such as GPS, drones, magnetometers, equipped cameras, and satellite images, along with software programs such as GIS, Map source, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps preserve ancient sites, record archaeological data accurately, reduce errors and mistakes, and facilitate correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaboration and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology.
Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. These data were then entered into various software programs for analysis, including GIS, Map source, and Excel. The research employed both descriptive and analytical methods to present the findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, the generation of statistics, graphic design, and interdisciplinary collaboration are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as a still-unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.

Keywords: archaeological surveys, computer use, Iran, modern technologies, Sistan

Procedia PDF Downloads 65
82 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain, together with changes in the ST segment and T wave of the ECG, occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting with symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features critical to the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T-derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters via the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters to obtain the optimal classification performance.
Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based solely on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments yields higher performance for GMM-based classification. Moreover, a comparison between the performances of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
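The per-patient SVM tuning step (RBF kernel, grid search, 10-fold cross-validation) can be sketched with scikit-learn. The features below are random placeholders for the ST/T-derived measurements, and the parameter grid is an assumed example, not the study's search space.

```python
# Hedged sketch of per-patient hyperparameter tuning; data and grid
# values are placeholders, not the study's features or search space.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))      # placeholder ST/T-derived features per epoch
y = rng.integers(0, 2, size=100)   # 0 = non-ischemic, 1 = ischemic epoch

# Tune C and gamma for this patient via grid search + 10-fold CV
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]},
    cv=10,
)
grid.fit(X, y)
best = grid.best_params_           # patient-specific kernel parameters
```

Repeating this fit per patient yields the individually tuned classifiers the abstract describes; the held-out test split then estimates each one's detection performance.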

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 150
81 Detection of Antibiotic Resistance Genes and Antibiotic Residues in Plant-based Products

Authors: Morello Sara, Pederiva Sabina, Bianchi Manila, Martucci Francesca, Marchis Daniela, Decastelli Lucia

Abstract:

Vegetables represent an integral part of a healthy diet due to their valuable nutritional properties, and the growth in consumer demand in recent years is particularly remarkable for a diet rich in vitamins and micronutrients. However, plant-based products are involved in several food outbreaks connected to various sources of contamination, and quite often the bacteria responsible show high resistance to antibiotics. The abuse of antibiotics can be one of the main mechanisms responsible for increasing antibiotic resistance (AR). Plants grown for food use can be contaminated directly by spraying antibiotics on crops or indirectly through the use of manure, which may contain both antibiotics and antibiotic resistance genes (ARG). Antibiotic residues could represent a potential human health risk due to exposure through the consumption of plant-based foods, and the presence of antibiotic-resistant bacteria might pose a particular risk to consumers. The present work aims to investigate, through a multidisciplinary approach, the occurrence of ARG by means of a biomolecular method (PCR) and the prevalence of antibiotic residues using a multi-residue LC-MS/MS method, both in different plant-based products. During the period from July 2020 to October 2021, a total of 74 plant samples (33 lettuces and 41 tomatoes) were collected from 57 farms located throughout the Piedmont area, and 18 out of 74 samples (11 lettuces and 7 tomatoes) were selected for LC-MS/MS analysis. DNA extracted (ExtractME, Blirt, Poland) from plants used on crops and from isolated bacteria was analyzed with 6 sets of end-point multiplex PCR (Qiagen, Germany) to detect the presence of resistance genes of the main antibiotic families, such as tet genes (tetracyclines), bla (β-lactams) and mcr (colistin).
Simultaneous detection of 43 antibiotic molecules belonging to 10 different classes (tetracyclines, sulphonamides, quinolones, penicillins, amphenicols, macrolides, pleuromutilins, lincosamides, diaminopyrimidines) was performed using an Exion LC system (AB SCIEX) coupled to a QTRAP 5500 triple quadrupole mass spectrometer (AB SCIEX). The PCR assays showed the presence of ARG in 57% of samples (n=42): tetB (4.8%; n=2), tetA (9.5%; n=4), tetE (2.4%; n=1), tetL (12%; n=5), tetM (26%; n=11), blaSHV (21.5%; n=9), blaTEM (4.8%; n=2) and blaCTX-M (19%; n=8). The mcr gene responsible for colistin resistance was not detected in any of the analyzed samples. The results of the LC-MS/MS analyses showed that none of the tested antibiotics exceeded the LOQ (100 ppb). The data obtained confirmed the presence of bacterial populations containing antibiotic resistance determinants such as tet genes (tetracyclines) and bla genes (β-lactams), widely used in human medicine, which can enter the food chain and represent a risk for consumers, especially with raw products. The presence of traces of antibiotic residues in vegetables, in concentrations below the LOQ of the LC-MS/MS method applied, cannot be excluded. In conclusion, traces of antibiotic residues could be a health risk to the consumer due to their potential involvement in the spread of AR. PCR represents a useful and effective approach to characterize and monitor AR carried by bacteria along the entire food chain.

Keywords: plant-based products, ARG, PCR, antibiotic residues

Procedia PDF Downloads 78
80 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method

Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan

Abstract:

The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-wastes and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-wastes. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid wastes are disposed of in a crude way at dumping sites. The addition of e-wastes, which often contain toxic heavy metals, into its waste stream has made the situation more difficult and challenging. Assessment of the generation of e-wastes is an important step towards addressing the challenges posed by e-wastes, setting targets, and identifying the best practices for their management. Understanding and proper management of e-wastes is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and availability of reliable baseline data on e-wastes will help in preventing illegal dumping, promote recycling, and create jobs in the recycling sectors, and thus facilitate sustainable e-waste management. With this objective in mind, the present study has attempted to estimate the amount of e-wastes and their future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, refrigerator, fan, mobile phone, computer, IT equipment, CFL (compact fluorescent lamp) bulbs, and air conditioner) have been collected from different sources.
Primary and secondary data on the collection, recycling, and disposal of the e-wastes have also been gathered by questionnaire survey, field visits, interviews, and formal and informal meetings with the stakeholders. The Material Flow Analysis (MFA) method has been applied, and mathematical models have been developed in the present study to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected electrical and electronic products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are the products' annual sale/import data, past and future sales data, and average life span. From the model outputs, it is estimated that the generation of e-wastes in Bangladesh in 2018 is 0.40 million tons, and by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amounts of e-waste generated from seven products are increasing, whereas only one product, the CFL bulb, shows a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-wastes enter the recycling business in Bangladesh, an amount which may increase in the near future.
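The core of the EOL estimation step described above can be sketched as follows: e-waste arising in a given year is taken as the units sold one average lifespan earlier, converted to mass. The sales figures, lifespan and unit mass below are invented placeholders, not the study's data.

```python
# Minimal sketch of the end-of-life (EOL) method used in MFA-based e-waste
# estimation. All figures are hypothetical, not the study's inputs.
def eol_ewaste_tonnes(sales_by_year, lifespan_years, unit_mass_kg, year):
    """E-waste (tonnes) arising in `year` from units sold `lifespan_years` ago."""
    units = sales_by_year.get(year - lifespan_years, 0)
    return units * unit_mass_kg / 1000.0

tv_sales = {2008: 500_000, 2009: 600_000}   # hypothetical annual TV sales
waste_2018 = eol_ewaste_tonnes(tv_sales, lifespan_years=10,
                               unit_mass_kg=20.0, year=2018)
print(waste_2018)  # tonnes of TV e-waste arising in 2018 under these inputs
```

Summing this quantity over all eight product categories and sweeping the year forward gives the projection-style totals reported in the abstract.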

Keywords: Bangladesh, end of life, e-waste, material flow analysis

Procedia PDF Downloads 180
79 Identifying Risk Factors for Readmission Using Decision Tree Analysis

Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka

Abstract:

This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404, and participation in this conference was supported by the Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost and is becoming the target of national policies. In Turkey, the topic of hospital readmission is relatively new on the agenda, and very few studies have been conducted on it. The aim of this study was to determine 30-day readmission rates and risk factors for readmission. Whether a readmission was planned, related to the prior admission, and avoidable or not was also assessed. The study was designed as a prospective cohort study. 472 patients hospitalized in the internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted. The overall readmission rate was calculated as 20% (95/472). However, only 31 readmissions were unplanned; the unplanned readmission rate was 6.5% (31/472). Out of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 of the related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square automatic interaction detector (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on 'if-then' logic, partitioning each independent variable into mutually exclusive subsets based on the homogeneity of the data.
The independent variables we included in the analysis were: clinic of the department, occupied beds/total number of beds in the clinic at the time of discharge, age, gender, marital status, educational level, distance to residence (km), number of people living with the patient, availability of a person to help with his/her care at home after discharge (yes/no), regular source (physician) of care (yes/no), day of discharge, length of stay, ICU utilization (yes/no), total comorbidity score, the means of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient's personal status, patient's knowledge, and patient's coping ability), and number of daycare admissions within 30 days of discharge. In the analysis, we included all 95 readmitted patients (46.12%) but, to balance the data, only 111 (53.88%) of our 377 non-readmitted patients. The risk factors for readmission were found to be total comorbidity score, gender, patient's coping ability, and patient's knowledge. The strongest identifying factor for readmission was the comorbidity score: if a patient's comorbidity score was higher than 1, the risk of readmission increased. The results of this study need to be validated on other data sets with more patients. However, we believe that this study will guide further studies of readmission and that CHAID is a useful tool for identifying risk factors for readmission.
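The core of the CHAID split selection can be illustrated in a few lines. CHAID itself is not in the common Python libraries, so this sketch shows only its root step: each categorical predictor is tested against the binary readmission outcome with a chi-square test, and the predictor with the smallest p-value wins the split. The patient data below are invented, not the study's cohort.

```python
# Sketch of CHAID's root-split selection via chi-square tests of independence.
# Predictors and outcome are synthetic; "comorbidity" is built to associate
# strongly with readmission while "gender" is pure noise.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n = 200
readmitted = rng.integers(0, 2, n)
comorbidity_high = (readmitted + rng.integers(0, 2, n) > 0).astype(int)
gender = rng.integers(0, 2, n)

def chi2_p(predictor, outcome):
    """p-value of the 2x2 chi-square test between a predictor and the outcome."""
    table = np.zeros((2, 2))
    for p, o in zip(predictor, outcome):
        table[p, o] += 1
    return chi2_contingency(table)[1]

p_values = {"comorbidity": chi2_p(comorbidity_high, readmitted),
            "gender": chi2_p(gender, readmitted)}
root_split = min(p_values, key=p_values.get)  # smallest p-value wins
print(root_split)
```

A full CHAID implementation would also merge predictor categories and recurse on each child node; this sketch stops at the single step that mirrors the "comorbidity score is the strongest factor" finding.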

Keywords: decision tree, hospital, internal medicine, readmission

Procedia PDF Downloads 247
78 Investigating the Thermal Comfort Properties of Mohair Fabrics

Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman

Abstract:

Mohair, obtained from the Angora goat, is a luxury fiber and recognized as one of the best quality natural fibers. Expansion of the use of mohair into technical and functional textile products necessitates a better understanding of how the use of mohair in fabrics will impact their thermo-physiological comfort-related properties. Despite its popularity, very little information is available on the quantification of the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fibrous matter composition and fabric structural parameters on conductive and convective heat transfer to attain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices, and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m2) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics occurs in environmental conditions where there is wind flow or where the wearer is moving (e.g., running or walking). The thermal comfort properties of mohair fibers were objectively evaluated, firstly in comparison with other textile fibers and secondly in a variety of fabric structures. Two sample sets were developed for this purpose, with fiber content, yarn structure and fabric design as the main variables. SEM and microscopic images were obtained to closely examine the physical structures of the fibers and fabrics. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument, and clothing insulation (clo) was calculated from these.
The thermal properties of the fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2 instrument). The effects of the different variables on fabric thermal comfort properties were analyzed statistically using TIBCO Statistica software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. It was found that, regarding thermal resistance related to conductive heat flow, the effect of fiber type was not always statistically significant, probably as a result of the amount of trapped air within the fabric structure. The very low thermal conductivity of air, compared to that of the fibers, had a significant influence on the total conductivity and thermal resistance of the samples. This was confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to allow dry heat to move through the structure was again fabric thickness. However, it would be wrong to totally disregard the effect of fiber composition on the thermal resistance of textile fabrics. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others even though the weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibers, which make them exceptionally effective at trapping air among fibers (in a yarn) as well as among yarns (inside a fabric structure). The thicker structures trap more air to provide higher thermal insulation, but they also prevent the free flow of air that allows thermal convection.
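The conduction quantities discussed above relate to each other through two simple formulas: thermal resistance R = thickness / conductivity, and clothing insulation clo = R / 0.155 (since 1 clo = 0.155 m²·K/W). The sample values below are illustrative, not measurements from the study.

```python
# Back-of-the-envelope sketch of the conduction quantities measured on the
# Alambeta instrument. Thickness and conductivity values are hypothetical.
def thermal_resistance(thickness_m, conductivity_w_mk):
    """R in m^2*K/W for a flat fabric layer: R = thickness / conductivity."""
    return thickness_m / conductivity_w_mk

def clo(resistance_m2kw):
    """Clothing insulation: 1 clo = 0.155 m^2*K/W."""
    return resistance_m2kw / 0.155

# A hypothetical 2 mm fabric with an effective conductivity of 0.045 W/(m*K)
# (the trapped air pulls the effective value well below that of the fiber).
R = thermal_resistance(0.002, 0.045)
print(round(R, 4), round(clo(R), 2))
```

The dependence of R on thickness in the numerator is exactly why, as the results above note, thickness correlates so strongly with thermal resistance.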

Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance

Procedia PDF Downloads 134
77 Magnesium Nanoparticles for Photothermal Therapy

Authors: E. Locatelli, I. Monaco, R. C. Martin, Y. Li, R. Pini, M. Chiariello, M. Comes Franchini

Abstract:

Despite the many advantages of the application of nanomaterials in the field of nanomedicine, increasing concerns have been expressed about their potential adverse effects on human health, and there is urgency for novel green strategies toward materials with enhanced biocompatibility made using safe reagents. Photothermal ablation therapy, which exploits a localized heat increase of a few degrees to kill cancer cells, has appeared recently as a non-invasive and highly efficient therapy against various cancer types; however, new agents able to generate hyperthermia when irradiated are needed, and they must have proven biocompatibility in order to avoid damage to healthy tissues and prevent toxicity. Recently, there has been increasing interest in magnesium as a biomaterial: it is the fourth most abundant cation in the human body, and it is essential for human metabolism. However, magnesium nanoparticles (Mg NPs) have had limited diffusion due to the high reduction potential of magnesium cations, which makes NP synthesis challenging. Herein, we report the synthesis of Mg NPs and their surface functionalization to obtain a stable and biocompatible nanomaterial suitable for photothermal ablation therapy against cancer. We synthesized the Mg crystals by reducing MgCl2 with metallic lithium and exploiting naphthalene as an electron carrier: the lithium-naphthalene complex acts as the real reducing agent. Firstly, the nanocrystal particles were coated with the ligand 12-ethoxy ester dodecanehydroxamic acid and then entrapped into water-dispersible polymeric micelles (PMs) made of the FDA-approved PLGA-b-PEG-COOH copolymer using the oil-in-water emulsion technique. Later, we developed a more straightforward methodology by introducing chitosan, a highly biocompatible natural product, at the beginning of the process, simultaneously with the lithium-naphthalene complex, thus obtaining a one-pot procedure for the formation and surface modification of the Mg NPs.
The obtained Mg NPs were purified and fully characterized, showing diameters in the range of 50-300 nm. Notably, when coated with chitosan, the particles remained stable as a dry powder for more than 10 months. We proved the possibility of generating a temperature rise of a few to several degrees once the Mg NPs were illuminated using an 810 nm diode laser operating in continuous wave mode: the temperature rise was significant (0-15 °C) and concentration-dependent. We then investigated the potential cytotoxicity of the Mg NPs using HN13 epithelial cells, derived from a head and neck squamous cell carcinoma, and the Hepa1-6 cell line, derived from a hepatocellular carcinoma; very low toxicity was observed for both nanosystems. Finally, in vivo photothermal therapy was performed on xenograft Hepa1-6 tumor-bearing mice: the animals were treated with the chitosan-coated Mg NPs and showed no sign of suffering after the injection. After 12 hours, the tumor was exposed to near-infrared laser light. The results clearly showed extensive damage to the tumor tissue after only 2 minutes of laser irradiation at 3 W cm-1, while no damage was reported when the tumor was treated with the laser and saline alone in the control group. Despite the lower photothermal efficiency of Mg with respect to Au NPs, we consider Mg NPs a promising, safe and green candidate for future clinical translation.

Keywords: chitosan, magnesium nanoparticles, nanomedicine, photothermal therapy

Procedia PDF Downloads 261
76 Measuring Digital Literacy in the Chilean Workforce

Authors: Carolina Busco, Daniela Osses

Abstract:

The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. There are no methodological instruments available in Chile to measure the workforce's digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To develop a digital literacy model of indicators and a research instrument for this purpose, along with an exploratory analysis of the data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied with six experts, whose observations were incorporated into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a Likert scale of four values ranked according to the level of agreement; iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 items with the Likert scale. The KMO test showed a value of 0.626, indicating a medium level of correlation, whereas Bartlett's test yielded a significance value of less than 0.05; Cronbach's alpha was 0.618.
Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students, middle-class university students, Ph.D. professors, low-income working women, elderly individuals, and a group of rural workers. The digital divide and its social and economic correlates are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately puts the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills is radically different when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
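One of the reliability statistics reported above, Cronbach's alpha, can be computed directly from an item-response matrix. The sketch below uses invented Likert responses (values 1-4, as in the survey); the study's real item data are not reproduced here.

```python
# Sketch of Cronbach's alpha computed from scratch on synthetic Likert data:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
base = rng.integers(1, 5, (50, 1))            # shared "trait" per respondent
noise = rng.integers(-1, 2, (50, 12))         # per-item perturbation
responses = np.clip(base + noise, 1, 4)       # 12 correlated 4-point items
alpha = cronbach_alpha(responses)
print(0 < alpha < 1)
```

Because the 12 synthetic items share a common component, alpha comes out well above zero; fully independent items would drive it toward zero.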

Keywords: digital literacy, digital society, workforce digitalization, digital skills

Procedia PDF Downloads 60
75 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data

Authors: Linna Li

Abstract:

The capability to move around places is undoubtedly important for individuals to maintain good health and social functions. People's activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly those focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commutes. Understanding people's travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people's trajectories in the Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area, including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. The information associated with tweets includes text, time, location, and user ID; the information associated with users includes name, number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. The locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people's movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time covering 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z-axis can be selected for displaying the activity density surface within a particular time period.
In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on ancillary layers, including land use and road networks, in the 3D visualization, the resulting realistic view of the urban environment boosts the situational awareness of the map reader. A comparison of the same individual's paths on different days shows regular patterns on weekdays for some Twitter users, but for others, the daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using a big data approach, and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of collecting data on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.
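The kernel density estimation step used for the hot-spot surfaces can be sketched as below. The coordinates are synthetic stand-ins for georeferenced tweet locations at one time instant; they are not the project's data.

```python
# Sketch of KDE over point locations, as used to build activity hot-spot
# surfaces from tweet coordinates. Two invented clusters of lon/lat points
# stand in for real geotagged tweets.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
cluster_a = rng.normal([-118.24, 34.05], 0.02, (300, 2))  # hypothetical hub
cluster_b = rng.normal([-118.49, 34.01], 0.02, (100, 2))  # smaller hub
points = np.vstack([cluster_a, cluster_b]).T              # shape (2, n)

kde = gaussian_kde(points)
# Density at the centre of the large cluster vs a point far from both.
dense = kde(np.array([[-118.24], [34.05]]))[0]
sparse = kde(np.array([[-118.00], [34.30]]))[0]
print(dense > sparse)
```

Evaluating the fitted `kde` on a regular lon/lat grid yields the density surface that is then rendered at each time slice of the 3D interface.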

Keywords: geovisualization, human mobility pattern, Los Angeles, social media

Procedia PDF Downloads 106
74 Taking the Good with the Bad: Psychological Well-Being and Social Integration in Russian-Speaking Immigrants in Montreal

Authors: Momoka Sunohara, Ashley J. Lemieux, Esther Yakobov, Andrew G. Ryder, Tomas Jurcik

Abstract:

Immigration brings changes in many aspects of an individual's life, from social support dynamics to housing and language, as well as difficulties with regard to discrimination, trauma, and loss. Past research has mostly emphasized individual differences in mental health and has neglected the impact of the social-ecological context, such as acculturation and ethnic density. Purpose: The present study aimed to assess the relationship between variables associated with social integration, such as perceived ethnic density and ways of coping, and psychological adjustment in a rapidly growing non-visible minority group of immigrants in Canada. Data: A small subset of archival data from our previously published study was reanalyzed with additional variables. The data included information from 269 Russian-speaking immigrants in Montreal, Canada. Method: Canonical correlation analysis (CCA) was used to investigate the relationship between two sets of variables. SAS PROC CANCORR was used to conduct CCA on a set of social integration variables, including ethnic density, discrimination, social support, family functioning, and acculturation, and a set of psychological well-being variables, including distress, depression, self-esteem, and life satisfaction. In addition, canonical redundancy analysis was performed to calculate the proportion of variance of the original variables explained by their own canonical variates. Results: Significance tests using Rao's F statistic indicated that the first two canonical correlations (i.e., r1 = 0.64, r2 = 0.40) were statistically significant (p-value < 0.0001). Additionally, the canonical redundancy analysis showed that the first two well-being canonical variates explained 62.9% and 12.8%, respectively, of the variance of the standardized well-being variables, whereas the first two social integration canonical variates explained 14.7% and 16.7%, respectively, of the variance of the standardized social integration variables.
These results support the selection of the first two canonical correlations. We then interpreted the derived canonical variates based on their canonical structure (i.e., their correlations with the original variables). Two observations can be drawn. First, individuals who have adequate social support, and who, as a family, cope by acquiring social support, mobilizing others, and reframing, are more likely to have better self-esteem, greater life satisfaction, and fewer feelings of depression or distress. Second, individuals who feel discriminated against yet score higher on a mainstream acculturation scale, and who, as a family, cope by acquiring social support, mobilizing others, and using spirituality while using fewer passive strategies, are more likely to have better life satisfaction but also a higher degree of depression. Implications: This model may serve to explain the complex interactions that exist between social and emotional adjustment and aid in facilitating the integration of individuals immigrating into new communities. The same group may experience greater depression but, paradoxically, improved life satisfaction associated with their coping process. Such findings need to be placed in the context of Russian cultural values. For instance, some Russian speakers may value the expression of negative emotions with significant others during the integration process; this in turn may make negative emotions more salient, but also facilitate a greater sense of family and community connection, as well as life satisfaction.

Keywords: acculturation, ethnic density, mental health, Russian-speaking

Procedia PDF Downloads 469
73 Law of the River and Indigenous Water Rights: Reassessing the International Legal Frameworks for Indigenous Rights and Water Justice

Authors: Sultana Afrin Nipa

Abstract:

Life on Earth cannot thrive or survive without water. Water is intimately tied to community, culture, spirituality, identity, socio-economic progress, security, self-determination, and livelihood. Thus, access to water is a United Nations recognized human right due to its significance in these realms. However, there is often conflict between those who consider water a spiritual and cultural value and those who consider it an economic value; the former is threatened by economic development, corporate exploitation, government regulation, and increased privatization, highlighting the complex relationship between water and culture. The Colorado River basin is home to over 29 federally recognized tribal nations. To these tribes, the river holds cultural, economic, and spiritual significance that often extends to deep human-to-non-human connections frequently precluded by Westphalian regulations and settler laws. Despite the recognition of access to rivers as a fundamental human right by the United Nations, tribal communities and their water rights have been historically disregarded through, inter alia, colonization and the dispossession of their resources. The Law of the River, including the 'Winters Doctrine', the 'Bureau of Reclamation (BOR)' and the 'Colorado River Compact', has shaped water governance among the stakeholders, yet tribal communities have been systematically excluded from these key agreements. While the Winters Doctrine acknowledged that tribes have the right to withdraw water from the rivers that pass through their reservations for self-sufficiency, the establishment of the BOR led to the construction of dams without tribal consultation, contravening the Winters doctrine and violating these rights. The Colorado River Compact, which granted only 20% of the water to the tribes, diminishes the significance of international legal frameworks that prioritize indigenous self-determination and the free pursuit of socio-economic and cultural development.
The denial of this basic water right is a denial of the 'recognition' of tribal sovereignty and self-determination, which calls into question the effectiveness of international law. This review assesses the international legal frameworks concerning indigenous rights and water justice and aims to pinpoint the gaps hindering the effective recognition and protection of Indigenous water rights in the Colorado River Basin. The study draws on a combination of historical and qualitative data sets. The historical data encompass the case settlements provided by the Bureau of Reclamation (BOR), notably the Native American water rights settlements in the lower Colorado basin related to Arizona from 1979 to 2008. This material serves to substantiate the context of the promises made to the Indigenous people and establishes connections between the existing entities. The qualitative data consist of observations of recorded meetings of the Central Arizona Project (CAP), used to evaluate how the previously made promises are reflected now. The study finds a significant inconsistency in participation in the decision-making process and a lack of representation of Native American tribes in water resource management discussions. It highlights the ongoing challenges faced by Indigenous people in achieving their self-determination goals despite the legal arrangements.

Keywords: colorado river, indigenous rights, law of the river, water governance, water justice

Procedia PDF Downloads 22
72 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the addition of metal salts, polymers and polyelectrolytes for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as 'sacrificial electrodes'. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow. 
The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross-section reactor configurations and one multiple concentric annular cross-section configuration were studied. The theoretical head loss was higher than the laboratory observations in some tests and lower in others, depending also on the assumed wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that the flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented in the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
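The friction head-loss comparison described above rests on the Darcy-Weisbach relation applied with the hydraulic diameter of the annulus. A minimal sketch of that calculation follows, assuming the circular-pipe laminar correlation (f = 64/Re) and the Swamee-Jain explicit turbulent formula rather than the annulus-specific equations the study actually evaluates; the function name and default fluid properties are illustrative.

```python
import math

def annulus_head_loss(q, d_outer, d_inner, length, nu=1.0e-6, eps=1.5e-6, g=9.81):
    """Darcy-Weisbach head loss for flow through a concentric annulus,
    using the hydraulic-diameter approximation Dh = d_outer - d_inner.
    q: flow rate [m^3/s]; diameters and length in m; nu: kinematic viscosity [m^2/s];
    eps: wall roughness [m]. Returns (head loss [m], Reynolds number)."""
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)   # annular cross-section area
    v = q / area                                       # mean flow velocity
    dh = d_outer - d_inner                             # hydraulic diameter of the annulus
    re = v * dh / nu                                   # Reynolds number
    if re < 2300:
        f = 64.0 / re                                  # laminar, circular-pipe correlation
    else:
        # Swamee-Jain explicit approximation to the Colebrook equation
        f = 0.25 / math.log10(eps / (3.7 * dh) + 5.74 / re**0.9) ** 2
    hf = f * (length / dh) * v**2 / (2.0 * g)          # Darcy-Weisbach equation
    return hf, re
```

Note that for a narrow concentric annulus the laminar friction factor actually approaches 96/Re rather than 64/Re, which is one reason the study compares several annulus-specific equations against laboratory measurements.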

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 211
71 Moodle-Based E-Learning Course Development for Medical Interpreters

Authors: Naoko Ono, Junko Kato

Abstract:

According to the Ministry of Justice, 9,044,000 foreigners visited Japan in 2010, and the number of foreign residents in Japan was over 2,134,000 at the end of 2010. Further, medical tourism has emerged as a new area of business. Against this background, language barriers put the health of foreigners in Japan at risk, because they have difficulty accessing health care and communicating with medical professionals. Medical interpreting training is urgently needed in response to the language problems resulting from the rapid increase in the number of foreign workers in Japan over recent decades. In particular, with Tokyo selected as the host city of the 2020 Summer Olympics, there is a growing need for international languages of communication in Japanese medical settings. Because practical training opportunities in medical interpreting are limited, it is difficult for learners to acquire interpreting skills. To address this shortcoming, a web-based English-Japanese medical interpreting training system was developed. We conducted a literature review to identify learning contents and core competencies for medical interpreters, using PubMed, PsycINFO, the Cochrane Library, and Google Scholar. Eleven papers indicating core competencies for medical interpreters were selected. The core competencies abstracted from the literature review were consistent across previous research, whilst the content of domestic and international training programs for medical interpreters varied. The review indicated five core competencies: (a) maintaining accuracy and completeness; (b) medical terminology and understanding the human body; (c) behaving ethically and making ethical decisions; (d) nonverbal communication skills; and (e) cross-cultural communication skills. 
We then developed a web-based medical interpreter training program, delivered as an e-learning course, covering these competencies. The program included the following: an online word list (Quizlet), allowing students to study online and on their smartphones; a self-study tool (Quizlet) for help with dictation and spelling; a word quiz (Quizlet); a test-generating system (Quizlet); an interactive body game (BBC); an online resource for understanding the code of ethics in medical interpreting; a webinar about non-verbal communication; and a webinar about incompetent vs. competent cultural care. The design of a virtual environment allows the execution of complementary experimental exercises for learners of medical interpreting and an introduction to the theoretical background of medical interpreting. Since this system adopts a self-learning style, it might ease the time constraints and the lack of teaching materials that limit the classroom method. In addition, as a teaching aid, virtual medical interpreting is a powerful resource for understanding how actual medical interpreting is carried out. The developed e-learning system allows remote access, enabling students to perform exercises at their own location without being physically present in the actual laboratory. The web-based virtual environment empowers students by granting them access to laboratories during their free time. A practical example will be presented to show the capabilities of the system. The developed web-based training program for medical interpreters could bridge the gap between medical professionals and patients with limited English proficiency.

Keywords: e-learning, language education, moodle, medical interpreting

Procedia PDF Downloads 349
70 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, largely because of insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect accurate 24/7/365 traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for these tasks. This paper will also showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain and consists of millions of densely connected processing nodes. In this form of machine learning, called deep learning, the neural net learns to recognize vehicles through training. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in tasks such as classifying objects into fine-grained categories, can even outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions; currently, one-third of transportation agencies do not collect pedestrian and bike data. 
We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited hand-picked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on data sets acquired across a variety of daily real-world road conditions and compared with the performance of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
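The convolution that gives Convolutional Neural Networks their name can be illustrated independently of any particular detection product. The sketch below is a generic single-channel, valid-mode convolution (cross-correlation) followed by a ReLU non-linearity; the function and the edge-detecting kernel in the usage note are illustrative, not part of the system described above.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation with ReLU, the core operation of a CNN layer.
    image: 2D array (H, W); kernel: 2D array (kh, kw). Returns (H-kh+1, W-kw+1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # element-wise product of the kernel with each image window, summed
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU non-linearity
```

Applied to an image containing a vertical intensity step, a kernel such as `[[-1, 0, 1]]` repeated over three rows responds strongly where the brightness changes, which is the kind of low-level feature a trained CNN composes into vehicle, pedestrian, and bike detectors.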

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 148
69 Tailoring Structural, Thermal and Luminescent Properties of Solid-State MIL-53(Al) MOF via Fe³⁺ Cation Exchange

Authors: T. Ul Rehman, S. Agnello, F. M. Gelardi, M. M. Calvino, G. Lazzara, G. Buscarino, M. Cannas

Abstract:

Metal-Organic Frameworks (MOFs) have emerged as promising candidates for detecting metal ions owing to their large surface area, customizable porosity, and diverse functionalities. In recent years, there has been a surge in research focused on MOFs with luminescent properties. These frameworks are constructed through coordinated bonding between metal ions and multi-dentate ligands, resulting in inherently fluorescent structures. Their luminescent behavior is influenced by factors such as structural composition, surface morphology, pore volume, and interactions with target analytes, particularly metal ions. MOFs exhibit various sensing mechanisms, including photo-induced electron transfer (PET) and charge transfer processes such as ligand-to-metal (LMCT) and metal-to-ligand (MLCT) transitions. Among these, MIL-53(Al) stands out for its flexibility, stability, and specific affinity towards certain metal ions, making it a promising platform for selective metal ion sensing. This study investigates the structural, thermal, and luminescent properties of the MIL-53(Al) metal-organic framework (MOF) upon Fe³⁺ cation exchange. Two separate sets of samples were prepared by activating the MOF powder at different temperatures. The first set, referred to as MIL-53(Al) activated (120°C), was prepared by activating the raw powder in a glass tube at 120°C for 12 hours and then sealing it. The second set, referred to as MIL-53(Al) activated (300°C), was prepared by activating the MIL-53(Al) powder in a glass tube at 300°C for 70 hours. Additionally, 25 mg of MIL-53(Al) powder was dispersed in 5 mL of Fe³⁺ solution at various concentrations (0.1-100 mM) for the cation exchange experiment. The suspension was centrifuged for five minutes at 10,000 rpm to extract the MIL-53(Al) powder. After three rounds of washing with ultrapure water, the MIL-53(Al) powder was heated at 120°C for 12 hours. A sample of the obtained MIL-53(Al) was used for the PXRD and TGA analyses. 
We also activated the cation-exchanged samples for time-resolved photoluminescence (TRPL) measurements at two distinct temperatures (120 and 300°C) for comparative analysis. Powder X-ray diffraction patterns reveal amorphization in samples with higher Fe³⁺ concentrations, attributed to alterations in coordination environments and ion exchange dynamics. Thermal decomposition analysis shows reduced weight loss in Fe³⁺-exchanged MOFs, indicating enhanced stability due to stronger metal-ligand bonds and altered decomposition pathways. Raman spectroscopy demonstrates intensity decrease, shape disruption, and frequency shifts, indicative of structural perturbations induced by cation exchange. Photoluminescence spectra exhibit ligand-based emission (π-π* or n-π*) and ligand-to-metal charge transfer (LMCT), influenced by activation temperature and Fe³⁺ incorporation. Quenching of luminescence intensity and shorter lifetimes upon Fe³⁺ exchange result from structural distortions and Fe³⁺ binding to the organic linkers. In summary, this research underscores the complex interplay between composition, structure, and properties in MOFs, offering insights into their potential for diverse applications in catalysis, gas storage, and luminescent devices.

Keywords: Fe³⁺ cation exchange, luminescent metal-organic frameworks (LMOFs), MIL-53(Al), solid-state analysis

Procedia PDF Downloads 50
68 A Mainstream Aesthetic for African American Female Filmmakers

Authors: Tracy L. F. Worley

Abstract:

This presentation explores the environment that has limited leadership opportunities for Black women in cinema and advocates for autonomy among Black women filmmakers, facilitated by strong internal and external networks and cooperative opportunities. Early images of African Americans in motion pictures were often conceptualized from the viewpoint of a White male director and depicted by White actors. Black film evolved in opposition to this context, leading to a Black film aesthetic. The oppositional context created in response to racist, misogynistic, and sexist representations in motion pictures sets the tone for female filmmakers of every hue – but especially for African American women. For them, the context of a male gaze, and for all intents and purposes a White male gaze, forces them to create their own aesthetic. Theoretically, men and women, filmmakers and spectators, have different perspectives across race, ethnicity, and gender. Two feminist theorists, bell hooks and Mary Ann Doane, suggest that female filmmakers are perceived as disparate from male filmmakers and that women, in general, are defined by what men see. Mary Ann Doane, a White feminist film theorist, has focused extensively on female spectatorship and on (White) women in general as the object of the male gaze. Her discussion of the female body, male perception of it, and feminism in the motion picture industry supports the suggestion that comprehending the organization and composition of Hollywood is critical to understanding women’s roles in the industry. Although much of her research addresses the silent film era and women’s roles then, Doane suggests that across cinematic periods, the theory assigned to the “cinematic apparatus” is formulated within a context of sexuality. 
Men and women are viewed and treated differently in cinema (in front of and behind the camera), with women’s attractiveness and allure photographed specifically for the benefit of the “spectatorial desire” of the male gaze. bell hooks, an African American feminist writer and theorist with more than 30 published books and articles on race, gender, class, and culture in feminism and education, suggests that women can overcome the male gaze by using their “oppositional gaze” to transform reality and establish their own truth. She addresses gender within the context of race by acknowledging the realities faced by African American women and the fact that the feminist movement was never intended to include Black women. A grounded theory study led to the development of a leadership theory that explains why African American women are underrepresented in mainstream motion picture leadership. The study helped reveal the barriers to entry and illuminated potential strategies that African American female motion picture directors might pursue to reduce this inequity. Using semi-structured interviews as the primary means of data collection, the lived experiences of African American female directors and organizational leadership’s perceived role in the perpetuation of negative female imagery in major motion pictures led to the identification of support strategies for African American female motion picture directors that counter social stereotyping and validate the need for social networking in the mainstream.

Keywords: African American, cinema, directors, filmmaking, leadership, women

Procedia PDF Downloads 52
67 Design of Evaluation for Ehealth Intervention: A Participatory Study in Italy, Israel, Spain and Sweden

Authors: Monika Jurkeviciute, Amia Enam, Johanna Torres Bonilla, Henrik Eriksson

Abstract:

Introduction: Many evaluations of eHealth interventions conclude that the evidence for improved clinical outcomes is limited, especially when the intervention is short, such as one year. Often, the evaluation design does not address the feasibility of achieving clinical outcomes: evaluations are designed to reflect the clinical goals of the intervention without utilizing the opportunity to illuminate effects on organizations and costs. A comprehensive evaluation design can better support decision-making regarding the effectiveness and potential transferability of eHealth. Hence, the purpose of this paper is to present a feasible and comprehensive design of evaluation for an eHealth intervention, including the design process in different contexts. Methodology: The situation of limited feasibility of clinical outcomes was foreseen in the European Union funded project “DECI” (“Digital Environment for Cognitive Inclusion”), run under the “Horizon 2020” program with the aim of defining and testing a digital environment platform, within corresponding care models, that helps elderly people live independently. A complex intervention of eHealth implementation into elaborate care models in four different countries was planned for one year. To design the evaluation, a participative approach was undertaken using Pettigrew’s lens of change and transformation, covering context, process, and content. Through a series of workshops, observations, interviews, and document analysis, as well as a review of the scientific literature, a comprehensive evaluation design was created. Findings: The findings indicate that in order to obtain evidence on clinical outcomes, eHealth interventions should last longer than one year. The content of the comprehensive evaluation design includes a collection of qualitative and quantitative data-gathering methods that illuminate non-medical aspects. 
Furthermore, it contains communication arrangements to discuss the results and continuously improve the evaluation design, as well as procedures for monitoring and improving the data collection during the intervention. The process of the comprehensive evaluation design consists of four stages: (1) analysis of the current state in the different contexts, including measurement systems, stakeholder expectations and profiles, organizational ambitions to change due to eHealth integration, and the organizational capacity to collect data for evaluation; (2) a workshop with project partners to discuss the as-is situation in relation to the project goals; (3) development of general and customized sets of relevant performance measures, questionnaires and interview questions; (4) setting up procedures and monitoring systems for the interventions. Lastly, strategies are presented on how challenges can be handled during the design process of evaluation in four different countries. The evaluation design needs to consider contextual factors such as project limitations and differences between pilot sites in terms of eHealth solutions, patient groups, care models, and national and organizational cultures and settings. This implies a need for a flexible approach to evaluation design to enable judgment of the effectiveness and potential for adoption and transferability of eHealth. In summary, this paper provides learning opportunities for future evaluation designs of eHealth interventions in different national and organizational settings.

Keywords: ehealth, elderly, evaluation, intervention, multi-cultural

Procedia PDF Downloads 310
66 The Practices Perspective in Communication, Consumer and Cultural Studies: A Post-Heideggerian Narrative

Authors: Tony Wilson

Abstract:

This paper sets out a practices perspective, or practices theory, which has become pervasive from business to sociological studies. In doing so, it locates the perspective historically (in the work of the philosopher Heidegger) and provides a contemporary illustration of its application to communication, consumer and cultural studies as central to this conference theme. The structured account of practices (articulated in eight ‘axioms’) presented towards the conclusion of this paper is an initial statement - planned to encourage further detailed qualitative and systematic research in areas of interest to the conference. Practice theories of the equipped and situated construction of participatory meaning (as in media and marketing consumption) are frequently characterized as lacking common ground, or core principles. This paper explores whether, by retracing a journey to earlier philosophical underwriting, a shared territory promoting new research can be located in current philosophical hermeneutics. Moreover, through returning to hermeneutic first principles, the paper shows that a series of spatio-temporal metaphors becomes available, appropriate to analyzing communication as a process across the disciplines in which it is considered. Thus one can argue, for instance, that media users engage (enter) digital text from their diverse ‘horizons of expectation’ in a productive, enlarging ‘fusion’ of horizons of understanding, thereby ‘projecting’ a new narrative, integrated in a ‘hermeneutic circle’ of meaning. A politics of communication studies may contest a horizon of understanding - so engaging in critical ‘distancing’. Marketing’s consumers can occupy particular places on a horizon of understanding. Media users pass over borders of changing, revised perspectives. Practices research can now not only be discerned in multiple disciplines but equally crosses disciplines. 
The ubiquitous practice of media use by managers and visitors in a shopping mall - the mediatization of malls - invites investigation not just with media studies expertise but also from an interpretive marketing perspective. How have mediated identities of person or place been changed? Emphasizing the understanding of entities in a material environment as ‘equipment’, practices theory enables the quantitative correlation of use with demographic variables as a ‘Zeug Score’. Human behavior is fundamentally habitual - shaped by its tacit assumptions - and only occasionally interrupted by reflection. Practices theory acknowledges such action to be minimally monitored yet nonetheless considers it as constructing narrative. Thus presented in research, ‘storied’ behavior can be seen to be (in)formed and shaped by a shifting hierarchy of ‘horizons’ or perspectives - from habituated to reflective - rather than by a single seamless narrative. Taking a communication practices perspective here avoids conflating tacit, transformative and theoretical understanding in research. In short, a historically grounded and unifying statement of contemporary practices theory will enhance its potential as a tool in communication, consumer and cultural research, landscaping interpretative horizons of human behaviour by exploring widely the culturally (in)formed narratives equipping, and incorporated (reflectively or unreflectively) in, people’s everyday lives.

Keywords: communication, consumer, cultural practices, hermeneutics

Procedia PDF Downloads 257
65 Tectonics of Out-of-Sequence Thrusting in NW Himachal Himalaya, India

Authors: Rajkumar Ghosh

Abstract:

The Jhakri Thrust (JT), Sarahan Thrust (ST), and Chaura Thrust (CT) are the three out-of-sequence thrusts (OOSTs) along the Jhakri-Chaura segment of the Sutlej river valley in Himachal Pradesh. The CT is deciphered only by apatite fission track dating; such geochronological information is not currently available for the Jhakri and Sarahan thrusts, and the JT was additionally validated as an OOST without any dating. The described rock types include ductile sheared gneisses and upper greenschist-amphibolite facies metamorphosed schists. Locally, the Munsiari (Jutogh) Thrust is referred to as the JT. The JT, a brittle shear zone, borders the study area's southern margin, and the CT, a ductile shear zone, its northern margin. The JT dips 50° west, verges south-westward, and lies at 15-17 km depth. Previous researchers observed a progressive rise in strain towards the JT zone based on microstructural studies. The high-temperature ranges of the MCT root zone are cited in the current work as supporting evidence for the ductile nature of the OOST. In Himachal Pradesh, the lithological boundaries for the OOSTs are not fixed. In contrast, the Sarahan Thrust strikes NW-SE and is 50-80 m wide. The ST and CT are probably equivalent and are marked by a sheared biotite-chlorite matrix with a top-to-SE kinematic indicator. It is inferred from cross-section balancing that the CT is folded with this anticlinorium. These thrust systems consist of several branches, some of which are still active. The thrust system exhibits a complex internal geometry consisting of box folds, boudins, scar folds, crenulation cleavages, kink folds, and tension gashes. Box folds are observed on the hanging wall of the Chaura Thrust. The ductile signature of the CT reflects a downward steepening of the thrust. After the STDSU stopped deforming, out-of-sequence thrusting was initiated in some sections of the Higher Himalaya. A part of the GHC and part of the LH are thrust southwestward along the Jutogh Thrust/Munsiari Thrust/JT as the Jutogh Nappe. 
The CT is concealed beneath the Jutogh Thrust sheet; hence the basal part of the GHC is unexposed at the surface in the Sutlej River section. Fieldwork and microstructural studies of the Greater Himalayan Crystalline (GHC) along the Sutlej section reveal (a) an initial top-to-SW sense of ductile shearing (CT); (b) brittle-ductile extension (ST); and (c) a uniform top-to-SW sense of brittle shearing (JT). A group of schistose rock samples from the Jutogh Group of the Greater Himalayan Crystalline and quartzite samples from the Rampur Group of the Lesser Himalayan Crystalline were analyzed. No physiographic transition in the area marks a break in the landscape attributable to the OOSTs. To date, OOSTs in the GHC have been interpreted mainly from geochronological studies, and proper field evidence is missing. Apart from minimal documentation of OOSTs in geological mapping, there is a lack of suitable rock exposure for generalizing the field characteristics of OOSTs in the NW Higher Himalaya. Multiple sets of thrust planes may be activated within this zone, or there may be a single zone along which the OOSTs are engaged.

Keywords: out-of-sequence thrust, main central thrust, grain boundary migration, South Tibetan detachment system, Jakhri Thrust, Sarahan Thrust, Chaura Thrust, higher Himalaya, greater Himalayan crystalline

Procedia PDF Downloads 64
64 Characterizing the Spatially Distributed Differences in the Operational Performance of Solar Power Plants Considering Input Volatility: Evidence from China

Authors: Bai-Chen Xie, Xian-Peng Chen

Abstract:

China has become the world's largest energy producer and consumer, and its development of renewable energy is of great significance to global energy governance and the fight against climate change. The rapid growth of solar power in China could help it achieve its ambitious carbon peak and carbon neutrality targets early. However, the non-technical costs of solar power in China are much higher than international levels, meaning that inefficiencies are rooted in poor management and improper policy design and that efficiency distortions have become a serious challenge to the sustainable development of the renewable energy industry. Unlike fossil energy generation technologies, the output of solar power is closely related to the volatile solar resource, and the spatial unevenness of solar resource distribution leads to potential spatial differences in efficiency. It is necessary to develop an efficiency evaluation method that accounts for the volatility of solar resources and explores how natural geography and the social environment shape the spatial distribution of efficiency, in order to uncover the root causes of managerial inefficiencies. The study treats solar resources as stochastic inputs, introduces a chance-constrained data envelopment analysis model combined with the directional distance function, and measures the solar resource utilization efficiency of 222 solar power plants in representative photovoltaic bases in northwestern China. Through meta-frontier analysis, we measured the characteristics of different power plant clusters and compared the differences among groups, discussed the mechanisms by which environmental factors influence inefficiencies, and performed statistical tests using the system generalized method of moments (GMM). 
Rational siting of power plants is a systematic project that requires careful consideration of the full utilization of solar resources, low transmission costs, and guaranteed power consumption. Suitable temperature, precipitation, and wind speed can improve the working performance of photovoltaic modules; reasonable terrain inclination can reduce land costs; and proximity to cities strongly guarantees the consumption of electricity. The density of electricity demand and of high-tech industries is more important than resource abundance because it triggers the clustering of power plants, resulting in good demonstration and competitive effects. To ensure renewable energy consumption, increased support for rural grids and encouragement of direct trading between generators and neighboring users will provide solutions. The study provides proposals for improving the full life-cycle operational activities of solar power plants in China, so as to reduce high non-technical costs and improve competitiveness against fossil energy sources.
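To give a concrete feel for the efficiency measurement, the sketch below implements the classical input-oriented CCR data envelopment analysis model as a linear program. It deliberately omits the chance constraints and the directional distance function used in the study, so it is a deterministic simplification; the function name and the toy data in the test are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR DEA efficiency of decision-making unit (DMU) k.
    X: (n_dmu, n_inputs) input matrix; Y: (n_dmu, n_outputs) output matrix.
    Solves: min theta  s.t.  X^T lambda <= theta * x_k,  Y^T lambda >= y_k,  lambda >= 0.
    Returns theta in (0, 1]; theta == 1 means DMU k lies on the efficient frontier."""
    n, m = X.shape
    s = Y.shape[1]
    # decision vector z = [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                     # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    # input constraints: sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_ub[:m, 0] = -X[k]
    A_ub[:m, 1:] = X.T
    # output constraints: -sum_j lambda_j * y_rj <= -y_rk
    A_ub[m:, 1:] = -Y.T
    b_ub[m:] = -Y[k]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

For two units producing the same output, where the second uses twice the input, the model returns efficiencies of 1.0 and 0.5, respectively. The chance-constrained variant used in the study replaces the deterministic input constraint for the stochastic solar resource with one that must hold at a chosen probability level.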

Keywords: solar power plants, environmental factors, data envelopment analysis, efficiency evaluation

Procedia PDF Downloads 76
63 Problem, Policy and Polity in Agenda Setting: Analyzing Safe Motherhood Program in India

Authors: Vanita Singh

Abstract:

In developing countries, political agendas conflict, and policy makers must prioritize among many issues competing for limited resources. It is therefore important to understand why some issues gain attention in policy circles while others do not. Kingdon's (1984) Multiple-Streams Theory is among the influential theories that explain the public policy process, and it helps health policy makers understand how certain health issues emerge on policy agendas. The issue of maternal mortality was long-standing in India and was linked with a high birth rate; consequently, maternal health policy had focused on family planning since India's independence. However, a paradigm shift occurred in maternal health policy in 1992 with the launch of the Safe Motherhood Programme, and again in 2005, when the agenda of maternal health policy became the universalization of institutional deliveries and the phasing out of Traditional Birth Attendants (TBAs) from the health system. Policy communities proposed many solutions other than universalizing institutional deliveries, including training TBAs and improving the socio-economic conditions of pregnant women. However, the Government of India favored the medical community, which was advocating the policy of universalizing institutional delivery, and neglected the solutions proposed by other policy communities. It took almost 15 years for the advocates of institutional delivery to transform their proposed solution into a program: the Janani Suraksha Yojana (JSY), a safe-motherhood program promoting institutional delivery through cash incentives to pregnant women. The case of safe motherhood policy in India is therefore worth studying to understand how certain issues and problems gain political attention and how advocacy works in policy circles.
This paper attempts to understand the factors that favored the agenda of safe motherhood in Indian policy circles, using John Kingdon's Multiple-Streams model of agenda-setting. Through document analysis and a literature review, the paper traces the evolution of the safe motherhood program and maternal health policy. The study used open-source documents available on the website of the Ministry of Health and Family Welfare, media reports (Times of India archive), and related research papers. The documents analyzed include the National Health Policy (1983), the National Health Policy (2002), written reports of the Ministry of Health and Family Welfare, the National Rural Health Mission (NRHM) document, documents related to the Janani Suraksha Yojana, and research articles on maternal health programs in India. The study finds that focusing events and credible indicators, coupled with media attention, have the potential to get a problem recognized. Political elites favor clearly defined and widely accepted solutions. Trans-national organizations affect the agenda-setting process in a country through conditional resource provision. Closely knit policy communities and political entrepreneurship are required to keep proposed solutions high on the agenda. The study has implications for health policy makers in identifying the factors that can affect the agenda-setting process for a desired policy agenda and in recognizing the challenges in generating political priority.

Keywords: agenda-setting, focusing events, Kingdon’s model, safe motherhood program India

Procedia PDF Downloads 130
62 The Effects of Labeling Cues on Sensory and Affective Responses of Consumers to Categories of Functional Food Carriers: A Mixed Factorial ANOVA Design

Authors: Hedia El Ourabi, Marc Alexandre Tomiuk, Ahmed Khalil Ben Ayed

Abstract:

The aim of this study is to investigate the effects of the labeling cues traceability (T), health claim (HC), and verification of health claim (VHC) on consumers' affective response and sensory appeal toward a wide array of functional food carriers (FFC). Research in the food area has predominantly examined the effects of these information cues independently, and on cognitive responses to food product offerings. Investigations of potential interaction effects among these factors on affective response and sensory appeal are therefore scant. Moreover, previous studies have typically emphasized single or limited sets of functional food products and categories. In contrast, this study considers five food product categories enriched with omega-3 fatty acids, namely meat products, eggs, cereal products, dairy products, and processed fruits and vegetables. It is therefore exhaustive in scope rather than exclusive. An investigation of the potential simultaneous effects of these information cues on consumers' affective responses and sensory appeal should yield important insights for both functional food manufacturers and policymakers. A mixed (2 x 3) x (2 x 5) between-within subjects factorial ANOVA design was implemented in this study. T (two levels: completely traceable or non-traceable) and HC (three levels: functional health claim, disease risk reduction health claim, or disease prevention health claim) were treated as between-subjects factors, whereas VHC (two levels: verified by a government agency or by a non-government agency) and FFC (five food categories) were modeled as within-subjects factors. Subjects were randomly assigned to one of the six between-subjects conditions. A total of 463 questionnaires were obtained from a convenience sample of undergraduate students at various universities in the Montreal and Ottawa areas of Canada.
Consumer affective response and sensory appeal were respectively measured via the following statements assessed on seven-point semantic differential scales: ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unlikeable (1) / Likeable (7)’ and ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unappetizing (1) / Appetizing (7).’ Results revealed a significant interaction effect between HC and VHC on consumer affective response as well as on sensory appeal toward foods enriched with omega-3 fatty acids. The three-way interaction among T, HC, and VHC was not significant for either dependent variable. However, the three-way interaction among T, VHC, and FFC was significant for consumer affective response, and the interaction among T, HC, and FFC was significant for consumer sensory appeal. These findings should serve as an impetus for functional food manufacturers to cooperate closely with policymakers in order to improve and legitimize the use of health claims in their marketing efforts, through credible verification practices and protocols put in place by trusted government agencies. Finally, both functional food manufacturers and retailers may benefit from the socially responsible image conveyed by product offerings whose ingredients remain traceable from farm to kitchen table.
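The design described above implies six between-subjects groups (T x HC), each rating ten repeated within-subjects conditions (VHC x FFC). A minimal sketch of that cell structure follows; the level labels are hypothetical placeholders, not the study's questionnaire wording.

```python
import itertools

# Hypothetical level labels for the four factors described in the abstract.
T_LEVELS = ["traceable", "non-traceable"]                       # between-subjects
HC_LEVELS = ["function", "risk-reduction", "prevention"]        # between-subjects
VHC_LEVELS = ["government", "non-government"]                   # within-subjects
FFC_LEVELS = ["meat", "eggs", "cereal", "dairy", "fruit-veg"]   # within-subjects

def design_cells():
    """Enumerate every cell of the mixed (2 x 3) x (2 x 5) factorial design."""
    return [
        {"T": t, "HC": h, "VHC": v, "FFC": f}
        for t, h, v, f in itertools.product(
            T_LEVELS, HC_LEVELS, VHC_LEVELS, FFC_LEVELS)
    ]

cells = design_cells()
# Each subject falls in one between-subjects condition and contributes
# one rating per within-subjects condition.
between_conditions = {(c["T"], c["HC"]) for c in cells}    # 6 subject groups
within_conditions = {(c["VHC"], c["FFC"]) for c in cells}  # 10 repeated measures
```

With 463 questionnaires spread over 6 between-subjects conditions, each cell mean for the reported HC x VHC interaction rests on roughly 77 respondents per group.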

Keywords: functional foods, labeling cues, affective appeal, sensory appeal

Procedia PDF Downloads 155
61 Assessing Organizational Resilience Capacity to Flooding: Index Development and Application to Greek Small & Medium-Sized Enterprises

Authors: Antonis Skouloudis, Konstantinos Evangelinos, Walter Leal-Filho, Panagiotis Vouros, Ioannis Nikolaou

Abstract:

Organizational resilience capacity to extreme weather events (EWEs) has attracted growing scholarly attention over the past decade as an essential aspect of business continuity management, with supporting evidence suggesting that it plays a key role in successful responses to adverse situations, crises, and shocks. Small and medium-sized enterprises (SMEs) are more vulnerable to floods than their larger counterparts and are disproportionately affected by such extreme weather events. Their limited resources and their lack of time and skills all conduce to inadequate preparedness for the challenges posed by floods. SMEs tend to plan in the short term, reacting to circumstances as they arise and focussing on their very survival. They likewise have less formalised structures and codified policies, and they are usually owner-managed, resulting in a command-and-control management culture. Such characteristics leave them with limited opportunities to recover from flooding and to turn their operation around quickly from loss-making to profit-making. Scholars frame the capacity of business entities to be resilient to an EWE disturbance (such as a flash flood) in terms of the rate of recovery and restoration of organizational performance to pre-disturbance conditions; the amount of disturbance (i.e., the threshold level) a business can absorb before losing the structural and/or functional components that would alter or cease its operation; and the extent to which the organization maintains its function (i.e., impact resistance) before performance levels are driven to zero. Nevertheless, while resilience capacity seems to be accepted as an essential trait of firms that effectively transcend uncertain conditions, research deconstructing the enabling conditions and/or inhibitory factors of SMEs' resilience capacity to natural hazards remains sparse, fragmentary, and mostly fuelled by anecdotal evidence or normative assumptions.
Focusing on the individual level of analysis, i.e., the individual enterprise and its endeavours to succeed, the emerging picture from this relatively new research strand delineates the specification of variables, conceptual relationships, and dynamic boundaries of resilience capacity components in an attempt to provide prescriptions for policy-making as well as business management. This study will present the development of a flood resilience capacity index (FRCI) and its application to Greek SMEs. The proposed composite indicator covers the cognitive, behavioral/managerial, and contextual factors that influence an enterprise's ability to shape effective responses to flood challenges. The proposed indicator-based approach sets forth an analytical framework that will help standardize such assessments, with the overarching aim of reducing the vulnerability of SMEs to flooding. This will be achieved by identifying the major internal and external attributes explaining resilience capacity, which is particularly important given the limited resources these enterprises have and the fact that they tend to be primary sources of vulnerability in supply chain networks, generating single points of failure (SPOFs).
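The abstract does not report the FRCI's exact aggregation rule, but composite indicators of this kind are commonly built by normalizing sub-indicator scores and aggregating them with weights. The sketch below illustrates that generic pattern; the three factor groups, the raw scores, and the weights are purely hypothetical and not taken from the study.

```python
def minmax_normalize(values):
    """Rescale raw sub-indicator scores to [0, 1]; a constant series maps to 0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(normalized_scores, weights):
    """Weighted linear aggregation of normalized sub-indicator scores."""
    if len(normalized_scores) != len(weights):
        raise ValueError("one weight per sub-indicator")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(s * w for s, w in zip(normalized_scores, weights))

# Hypothetical example: one raw score per factor group named in the abstract
# (cognitive, behavioral/managerial, contextual), with illustrative weights.
scores = minmax_normalize([3.0, 5.0, 4.0])       # -> [0.0, 1.0, 0.5]
frci = composite_index(scores, [0.4, 0.3, 0.3])  # 0.0*0.4 + 1.0*0.3 + 0.5*0.3
```

In practice the weights would come from the index-development step the study describes (e.g., expert judgment or a statistical weighting scheme), not from fixed constants as above.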

Keywords: floods, small and medium-sized enterprises, organizational resilience capacity, index development

Procedia PDF Downloads 179