Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
Artificial vision is the branch of artificial intelligence concerned with generating computer vision: it enables the acquisition, processing, and analysis of information, especially information obtained from digital images. Artificial vision is currently used in manufacturing for quality control and production, since these processes can be carried out with algorithms for counting, positioning, and recognizing objects captured by one or more cameras. Companies, in turn, build assembly lines from conveyor systems with actuators that move pieces from one location to another during production. These devices must be programmed in advance with a logic routine in order to perform well. Nowadays production is the main target of every industry, together with quality and the fast execution of the different stages and processes in the production chain of any product or service. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that sort the figures into cubicles, which are likewise distinguished by color. Because the project is based on artificial vision, a strict methodology is required, detailed below: 1. Methodology: 1.1 The software used in this project is Qt Creator linked with OpenCV libraries; together, these tools are used to build the program that identifies colors and shapes directly from the camera feed. 1.2 Image acquisition: to use the OpenCV libraries, it is first necessary to acquire images, which can be captured by a computer's webcam or a specialized camera.
1.3 RGB color recognition is implemented in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 Shape detection requires segmenting the images: the first step is converting the image from RGB to grayscale in order to work with the dark tones; the image is then binarized, leaving the figure in white on a black background. Finally, the contours of the figure are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program drives the conveyor systems, whose actuators sort the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects that require an interface between a computer and the environment, since the camera captures external characteristics for further processing. With the program developed in this project, any assembly line can be optimized, because images of the environment can be obtained and the process becomes more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
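The decision logic of steps 1.3 and 1.4 can be sketched in pure Python. This is a minimal illustration, not the project's code: the project uses Qt Creator with OpenCV, where the edge count would come from approximating the contours returned by the library; all function names below are hypothetical.

```python
def classify_color(pixel):
    """Step 1.3: classify an RGB pixel by its dominant primary channel."""
    names = ("red", "green", "blue")
    return names[max(range(3), key=lambda i: pixel[i])]

def binarize(gray, threshold=128):
    """Step 1.4: threshold a grayscale image so the figure appears
    white (255) on a black (0) background."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray]

def classify_shape(edge_count):
    """Step 1.4: map the number of detected contour edges to a figure."""
    if edge_count == 3:
        return "triangle"
    if edge_count == 4:
        return "square"
    return "circle"  # a polygonal approximation of a circle has many edges
```

In the real pipeline the binarized image would be passed to OpenCV's contour-finding routine, and the vertex count of each approximated contour would feed `classify_shape`.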
Procedia PDF Downloads 378
Anisakidosis in Turkey: Serological Survey and Risk for Humans
Authors: E. Akdur Öztürk, F. İrvasa Bilgiç, A. Ludovisi , O. Gülbahar, D. Dirim Erdoğan, M. Korkmaz, M. Á. Gómez Morales
Abstract:
Anisakidosis is a zoonotic fish-borne parasitic disease of humans caused by the accidental ingestion of third-stage larvae (L3) of members of the Anisakidae family present in infected marine fish or cephalopods. Infection with anisakid larvae can lead to gastric, intestinal, extra-gastrointestinal, and gastroallergic forms of the disease. Anisakid parasites have been reported in almost all seas, particularly the Mediterranean Sea. The risk of exposure to these zoonotic parasites is remarkably high, as they are present in economically and ecologically important European fish. Anisakid L3 larvae have also been detected in several fish species from the Aegean Sea. Turkey is a peninsular country surrounded by the Black, Aegean, and Mediterranean Seas, where fishing and the consumption of fishery products are highly common. In recent years, consumption of raw fish has also increased owing to the growing interest in Far Eastern cuisine. In different regions of Turkey, A. simplex (in Merluccius merluccius, Scomber japonicus, Trachurus mediterraneus, Sardina pilchardus, Engraulis encrasicolus, etc.), Anisakis spp., Contracaecum spp., Pseudoterranova spp., and C. aduncum have been identified. Although both the presence of anisakid parasites in fish and fishery products in Turkey and the occurrence of allergic manifestations in Turkish people after fish consumption are accepted, there are no reports of human anisakiasis in this country. Given the high prevalence of anisakid parasites in the country, the absence of reports is likely due not to the absence of clinical cases but to the unavailability of diagnostic tools and low awareness of the infection. The aim of the study was to set up an IgE-Western blot (WB) based test to detect anisakid sensitization among Turkish people with a history of allergic manifestations related to fish consumption.
To this end, crude worm antigens (CWA) and an allergen-enriched fraction (50-66%) were prepared from L3 of A. simplex (s.l.) collected from Lepidopus caudatus fished in the Mediterranean Sea. These proteins were electrophoretically separated and transferred onto nitrocellulose membranes. By WB, specific proteins recognized by positive-control serum samples from sensitized patients were visualized on the membranes by a colorimetric reaction. The CWA and the 50-66% fraction showed specific bands, mainly due to Ani s 1 (20-22 kDa) and Ani s 4 (9-10 kDa). So far, a total of 7 serum samples from people with allergic manifestations and a positive skin prick test (SPT) after fish consumption have been tested, and all of them were negative by WB, indicating a lack of sensitization to anisakids. This preliminary study allowed us to set up a specific test and evidenced the lack of correlation between the two tests, SPT and WB. However, the sample size should be increased to estimate the anisakidosis burden in Turkish people.
Keywords: anisakidosis, fish parasite, serodiagnosis, Turkey
Procedia PDF Downloads 141
Structure Modification of Leonurine to Improve Its Potency as Aphrodisiac
Authors: Ruslin, R. E. Kartasasmita, M. S. Wibowo, S. Ibrahim
Abstract:
An aphrodisiac is a substance contained in food or a drug that arouses the sexual instinct and increases pleasure; such substances are derived from plants, animals, and minerals. The compound leonurine has aphrodisiac activity and can be isolated from plants of the genus Leonurus, known to the Sundanese people as deundereman; this plant is empirically known to have aphrodisiac activity, and isolation of its active compounds shows that it contains leonurine. Leonurine can be isolated from the plant or synthesized chemically starting from syringic acid; it is also available commercially, and its derivatives can be synthesized in an effort to increase its activity. This study aims to obtain leonurine derivatives with better aphrodisiac activity than the parent compound by modifying the guanidinobutyl ester group of leonurine with butylamine and bromoethanol. ArgusLab version 4.0.1 was used to determine the binding energies, hydrogen bonds, and amino acids involved in the interaction of the compounds with the PDE5 receptor. The in vivo tests of leonurine and its derivatives as aphrodisiacs, together with measurement of testosterone levels, used 27 male Wistar rats and 9 female rats of the same strain, aged about 12 weeks and weighing approximately 200 g each. The test animals were divided into 9 groups according to the compound and dose given. Each treatment group was orally administered 2 mL per day for 5 days. On the sixth day, male rat sexual behavior was observed and blood was drawn from the heart to measure testosterone levels by ELISA.
Statistical analysis was performed with ANOVA and the Least Square Differences (LSD) test using the Statistical Product and Service Solutions (SPSS) program. The aphrodisiac efficacy of leonurine and its derivatives was demonstrated both in silico and in vivo: in the in silico tests, the leonurine derivatives had lower binding energies than leonurine, indicating better activity than the parent compound. The in vivo tests on Wistar rats confirmed that the leonurine derivatives performed better, showing that the in silico study parallels the in vivo results. Modification of the guanidinobutyl ester group with butylamine and bromoethanol increased aphrodisiac activity compared with leonurine, and testosterone levels for the leonurine derivatives showed a significant improvement, especially for compound 1-RD at doses of 100 and 150 mg/kg body weight. The results showed that leonurine and its derivatives possess aphrodisiac activity and increase the amount of testosterone in the blood. The test compounds used in this study act as steroid precursors, resulting in increased testosterone.
Keywords: aphrodisiac, erectile dysfunction, leonurine, 1-RD, 2-RD
Procedia PDF Downloads 279
Endo-β-1,4-Xylanase from Thermophilic Geobacillus stearothermophilus: Immobilization Using Matrix Entrapment Technique to Increase the Stability and Recycling Efficiency
Authors: Afsheen Aman, Zainab Bibi, Shah Ali Ul Qader
Abstract:
Introduction: Xylan is a heteropolysaccharide composed of xylose monomers linked through β-1,4 bonds within a complex xylan network. Owing to the wide applications of xylan hydrolysis products (xylose, xylobiose, and xylooligosaccharides), researchers are developing various strategies for efficient xylan degradation. One of the most important strategies is the use of heat-tolerant biocatalysts, which act as strong and specific cleaving agents; the exploration of the microbial pool of extremely diversified ecosystems is therefore vital. Microbial populations from extreme habitats are keenly explored for the isolation of thermophilic entities. These thermozymes usually demonstrate fast hydrolysis rates, can produce high product yields, and are less prone to microbial contamination. Another way to degrade xylan continuously is immobilization. The current work is an effort to merge the positive aspects of both thermozymes and immobilization. Methodology: Geobacillus stearothermophilus was isolated from a soil sample collected near a blast furnace site. This thermophile produces a thermostable endo-β-1,4-xylanase that cleaves xylan effectively. In the current study, this thermozyme was immobilized within a synthetic and a non-synthetic matrix by the entrapment technique for continuous production of metabolites, and the kinetic parameters of the free and immobilized enzyme were studied. For this purpose, calcium alginate and polyacrylamide beads were prepared. Results: For the synthesis of the immobilized beads, sodium alginate (40.0 g L⁻¹) and calcium chloride (0.4 M) were mixed. The temperature (50°C) and pH (7.0) optima of the immobilized enzyme remained the same for xylan hydrolysis; however, the enzyme-substrate catalytic reaction time increased from 5.0 to 30.0 minutes compared with the free counterpart.
The diffusion limit of high-molecular-weight xylan (corncob) caused a decline in the Vmax of the immobilized enzyme from 4773 to 203.7 U min⁻¹, whereas the Km value increased from 0.5074 to 0.5722 mg mL⁻¹ relative to the free enzyme. Immobilized endo-β-1,4-xylanase was more stable at high temperatures than the free enzyme: it retained 18% and 9% residual activity at 70°C and 80°C, respectively, whereas the free enzyme completely lost its activity at both temperatures. The immobilized thermozyme displayed sufficient recycling efficiency and could be reused for up to five reaction cycles, indicating that this enzyme can be a plausible candidate for the paper-processing industry. Conclusion: This thermozyme showed good immobilization yield and operational stability for hydrolyzing high-molecular-weight xylan; however, its immobilization properties could be improved further by immobilizing it on different supports for industrial purposes.
Keywords: immobilization, reusability, thermozymes, xylanase
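The kinetic shift reported above can be illustrated with the Michaelis-Menten equation, using the Vmax and Km values quoted for the free and immobilized enzyme; the substrate concentration chosen below is purely illustrative.

```python
def michaelis_menten(vmax, km, s):
    """Michaelis-Menten reaction rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Kinetic constants quoted in the abstract for corncob xylan
# (Vmax in U min^-1, Km in mg mL^-1).
V_FREE, KM_FREE = 4773.0, 0.5074
V_IMM, KM_IMM = 203.7, 0.5722

s = 2.0  # an illustrative substrate concentration, mg mL^-1
v_free = michaelis_menten(V_FREE, KM_FREE, s)
v_imm = michaelis_menten(V_IMM, KM_IMM, s)
# Diffusion limitation collapses Vmax by roughly 23-fold while Km shifts
# only slightly, so the immobilized rate is far lower at any [S].
```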
Procedia PDF Downloads 374
Determinants of Sustainable Supplier Selection: An Exploratory Study of Tunisian Manufacturing SMEs
Authors: Ahlem Dhahri, Audrey Becuwe
Abstract:
This study examines the adoption of sustainable purchasing practices among Tunisian SMEs, with a focus on assessing how environmental and social sustainability maturity affects the implementation of sustainable supplier selection (SSS) criteria. Using institutional theory to classify coercive, normative, and mimetic pressures, as well as emerging drivers and barriers, this study explores the institutional factors influencing sustainable purchasing practices and the specific barriers faced by Tunisian SMEs in this area. An exploratory, abductive qualitative research design was adopted for this multiple case study, which involved 19 semi-structured interviews with owners and managers of 17 Tunisian manufacturing SMEs. The Gioia method was used to analyze the data, thus enabling the identification of key themes and relationships directly from the raw data. This approach facilitated a structured interpretation of the institutional factors influencing sustainable purchasing practices, with insights drawn from the participants' perspectives. The study reveals that Tunisian SMEs are at different levels of sustainability maturity, with a significant impact on their procurement practices. SMEs with advanced sustainability maturity integrate both environmental and social criteria into their supplier selection processes, while those with lower maturity levels rely mostly on traditional criteria such as cost, quality, and delivery. Key institutional drivers identified include regulatory pressure, market expectations, and stakeholder influence. Additional emerging drivers, such as certifications and standards, economic incentives, environmental commitment as a core value, and group-wide strategic alignment, also play a critical role in driving sustainable procurement. Conversely, the study reveals significant barriers, including economic constraints, limited awareness, and resource limitations.
It also identifies three main categories of emerging barriers: (1) logistical and supply chain constraints, including retailer/intermediary dependency, tariff regulations, and a perceived lack of direct responsibility in B2B supply chains; (2) economic and financial constraints; and (3) operational barriers, such as unilateral environmental responsibility, a product-centric focus, and the influence of personal relationships. Providing valuable insights into the role of sustainability maturity in supplier selection, this study is the first to explore sustainable procurement practices in the Tunisian SME context. The analysis of institutional drivers, including emerging incentives and barriers, offers practical implications for SMEs seeking to improve sustainability in procurement. The results highlight the need for stronger regulatory frameworks and support mechanisms to facilitate the adoption of sustainable practices among SMEs in Tunisia.
Keywords: Tunisian SME, sustainable supplier selection, institutional theory, determinant, qualitative study
Procedia PDF Downloads 12
Study on Electromagnetic Plasma Acceleration Using Rotating Magnetic Field Scheme
Authors: Takeru Furuawa, Kohei Takizawa, Daisuke Kuwahara, Shunjiro Shinohara
Abstract:
In the field of space propulsion, electric propulsion systems have been developed because their fuel efficiency is much higher than that of conventional chemical systems. However, practical electric propulsion systems, e.g., the ion engine, suffer short lifetimes due to damage to the plasma generation and acceleration electrodes. A helicon plasma thruster is proposed as a long-lifetime electric thruster with no electrodes in direct contact with the plasma: both generation and acceleration of a dense plasma are performed by antennas from outside the discharge tube. Development of the helicon plasma thruster has been conducted under the Helicon Electrodeless Advanced Thruster (HEAT) project. Our helicon plasma thruster involves two important processes. First, we generate a dense source plasma using a helicon wave, with an excitation frequency between the ion and electron cyclotron frequencies, fci and fce, applied from outside the discharge with a radio frequency (RF) antenna. The helicon plasma source provides high density (~10¹⁹ m⁻³), a high ionization ratio (up to several tens of percent), and high particle generation efficiency. Second, to achieve high thrust and specific impulse, we accelerate the dense plasma by the axial Lorentz force fz, the product of the induced azimuthal current jθ and the static radial magnetic field Br: fz = jθ × Br. The HEAT project has proposed several electrodeless acceleration schemes; in our particular case, the Rotating Magnetic Field (RMF) method has been extensively studied. The RMF scheme was originally developed as a concept to sustain the Field Reversed Configuration (FRC) in magnetically confined fusion research. Here, the RMF coils are expected to generate jθ through a nonlinear effect, as described below.
First, the rotating magnetic field Bω is generated by two pairs of RMF coils driven by AC currents with a phase difference of 90 degrees between the pairs; by Faraday's law, an axial electric field is induced. Second, an axial current is generated through electron-ion and electron-neutral collisions via Ohm's law. Third, an azimuthal electric field arises from the nonlinear term, together with the retarding torque again generated by collisional effects; the azimuthal current is then jθ = -nₑ e r · 2π fRMF. Finally, the axial Lorentz force fz for plasma acceleration is produced. Here, jθ is proportional to nₑ and to the RMF coil current frequency fRMF when Bω fully penetrates the plasma. Our previous study achieved a 19% increase in ion velocity using a 5 MHz, 50 A RMF coil power supply. In this presentation, we will show the improvement in ion velocity obtained with a lower frequency and higher current from the RMF power supply. In conclusion, helicon high-density plasma production and electromagnetic acceleration by the RMF scheme under an electrodeless condition have been successfully demonstrated.
Keywords: electric propulsion, electrodeless thruster, helicon plasma, rotating magnetic field
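The two expressions above, jθ = -nₑ e r · 2π fRMF and fz = jθ × Br, can be sketched numerically. The density (~10¹⁹ m⁻³) and the 5 MHz RMF frequency are taken from the abstract; the radius r and radial field Br below are hypothetical values chosen only for illustration.

```python
import math

E = 1.602e-19  # elementary charge, C

def azimuthal_current_density(n_e, r, f_rmf):
    """j_theta = -n_e * e * r * 2*pi*f_RMF, the abstract's expression for
    the RMF-driven azimuthal current when B_omega fully penetrates."""
    return -n_e * E * r * 2.0 * math.pi * f_rmf

def axial_force_density(j_theta, b_r):
    """f_z = j_theta x B_r: axial Lorentz force density from the
    azimuthal current crossed with the static radial field."""
    return j_theta * b_r

# n_e and f_RMF from the abstract; r and B_r are hypothetical.
j_theta = azimuthal_current_density(n_e=1.0e19, r=0.03, f_rmf=5.0e6)
f_z = axial_force_density(j_theta, b_r=-0.01)  # T
```

Note that jθ scales linearly with both nₑ and fRMF, which is why the abstract explores lowering the frequency while raising the coil current.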
Procedia PDF Downloads 261
Knowledge, Attitude, and Practices of Nurses on Pain Assessment and Management in Level 3 Hospitals in Manila
Authors: Florence Roselle Adalin, Misha Louise Delariarte, Fabbette Laire Lagas, Sarah Emanuelle Mejia, Lika Mizukoshi, Irish Paullen Palomeno, Gibrianne Alistaire Ramos, Danica Pauline Ramos, Josefina Tuazon, Jo Leah Flores
Abstract:
Pain, often a missed and undertreated symptom, affects the quality of life of individuals. Nurses are key players in providing effective pain management to decrease the morbidity and mortality of patients in pain, and their knowledge of and attitude toward pain greatly affect their ability to assess and manage it. The Pain Society of the Philippines has recognized the inadequacy and inaccessibility of data on the knowledge, skills, and attitude of nurses on pain management in the country. This study may be the first of its kind in the country, giving it the potential to contribute greatly to nursing education and practice by providing valuable baseline data. Objectives: This study aims to describe the level of knowledge, attitude, and current practices of nurses on pain assessment and management, and to determine the relationship of nurses' knowledge and attitude with years of experience, training in pain management, and clinical area of practice. Methodology: A survey research design was employed. Four hospitals were selected through purposive sampling, and a total of 235 Medical-Surgical Unit and Intensive Care Unit (ICU) nurses participated in the study. The tool used combines a demographic survey, the Nurses' Knowledge and Attitude Survey Regarding Pain (NKASRP), and the Acute Pain Evidence-Based Practice Questionnaire (APEBPQ) with self-report questions on non-pharmacologic pain management. The data obtained were analysed using descriptive statistics, two-sample t-tests for clinical areas and training, and Pearson product-moment correlation to identify the relationship of the level of knowledge and attitude with years of experience. Results and Analysis: The mean knowledge and attitude score of the nurses was 47.14%. The majority answered 'most of the time' or 'all the time' on 84.12% of practice items on pain assessment, implementation of non-pharmacologic interventions, evaluation, and documentation.
Three of the 19 practice items describing morphine and opioid administration in special populations were done only 'a little of the time'. The most utilized non-pharmacologic interventions were deep breathing exercises (79.66%), massage therapy (27.54%), and ice therapy (26.69%). There was no significant relationship between knowledge scores and years of clinical experience (p = 0.05, r = -0.09). Moreover, there was not enough evidence to show a difference in nurses' knowledge and attitude scores in relation to training (p = 0.41) or area (Medical-Surgical or ICU) of clinical practice (p = 0.53). Conclusion and Recommendations: The findings of the study showed that the level of knowledge and attitude of nurses on pain assessment and management is suboptimal, and that there is no relationship between nurses' knowledge and attitude and years of experience. It is recommended that further studies look into the nursing curriculum on pain education, culture-specific pain management protocols, and evidence-based practices in the country.
Keywords: knowledge and attitude, nurses, pain management, practices on pain management
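The correlation result reported above (r = -0.09 between knowledge scores and years of experience) uses the Pearson product-moment coefficient. A minimal pure-Python sketch of that computation follows; the data below are made up solely to illustrate a similarly weak relationship, not taken from the study.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: years of clinical experience vs. knowledge/attitude score (%).
years = [1, 3, 5, 8, 12, 20]
scores = [48, 45, 50, 44, 49, 46]
r = pearson_r(years, scores)  # weak correlation, as in the study
```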
Procedia PDF Downloads 348
Processing of Flexible Dielectric Nanocomposites Using Nanocellulose and Recycled Alum Sludge for Wearable Technology Applications
Authors: D. Sun, L. Saw, A. Onyianta, D. O’Rourke, Z. Lu, C. See, C. Wilson, C. Popescu, M. Dorris
Abstract:
With the rapid development of wearable technology (e.g., smartwatches, activity trackers, and health-monitoring devices), flexible dielectric materials that are environmentally friendly, low-cost, and energy-efficient are in increasing demand. In this work, a flexible dielectric nanocomposite was processed by incorporating two components, cellulose nanofibrils and alum sludge, in a polymer matrix. The two components serve as the reinforcement phase and enhance the dielectric properties; both were processed from waste materials that would otherwise be disposed of in landfills. Alum sludge is a by-product of the water treatment process, in which aluminum sulfate is prevalently used as the primary coagulant. According to data from a project partner, Scottish Water, approximately 10,000 tons of alum sludge are generated as waste from water treatment works and landfilled every year in Scotland, and the industry faces escalating financial and environmental pressure to develop more sustainable strategies for dealing with alum sludge wastes. Some work on reusing alum sludge has been reported in the literature (e.g., aluminum recovery, or agriculture and land reclamation), but little work can be found on applying it to energy materials (e.g., dielectrics) for enhanced energy density and efficiency. The alum sludge was collected directly from a Scottish Water treatment plant and was heat-treated and refined before being used to prepare the composites. Cellulose nanofibrils were derived from water hyacinth, an invasive aquatic weed that causes significant ecological problems in tropical regions. The harvested water hyacinth was dried and processed using a cost-effective method comprising chemical extraction followed by homogenization to extract the cellulose nanofibrils.
The biodegradable elastomer polydimethylsiloxane (PDMS) was used as the polymer matrix, and the nanocomposites were processed by casting the raw materials in Petri dishes. The processed composites were characterized by various methods, including scanning electron microscopy (SEM), rheological analysis, thermogravimetric analysis, and X-ray diffraction (XRD). The SEM results showed that cellulose nanofibrils of approximately 20 nm in diameter and 100 nm in length were obtained, and that the alum sludge particles were approximately 200 µm in diameter. The TGA/DSC analysis showed a weight loss of up to 48% in the raw alum sludge, whose crystallization begins at approximately 800°C, an observation that coincides with the XRD results. Other experiments showed that the composites exhibit comprehensive mechanical and dielectric performance. This work demonstrates a sustainable practice of reusing such waste materials to prepare flexible, lightweight, and miniature dielectric materials for wearable technology applications.
Keywords: cellulose, biodegradable, sustainable, alum sludge, nanocomposite, wearable technology, dielectric
Procedia PDF Downloads 85
Food Safety in Wine: Removal of Ochratoxin A in Contaminated White Wine Using Commercial Fining Agents
Authors: Antònio Inês, Davide Silva, Filipa Carvalho, Luís Filipe-Riberiro, Fernando M. Nunes, Luís Abrunhosa, Fernanda Cosme
Abstract:
The presence of mycotoxins in foodstuffs is a matter of concern for food safety. Mycotoxins are toxic secondary metabolites produced by certain molds, ochratoxin A (OTA) being one of the most relevant. Wines can also be contaminated with these toxicants, and several authors have demonstrated the presence of mycotoxins, especially OTA, in wine. Its chemical structure is a dihydroisocoumarin connected at the 7-carboxy group to a molecule of L-β-phenylalanine via an amide bond. As these toxicants can never be completely removed from the food chain, many countries have defined maximum levels in food to address health concerns. OTA contamination of wines may pose a risk to consumer health, thus requiring treatments to achieve acceptable standards for human consumption; the maximum acceptable level of OTA in wine is 2.0 μg/kg according to Commission Regulation No. 1881/2006. Therefore, the aim of this work was to reduce OTA to safer levels using different fining agents and to assess their impact on the physicochemical characteristics of white wine. To evaluate their efficiency, 11 commercial fining agents (mineral, synthetic, animal, and vegetable proteins) were used to explore new approaches to OTA removal from white wine. Trials (including a control without fining agent) were performed in white wine artificially supplemented with OTA (10 µg/L), and OTA analyses were performed after fining. The wine was centrifuged at 4000 rpm for 10 min, and 1 mL of the supernatant was collected and mixed with an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v). The solid fractions obtained after fining were also centrifuged (4000 rpm, 15 min), the resulting supernatant discarded, and the pellet extracted with 1 mL of the above solution and 1 mL of H₂O. OTA analysis was performed by HPLC with fluorescence detection.
The most effective fining agent for removing OTA from white wine (80% removal) was a commercial formulation containing gelatin, bentonite, and activated carbon. Removals between 10% and 30% were obtained with potassium caseinate, yeast cell walls, and pea protein. With bentonites, carboxymethylcellulose, polyvinylpolypyrrolidone, and chitosan, no considerable OTA removal was observed. Subsequently, the effectiveness of seven commercial activated carbons was evaluated and compared with that of the commercial gelatin-bentonite-activated carbon formulation. The different activated carbons were applied at the concentrations recommended by the manufacturers to evaluate their efficiency in reducing OTA levels; trials and OTA analyses were performed as explained previously. The results showed that in white wine all the activated carbons except one removed 100% of the OTA, whereas the commercial gelatin-bentonite-activated carbon formulation reduced the OTA concentration by only 73%. These results may provide useful information for winemakers, namely for selecting the most appropriate oenological product for OTA removal, reducing wine toxicity while simultaneously enhancing food safety and wine quality.
Keywords: wine, OTA removal, food safety, fining
Procedia PDF Downloads 538
Drivers of Satisfaction and Dissatisfaction in Camping Tourism: A Case Study from Croatia
Authors: Darko Prebežac, Josip Mikulić, Maja Šerić, Damir Krešić
Abstract:
Camping tourism is recognized as a growing segment of the broader tourism industry, currently evolving from an inexpensive, temporary sojourn in a rural environment into a highly fragmented niche tourism sector. Trends among publicly managed campgrounds are moving away from rustic campgrounds that provide only a tent pad and a fire ring toward more developed facilities offering a range of amenities, while campers still search for unique experiences that go beyond the opportunity to experience nature and social interaction. In addition, while camping styles and options have changed significantly in recent years, coastal camping in particular has become valorized, as it is regarded with a heightened sense of nostalgia. Alongside this growing interest in camping tourism, demand has emerged for quality servicing infrastructure able to satisfy the wide variety of needs, wants, and expectations of an increasingly demanding traveling public. However, camping activity in general, and the quality of the camping experience and campers' satisfaction in particular, remain an under-researched area of the tourism and consumer behavior literature. In this line, very few studies have addressed the issue of quality product/service provision in satisfying nature-based tourists and in driving their future behavior with respect to potential revisit and recommendation intentions. The present study thus aims to investigate the drivers of positive and negative campsite experiences using the case of Croatia. Owing to its well-preserved nature and indented coastline, camping tourism has a long tradition in Croatia and represents one of its most important and most developed tourism products. During the last decade, the number of tourist overnights in Croatian camps increased by 26%, amounting to 16.5 million in 2014.
Moreover, according to Eurostat, the market share of campsites in the EU is around 14%, indicating that the market share of Croatian campsites is almost double the EU average. Currently, there are a total of 250 camps in Croatia with approximately 75,800 accommodation units. It is further noteworthy that Croatian camps have higher average occupancy rates and a higher average length of stay than the national average across all types of accommodation. To explore the main drivers of positive and negative campsite experiences, this study uses principal components analysis (PCA) and impact-asymmetry analysis (IAA). With PCA, the main dimensions of the campsite experience are first extracted in an exploratory manner; with IAA, the extracted factors are then investigated for their potential to create customer delight and/or frustration. The results provide valuable insight to both researchers and practitioners in understanding campsite satisfaction.
Keywords: camping tourism, campsite, impact-asymmetry analysis, satisfaction
Procedia PDF Downloads 186
206 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit
Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar
Abstract:
Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases such as CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute to a large extent to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on sulphur content in transportation fuels and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary process in refineries for gasoline and diesel production. CO-combustion promoter additives and gasoline sulphur reduction (GSR) additives are catalytic systems used in the FCCU, alongside the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to have a high-density microsphere catalyst support for retention, together with high activity of the active metals, as these catalyst additives are used in low concentration compared to the main FCC catalyst. The first part of the present paper discusses the development of high-density microspheres of nanocrystalline alumina by a hydrothermal method for the CO combustion promoter application. Performance evaluation of the additive was conducted under simulated regenerator conditions and shows a CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for the generation of active crystalline spinels of Zn, Mg, and Cu with aluminium oxides as an additive. The characterization and micro-activity tests using a heavy combined hydrocarbon feedstock at FCC unit conditions for evaluating gasoline sulphur reduction activity are also presented.
These additives were characterized by X-ray diffraction, NH₃-TPD, N₂ sorption, and TPR analyses to establish structure-activity relationships. The sulphur removal mechanisms, involving hydrogen transfer, aromatization, and alkylation functionalities, were established to rank GSR additives for their activity, selectivity, and gasoline sulphur removal efficiency. The sulphur shift into other liquid products, such as heavy naphtha, light cycle oil, and clarified oil, was also studied. PIONA analysis of the liquid product reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) or olefin content of the gasoline.
Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction
Procedia PDF Downloads 96
205 Neoliberalism and Environmental Justice: A Critical Examination of Corporate Greenwashing
Authors: Arnav M. Raval
Abstract:
This paper critically examines the neoliberal economic model and its role in enabling corporate greenwashing, a practice whereby corporations deceptively market themselves as environmentally responsible while continuing harmful environmental practices. Through a rigorous focus on the neoliberal emphasis on free markets, deregulation, and minimal government intervention, this paper explores how these policies have set the stage for corporations to externalize environmental costs and engage in superficial sustainability initiatives. Within this framework, companies often bypass meaningful environmental reform, opting for strategies that enhance their public image without addressing their actual environmental impacts. The paper also draws on the works of the critical theorists Theodor Adorno, Max Horkheimer, and Herbert Marcuse, particularly their critiques of capitalist society and its tendency to commodify social values. This paper argues that neoliberal capitalism has commodified environmentalism, transforming genuine ecological responsibility into a marketable product. Through corporate social responsibility initiatives, corporations have created the illusion of sustainability while masking deeper environmental harm. Under neoliberalism, these initiatives often serve as public relations tools rather than genuine commitments to environmental justice and sustainability. This commodification has become particularly dangerous because it manipulates consumer perceptions and diverts attention away from the structural causes of environmental degradation. The analysis also examines how greenwashing practices have disproportionately affected marginalized communities, particularly in the global South, where environmental costs are often externalized. As these corporations promote their “sustainability” in wealthier markets, marginalized communities bear the brunt of their pollution, resource depletion, and other forms of environmental degradation.
This dynamic underscores the inherent injustice within neoliberal environmental policies, as those most vulnerable to environmental risks are often neglected while companies reap the benefits of corporate sustainability efforts at their expense. Finally, this paper calls for a fundamental transition away from neoliberal market-driven solutions, which prioritize corporate profit over genuine ecological reform. It advocates for stronger regulatory frameworks, transparent third-party certifications, and a more collective approach to environmental governance. In order to ensure genuine corporate accountability, governments and institutions must move beyond superficial green initiatives and market-based solutions, shifting toward policies that enforce real environmental responsibility and prioritize environmental justice for all communities. Through its critique of the neoliberal system and its commodification of environmentalism, this paper highlights the urgent need to rethink how environmental responsibility is defined and enacted in the corporate world. Without systemic change, greenwashing will continue to undermine both ecological sustainability and social justice, leaving the most vulnerable populations to suffer the consequences.
Keywords: critical theory, environmental justice, greenwashing, neoliberalism
Procedia PDF Downloads 17
204 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy
Authors: Ye Zhang, Chuanjun Liu
Abstract:
In-service aircraft are exposed to different types of threats, e.g., bird strike, ground vehicle impact, runway debris, or even lightning strike. To satisfy the aircraft damage tolerance design requirements, the designer has to understand the threat level for different types of aircraft structures, either metallic or composite. For composite structures, exposure to low-velocity impacts may produce very serious internal damage, such as delaminations and matrix cracks, without leaving visible marks on the impacted surfaces. This internal damage can cause a significant reduction in the load-carrying capacity of structures. The semi-probabilistic method provides a practical and proper approximation to establish the impact-threat-based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of impact threat and the realistic impact energy cut-off levels are the essential elements required for the certification of aircraft composite structures. A new survey of impact threat to in-service civil aircraft has recently been carried out based on field records covering around 500 civil aircraft (mainly single-aisle) and more than 4.8 million flight hours. In total, 1,006 damage cases caused by low-velocity impact events were screened out from more than 8,000 records, including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location of the aircraft structures and the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of civil aircraft in general, since the service environments and the maintenance operations are independent of the materials of the structures.
The probability of impact damage occurrence (Po) and of impact energy exceedance (Pe) are the two key parameters describing the statistical distribution of impact threat. From the impact damage events in the survey, Po can be estimated as 2.1×10⁻⁴ per flight hour. Concerning the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to backward-estimate the impact energy based on the visible damage characteristics. The relationship between the visible dent depth and the impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to have a log-linear relationship with the impact energy. For the product of the two aforementioned probabilities, it is reasonable and conservative to assume Pa = Po × Pe = 10⁻⁵, which indicates that low-velocity impact events are about as likely as Limit Load events. Combining Pa with the two probabilities Po and Pe obtained from the field survey, the cutoff level of realistic impact energy was estimated at 34 J. In summary, a new survey was recently done on field records of civil aircraft to investigate the probabilistic distribution of impact threat. Based on the data, the two probabilities, Po and Pe, were obtained. Under a conservative assumption for Pa, the cutoff level for the realistic impact energy was determined, which offers potential applicability in the damage tolerance certification of future civil aircraft.
Keywords: composite structure, damage tolerance, impact threat, probabilistic
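The cutoff logic described above can be illustrated with a short sketch. Po and the assumed Pa are taken from the abstract; the log-linear coefficients a and b are hypothetical placeholders chosen for illustration (the study fitted them to its survey data):

```python
import math

P_o = 2.1e-4        # impact occurrence probability per flight hour (survey)
P_a = 1e-5          # assumed acceptable probability, Pa = Po x Pe (Limit Load level)
P_e_target = P_a / P_o   # required exceedance probability, ~0.048

# Log-linear exceedance model: log10(Pe) = a - b * E
# (a, b are hypothetical fit coefficients, not the paper's values)
a, b = 0.0, 0.039   # b in 1/J

# Solve Po * Pe(E) = Pa for the cutoff energy E
E_cutoff = (a - math.log10(P_e_target)) / b
print(round(E_cutoff, 1))   # ~34 J under these assumed coefficients
```

With differently fitted a and b the cutoff shifts accordingly; the 34 J value reported above corresponds to the coefficients fitted from the actual survey.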
Procedia PDF Downloads 308
203 The Effectiveness of Using Dramatic Conventions as the Teaching Strategy on Self-Efficacy for Children With Autism Spectrum Disorder
Authors: Tso Sheng-Yang, Wang Tien-Ni
Abstract:
Introduction and Purpose: Previous researchers have documented that children with ASD (Autism Spectrum Disorder) tend to escape from internal and external private events when they face difficult conditions that they cannot control or do not like. In particular, when children with ASD need to learn challenging tasks, such as Chinese language, their inappropriate behaviors occur more visibly. Recently, researchers have applied positive behavior support strategies to children with ASD to enhance their self-efficacy and thereby reduce their adverse behaviors. Thus, the purpose of this research was to design a series of lessons based on art therapy and to evaluate its effectiveness on the child’s self-efficacy. Method: This research was a single-case design study that recruited a high school boy with ASD. The study comprised three conditions. First, in the baseline condition, the researcher collected the participant’s self-efficacy scores before and after each session. In the intervention condition, the researcher used dramatic conventions to teach the child Chinese language twice a week. When the data were stable across three sessions, the study entered the maintenance condition, in which the researcher only collected self-efficacy scores, without further intervention, five times a month to assess the maintenance of effects. The time and frequency of data collection were identical across the three conditions. Concerning art therapy, the common approach (e.g., music, drama, or painting) is to use an art medium as the independent variable. Owing to the visual cues of the art medium, children with ASD can more easily achieve joint attention with teachers. Besides, children with ASD have difficulty understanding abstract objectives. Thus, using dramatic conventions helps them construct the environment and understand the context of Classical Chinese.
Through actual enactment, it helps children with ASD understand the context and build prior knowledge. Result: Based on a 10-point Likert scale, the following results were obtained. (a) In the baseline condition, the average self-efficacy score was 1.12 points, ranging from 1 to 2 points, with a level change of 0 points. (b) In the intervention condition, the average self-efficacy score was 7.66 points, ranging from 7 to 9 points, with a level change of 1 point. (c) In the maintenance condition, the average self-efficacy score was 6.66 points, ranging from 6 to 7 points, with a level change of 1 point. Concerning immediacy of change between the baseline and intervention conditions, the difference was 5 points. No overlaps were found between these two conditions. Conclusion: According to these results, using dramatic conventions as a teaching strategy for children with ASD is effective. The self-efficacy score increased immediately once the dramatic conventions commenced. Thus, we suggest that teachers can use this approach, adjusted to the student’s traits, to teach children with ASD difficult tasks.
Keywords: dramatic conventions, autism spectrum disorder, self-efficacy, teaching strategy
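The single-case metrics reported above (condition means, immediacy of change, overlap) can be computed as in the following sketch; the session-by-session scores are hypothetical, chosen only to be consistent with the ranges and means reported:

```python
# Hypothetical session-by-session self-efficacy scores (10-point scale),
# consistent with the ranges reported in the abstract
baseline     = [1, 1, 1, 1, 1, 1, 1, 2]     # mean ~1.12
intervention = [7, 7, 8, 8, 7, 9, 8, 7, 8]  # mean ~7.66

mean_base = sum(baseline) / len(baseline)
mean_int  = sum(intervention) / len(intervention)

# Immediacy of change: difference between the last baseline data point
# and the first intervention data point
immediacy = intervention[0] - baseline[-1]

# Percentage of non-overlapping data (PND): share of intervention points
# above the highest baseline point (1.0 means no overlap)
pnd = sum(x > max(baseline) for x in intervention) / len(intervention)

print(round(mean_base, 2), round(mean_int, 2), immediacy, pnd)
```

With these placeholder scores the sketch reproduces the reported pattern: a 5-point immediate jump and no overlap between the two conditions.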
Procedia PDF Downloads 83
202 The Theotokos of the Messina Missal as a Byzantine Icon in Norman Sicily: A Study on Patronage and Devotion
Authors: Jesus Rodriguez Viejo
Abstract:
The aim of this paper is to study cross-cultural interactions between the West and Byzantium, in the fields of art and religion, by analyzing the decoration of one luxury manuscript. The Spanish National Library is home to one of the most extraordinary examples of the illuminated manuscript production of Norman Sicily, the Messina Missal. Dating from the late twelfth century, this liturgical book was the result of the intense activity of artistic patronage of an Englishman, Richard Palmer. Appointed bishop of the Sicilian city in the second half of the century, Palmer set up a painting workshop attached to his cathedral. The illuminated manuscripts produced there combine a clear Byzantine iconographic language with a myriad of elements imported from France, such as a large number of decorated initials. The most remarkable depiction contained in the Missal is that of the Theotokos (fol. 80r). Its appearance immediately recalls portative Byzantine icons of the Mother of God in South Italy and Byzantium and implies the intervention of an artist familiar with icon painting. The richness of this image is clear proof of the prestige that Byzantine art enjoyed on the island after the Norman takeover. The production of the school of Messina under Richard Palmer could be considered a counterpart, in the field of manuscript illumination, of the court art of the Sicilian kings in Palermo and the impressive commissions for the cathedrals of Monreale and Cefalù. However, the ethnic composition of Palmer’s workshop has never been analyzed, and we therefore intend to shed light on the permanent presence of Greek-speaking artists in Norman Messina. The east of the island was the last stronghold of the Greeks, and soon after the Norman conquest, the earlier exchanges between the cities of this territory and Byzantium resumed, mainly by way of trade.
Palmer was not a Norman statesman but a churchman, and his love for religion and culture prevailed over the wars and struggles for power of the Sicilian kingdom in the central Mediterranean. On the other hand, the representation of the Theotokos can prove that Eastern devotional approaches to images were still common in the east of the island more than a century after the collapse of Byzantine rule. Local Norman lords repeatedly founded churches devoted to Greek saints, and medieval Greek-speaking authors were widely copied in Sicilian scriptoria. The Madrid Missal and its Theotokos are doubtless the product of Western initiative, but in a land culturally dominated by Byzantium. Westerners such as Palmer and his circle could have been immersed in this Hellenophile culture and therefore naturally predisposed to perform prayers and rituals, in both public and private contexts, linked to ideas and practices of Greek origin, such as the concept of the icon.
Keywords: history of art, byzantine art, manuscripts, norman sicily, messina, patronage, devotion, iconography
Procedia PDF Downloads 350
201 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implication on Forensic Laboratory Backlog
Authors: Ana Flavia Belchior De Andrade
Abstract:
Forensic laboratories all over the world face a great challenge to overcome waiting times and backlogs in many different areas. Many aspects contribute to this situation, such as an increase in drug complexity, an increment in the number of exams requested, and cuts in funding limiting laboratories’ hiring capacity. Altogether, these facts pose an essential challenge for forensic chemistry laboratories to keep both quality and response time within an acceptable range. In this paper, we analyze how the backlog affects test results and, in the end, the whole judicial system. In this study, data from marijuana samples seized by the Federal District Civil Police in Brazil between the years 2013 and 2017 were tabulated and the results analyzed and discussed. In the last five years, the number of petitioned exams increased from 822 in February 2013 to 1,358 in March 2018, representing an increase of 32% in 5 years, a rise of more than 6% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate. Output numbers have stagnated: under the current technology and analysis routine, the laboratory is running at full capacity. Marijuana detection is the most frequently requested exam, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most of the exams were performed no later than 60 days after receipt (77%), although some samples waited up to 30 months before being examined (0.65%). When a marijuana exam is delayed, we notice an increase in inconclusive results using thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, inconclusive results rise from 2% to 7%, and when storage exceeds 30 months, the inconclusive rate increases to 13%.
This is probably because Cannabis plants and preparations undergo oxidation during storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers other procedures that require at least two more working hours from our analysts (e.g., GC/MS analysis), and the report is delayed by at least one day. These additional procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only ones to be observed regarding backlogged cases; there are also social issues, as legal procedures can be delayed and the prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society: at some point, if the exam takes too long to be performed, an inconclusive result can turn into a negative one, and a criminal can be absolved on the basis of flawed expert evidence.
Keywords: backlog, forensic laboratory, quality management, accreditation
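As a rough illustration of the cost argument above, the sketch below estimates the extra analyst workload implied by the reported inconclusive-rate bands and the two additional working hours per inconclusive result. The annual sample mix across storage bands is a hypothetical assumption, not data from the study:

```python
# Inconclusive-rate bands reported in the abstract, keyed by storage time
rate_by_storage = {"<18 months": 0.02, "18-30 months": 0.07, ">30 months": 0.13}
extra_hours_per_inconclusive = 2   # GC/MS confirmation workload per case

# Hypothetical annual sample mix across the storage bands (assumed numbers)
samples = {"<18 months": 1200, "18-30 months": 120, ">30 months": 40}

# Expected inconclusives per band, and the total extra analyst hours implied
expected_inconclusives = {k: samples[k] * rate_by_storage[k] for k in samples}
total_extra_hours = sum(expected_inconclusives.values()) * extra_hours_per_inconclusive

print(expected_inconclusives)
print(total_extra_hours)
```

Even with this small hypothetical backlog in the older bands, the oldest samples contribute disproportionately to the rework load, which is the abstract's central point.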
Procedia PDF Downloads 122
200 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector
Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini
Abstract:
Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity, improving the monitoring of natural processes using naturally occurring radioactive tracers, and they also serve the nuclear industry linked to the mining sector. In geological samples, locating and identifying the radioactive-bearing minerals at the thin-section scale remains a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is of interest for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed with the help of Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy through a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm.
Even though the efficiency of energy spectrum reconstruction is low (4.4%) compared to that of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within them. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geomaterials by coupling it with scanning electron microscope characterizations. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products
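A Gaussian peak shape is a common assumption for such spectra; under that assumption, the reported relative resolution at the 4647 keV line converts to an absolute peak width (and Gaussian sigma, via FWHM = 2√(2 ln 2)·σ) as follows:

```python
import math

resolution = 0.172      # relative FWHM reported at the 4647 keV alpha line
peak_keV = 4647.0

fwhm_keV = resolution * peak_keV                          # absolute width
sigma_keV = fwhm_keV / (2 * math.sqrt(2 * math.log(2)))   # Gaussian sigma

print(round(fwhm_keV, 1), round(sigma_keV, 1))
```

This puts the peak width near 800 keV, which illustrates why an event selection on the linear energy distribution, rather than peak fitting alone, is needed to separate neighboring alpha lines of the 238U chain.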
Procedia PDF Downloads 151
199 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate, and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks, and support activities, heightening the risk of suboptimal system performance, premature obsolescence, or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion, and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, statistical performance features of this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity, and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
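A minimal non-deterministic tradespace sketch in the spirit of the approach described above: attribute performance for each candidate design is sampled from an assumed distribution and aggregated with a linear-additive multi-attribute utility. All designs, attributes, distributions, and weights here are hypothetical placeholders:

```python
import random

random.seed(1)

# Hypothetical sensor-system designs: (mean, spread) of three normalized
# attributes -- detection accuracy, latency utility, cost utility
designs = {
    "design_A": [(0.80, 0.05), (0.60, 0.10), (0.70, 0.05)],
    "design_B": [(0.70, 0.02), (0.75, 0.05), (0.65, 0.02)],
}
weights = [0.5, 0.3, 0.2]   # stakeholder attribute weights (assumed)

def expected_utility(attrs, n=10_000):
    """Monte Carlo estimate of a linear-additive multi-attribute utility."""
    total = 0.0
    for _ in range(n):
        # Sample each attribute from a Gaussian, clipped to the [0, 1] scale
        sample = [min(1.0, max(0.0, random.gauss(mu, sd))) for mu, sd in attrs]
        total += sum(w * u for w, u in zip(weights, sample))
    return total / n

scores = {name: expected_utility(a) for name, a in designs.items()}
best = max(scores, key=scores.get)
print(best, {k: round(v, 3) for k, v in scores.items()})
```

A full MATE study would sweep far larger design spaces and richer utility curves, but the core loop, sampling uncertain performance and ranking designs by expected multi-attribute utility, has this shape.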
Procedia PDF Downloads 183
198 Fodder Production and Livestock Rearing in Relation to Climate Change and Possible Adaptation Measures in Manaslu Conservation Area, Nepal
Authors: Bhojan Dhakal, Naba Raj Devkota, Chet Raj Upreti, Maheshwar Sapkota
Abstract:
A study was conducted to determine the production potential, nutrient composition, and variability of the most commonly available fodder trees along an altitudinal gradient, to help optimize dry matter supply during the winter lean period. The study was carried out from March to June 2012 in the Lho and Prok Village Development Committees of the Manaslu Conservation Area (MCA), located in the Gorkha district of Nepal. The other objective of the research was to assess the impact of climate change on livestock production in relation to feed availability. The study was conducted in two parts: social and biological. Accordingly, a household (HH) survey was conducted to collect primary data from 70 HHs, focusing on respondents’ perceptions of the impacts of climatic variability on feeding management. The second part assessed the yield potential and nutrient composition of the four most commonly available fodder trees (M. azedirach, M. alba, F. roxburghii, F. nemoralis) within two altitude ranges (1500-2000 masl and 2000-2500 masl), using an RCB design with a 2×4 factorial combination of treatments, each replicated four times. Results revealed that the majority of farmers perceived climatic changes most severely within the past five years. Farmers were using different adaptation technologies, such as collecting forage from the jungle, reducing unproductive animals, utilizing fodder trees, and feeding crop by-products during feed scarcity periods. Ranking of the different fodder trees on the basis of indigenous knowledge and experience revealed that F. roxburghii was the most preferred fodder tree species (index value 0.72) in terms of overall preferability, whereas M. azedirach had the highest growth and productivity (index value 0.77), and F. roxburghii had the highest adoptability (index value 0.69) and palatability (index value 0.69) as well.
Similarly, the fresh yield and dry matter yield of each fodder tree differed significantly (P < 0.01) between altitudes and among species. Yield analysis revealed that the highest dry matter (DM) yield (28 kg/tree) was obtained for F. roxburghii, although this remained statistically similar (P > 0.05) to the other treatments. On the other hand, most of the parameters, including ether extract (EE), acid detergent lignin (ADL), acid detergent fibre (ADF), cell wall digestibility (CWD), relative digestibility (RD), total digestible nutrients (TDN), and calcium (Ca), differed highly significantly (P < 0.01) among treatments. This indicates the scope for introducing productive and nutritive fodder tree species even at high altitude to help reduce the fodder scarcity problem during winter. The findings also revealed the scope for promoting all available local fodder tree species, as their crude protein contents were similar.
Keywords: fodder trees, yield potential, climate change, nutrient composition
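The abstract does not state the formula behind the reported index values; a weighted-rank preference index commonly used for such rankings, which yields values in [0, 1] consistent with those reported, can be sketched with hypothetical tallies:

```python
# Hypothetical ranking tallies for one fodder species: how many of the 70
# respondents assigned it each rank weight (5 = most preferred, 1 = least)
tallies = {5: 40, 4: 15, 3: 8, 2: 5, 1: 2}   # rank weight -> count
n_respondents = sum(tallies.values())          # 70 households
max_weight = max(tallies)

# Weighted preference index: sum of (weight x frequency), normalized by the
# maximum possible score, giving a value between 0 and 1
index = sum(w * c for w, c in tallies.items()) / (n_respondents * max_weight)
print(round(index, 2))
```

Under this (assumed) formula, a species ranked first by everyone would score 1.0, so the reported values of 0.69 to 0.77 indicate strong but not unanimous preference.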
Procedia PDF Downloads 310
197 A Critical Analysis of the Current Concept of Healthy Eating and Its Impact on Food Traditions
Authors: Carolina Gheller Miguens
Abstract:
Feeding is, and should be, pleasurable for living beings, so that they desire to nourish themselves while preserving the continuity of the species. Social rites usually revolve around the table and are closely linked to the cultural traditions of each region and social group. Since the beginning, food has been closely linked with the products each region provides and related to the respective seasons of production. With globalization and the facilities of modern life, we are able to find an ever-increasing variety of products at any time of the year on supermarket shelves. These lifestyle changes end up directly influencing food traditions. With the era of uncontrolled obesity, caused by the dazzle of the large and varied supply of low-priced, ultra-processed industrial products, now in the past, today we are living in a time when people are putting aside the pleasure of eating to eat exclusively foods dictated by the media as healthy. Recently, the medicalization of food has become so present in the daily life of our society that, almost without realizing it, we make food choices conditioned by studies of the properties of these foods. The fact that people are more attentive to their health is interesting. However, when this care becomes an obsessive disorder that imposes itself on the pleasure of eating and extinguishes traditional customs, it becomes dangerous for our recognition as citizens belonging to a culture and society. This new way of living generates a rupture with the social environment of origin, possibly exposing old traditions to oblivion after two or three generations. Based on these facts, the present study analyzes the social transformations in our society that triggered the current medicalization of food. In order to clarify what actually constitutes a healthy diet, this research proposes a critical analysis of the subject, aiming to understand nutritional rationality and how it acts in the medicalization of food.
A wide bibliographic review on the subject was carried out, followed by exploratory research in online (especially social) media, a relevant source in this context due to the perceived influence of such media on contemporary eating habits. Finally, these data were cross-referenced, critically analyzing the current state of the concept of healthy eating and the medicalization of food. Throughout this research, it was noticed that people increasingly seek information about the nutritional properties of food, but instead of seeking the benefits of the products traditionally eaten in their social environment, they incorporate external elements that often bring benefits similar to those of the foods already consumed. This is because access to information is directed by the media and exalts the exotic, since this arouses more interest in the general population. Efforts must be made to clarify that traditional products are also healthy foods, rich in history, memory, and tradition, and cannot be replaced by a standardized diet little concerned with the construction of taste and pleasure, treating food as if it were a medicinal product.
Keywords: food traditions, food transformations, healthy eating, medicalization of food
Procedia PDF Downloads 328
196 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies in a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, to the FEA solver throughout the simulation; likewise, the mechanical changes are updated back to the CFD solver to include geometric changes within the solution. For the CFD calculations, the solver Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme being adapted to focus solely on fire problems. The applicability of FDS has been validated in past benchmark cases.
In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers together, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables. The comparison data include gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered. Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids
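The one-way coupling step described above can be sketched in a few lines: a thermal history recorded by a CFD device is resampled onto the FEA solver's time grid before being applied as a boundary condition. This is a minimal illustration, not the FDS-2-ABAQUS implementation; the data values and function names below are hypothetical.

```python
import numpy as np

def read_devc_history(rows):
    """Parse (time, gas temperature) pairs, as exported from a CFD device record."""
    data = np.asarray(rows, dtype=float)
    return data[:, 0], data[:, 1]

def thermal_history_for_fea(cfd_times, cfd_temps, fea_times):
    """One-way coupling: resample the CFD thermal history onto the FEA time grid."""
    return np.interp(fea_times, cfd_times, cfd_temps)

# Illustrative gas-temperature history (seconds, deg C) -- not measured data
t, temp = read_devc_history([(0, 20), (60, 300), (120, 650), (180, 800)])
fea_grid = np.arange(0, 181, 30)
boundary_temps = thermal_history_for_fea(t, temp, fea_grid)
```

A two-way scheme would repeat this exchange every step and also push the deformed geometry back to the CFD side.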
Procedia PDF Downloads 89
195 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work aims to present a technology that resolves this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened, before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate. Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
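A minimal numerical sketch of the corrector construction described above (centering, a Kaiser-style eigenvalue cut, whitening, and a separating hyperplane) might look as follows. It is an illustration under simplifying assumptions, using a single hyperplane rather than one per cluster, and all names and data are hypothetical.

```python
import numpy as np

def fit_corrector(S, Y):
    """Shallow corrector sketch: centre the measurement set S, keep the
    above-average eigenvalues (a Kaiser-style rule), whiten, and place one
    hyperplane between the bulk and the error cluster Y."""
    mu = S.mean(axis=0)
    cov = np.cov(S - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    keep = vals > vals.mean()                  # Kaiser-style eigenvalue cut
    W = vecs[:, keep] / np.sqrt(vals[keep])    # whitening projection
    w = ((Y - mu) @ W).mean(axis=0)            # normal pointing at the error centroid
    theta = 0.5 * (w @ w)                      # threshold halfway to that centroid
    return mu, W, w, theta

def flags_as_error(x, mu, W, w, theta):
    """Report an input as a likely prediction error."""
    return float((x - mu) @ W @ w) > theta

rng = np.random.default_rng(0)
M = rng.normal(size=(300, 5))                  # correct predictions (synthetic)
Y = rng.normal(size=(30, 5)); Y[:, 0] += 6.0   # incorrect predictions, shifted
mu, W, w, theta = fit_corrector(np.vstack([M, Y]), Y)
```

Inputs landing beyond the hyperplane are routed for correction rather than accepted, which is the self-correcting behaviour the abstract describes.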
Procedia PDF Downloads 100
194 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology
Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert
Abstract:
Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve positive energy balances, as required for Positive Energy Districts (PEDs), the use of roofs alone is not sufficient in dense urban areas. However, the increasing share of windows significantly reduces the façade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionality of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainability in the use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet the required demands, such as maximum power generation, glare prevention, high daylight autonomy, and avoidance of summer overheating, but also the use of passive solar gains in wintertime. Today, three-dimensional geometric information on outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for simulation from BIM models and to make it usable for calculations and coupled simulations. The investigated object is uploaded as an IFC file to this web application and includes the object itself as well as the neighboring buildings and possible remote shading. This tool uses a ray tracing method to determine possible glare from solar reflections off neighboring buildings as well as near and far shadows per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated by taking weather data into account.
This optimized daylight assessment per window makes it possible to estimate the potential power generation of the PV integrated in the venetian blind, as well as the daylight and solar entry. As a next step, these calculation results, together with all parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to advance the coordination between the BIM model and the coupled building simulation, bringing together the shading and daylighting system, the artificial lighting system, and maximum power generation in one control system. In the research project Powershade, an AI-based control concept for PV-integrated façade elements using coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company. Keywords: BIPV, building simulation, optimized control strategy, planning tool
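The per-window shadow test at the core of such a ray tracing tool can be illustrated with a standard slab test against an axis-aligned box standing in for a neighbouring building. This is a simplified sketch (a real tool works on triangulated IFC geometry and also traces reflected rays for glare); all geometry below is invented.

```python
import numpy as np

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: does a ray from the window towards the sun intersect the box?"""
    d = np.where(direction == 0, 1e-12, direction)  # avoid division by zero
    t1 = (np.asarray(box_min) - origin) / d
    t2 = (np.asarray(box_max) - origin) / d
    t_near = np.minimum(t1, t2).max()
    t_far = np.maximum(t1, t2).min()
    return bool(t_far >= max(t_near, 0.0))

def window_is_shaded(window_pos, sun_dir, buildings):
    """A window is shaded if any neighbouring volume blocks the sun direction."""
    return any(ray_hits_box(window_pos, sun_dir, lo, hi) for lo, hi in buildings)

window = np.array([0.0, 0.0, 1.0])                                  # window centre
neighbour = (np.array([5.0, -2.0, 0.0]), np.array([10.0, 2.0, 20.0]))  # bounding box
```

Repeating this test per window for every sun position in a weather file yields the annual sunlight estimate the abstract refers to.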
Procedia PDF Downloads 110
193 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen
Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin
Abstract:
Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to manufacturer package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95% CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay. None were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%).
Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity performance equivalent to the Abbott ARCHITECT HBsAg Qualitative II assay, with an average bleed difference since first reactive bleed of 0.13. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners. Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay
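The confidence intervals quoted above are consistent with exact (Clopper-Pearson) binomial intervals; a sketch of that standard calculation is below. This is an independent illustration, not the study's statistical code, and it assumes (from the 99.95% figure) that roughly three of the six false positives occurred among the 6,047 donor samples.

```python
import math

def binom_tail_ge(n, x, p):
    """P(X >= x) for X ~ Binomial(n, p), summed in log space for stability."""
    total = 0.0
    for k in range(x, n + 1):
        log_c = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        total += math.exp(log_c + k * math.log(p) + (n - k) * math.log(1 - p))
    return total

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided confidence interval for x successes out of n trials."""
    def solve(f):
        lo, hi = 1e-12, 1 - 1e-12
        for _ in range(100):              # bisection on a monotone tail probability
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return lo
    lower = 0.0 if x == 0 else solve(lambda p: binom_tail_ge(n, x, p) < alpha / 2)
    upper = 1.0 if x == n else solve(lambda p: binom_tail_ge(n, x + 1, p) < 1 - alpha / 2)
    return lower, upper

# Donor specificity: assuming 3 false positives among 6,047 negative samples
spec_lo, spec_hi = clopper_pearson(6047 - 3, 6047)
```

The resulting interval is close to the reported 99.86 – 99.99%.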
Procedia PDF Downloads 90
192 Nonviolent Communication and Disciplinary Area of Social Communication: Case Study on the International Circulation of Ideas from a Brazilian Perspective
Authors: Luiza Toschi
Abstract:
This work presents part of an empirical and theoretical master's meta-research project interested in the relationship between the disciplinary area of Social Communication, investigated here through the characteristics of the Bourdieusian scientific field, and the emergence of public interest in Nonviolent Communication (NVC) in Brazil and the world. To this end, the state of the art of this conceptual and practical relationship is investigated based on scientific productions available in spaces of academic credibility, such as conferences and scientific journals renowned in the field. From there, the agents, and the sociological aspects that lead them to contribute or not to scientific production, are mapped in Brazil and the world. In this work, a brief dive into the international context is presented to understand whether and how Nonviolent Communication permeates scientific production in communication in a systematic way. Using three accessible articles published between 2013 and 2022 in the 117 journals classified in quartile Q1 of the Journal Ranking of Communication, the international production on the subject is compared with the Brazilian one in its context. The social conditions of the international circulation of ideas are thus discussed. Science is a product of its social environment, arising from relations of interest and power that compete in the political dimension as much as in the epistemological dimension. In this way, scientific choices are linked to the resources mobilized from or through the prestige and recognition of peers. In this sense, an object stands out to a scientist for its academic value, but also, and inseparably, for its social interest within the collective, the scientist's social stratification, and the context of legitimacy created in their surroundings, influenced by cultural universalism. In Brazil, three published articles were found in congresses and journals that mention NVC in their abstract or keywords.
All were written by Public Relations undergraduate students. Among the more experienced researchers who supervised or validated the publications, it is possible to find two professionals interested in the Culture of Peace and dialogy. Likewise, internationally, only three of the articles found mention the term in their abstract or title. Two analyze journalistic coverage based on the principles of NVC and Peace Journalism. The third is by one of the Brazilian researchers identified as interested in dialogic practices, who analyzes audiovisual material and promotes epistemological reflections. On the one hand, some characteristics inside and outside Brazil are similar: small samples, a relationship with peace studies, and female researchers, two of whom are Brazilian. On the other hand, differences are evident: within the country the subject is mostly approached through Organizational Communication, while outside it this intersection is not presented explicitly. Furthermore, internationally there is an interest in analyzing material from the perspective of NVC, which has not so far been found in publications in Brazil. Up to the present moment, it is possible to presume that, universally, the legitimacy of the topic is sought through its association with conflict conciliation research and communication for peace. Keywords: academic field sociology, international circulation of ideas, meta research in communication, nonviolent communication
Procedia PDF Downloads 39
191 Integrative-Cyclical Approach to the Study of Quality Control of Resource Saving by the Use of Innovation Factors
Authors: Anatoliy A. Alabugin, Nikolay K. Topuzov, Sergei V. Aliukov
Abstract:
It is well known that when we quantitatively evaluate the quality control of certain economic processes (in particular, resource saving) with the help of innovation factors, three groups of problems arise: high uncertainty in the indicators of quality management, their considerable ambiguity, and the high cost of large-scale research. These problems stem from the contradictory objectives of enhancing quality control in accordance with innovation factors while preserving the economic stability of the enterprise. Such factors are felt most acutely in countries lagging behind the developed economies of the world by criteria of innovativeness and effectiveness of resource-saving management. In our opinion, the following two methods for reconciling the above-mentioned objectives and reducing the conflict between these problems solve the task most effectively: 1) the use of paradigms and concepts of evolutionary improvement of the quality of resource-saving management in the cycle "from the design of an innovative product (technology) to its commercialization and the updating of customer-value parameters"; 2) the application of the so-called integrative-cyclical approach, consistent with the complexity and type of the concept, allowing a quantitative assessment of the stages of achieving consistency between these objectives (from a baseline of imbalance, through compromise, to the achievement of positive synergies).
For implementation, the following mathematical tools are included in the integrative-cyclical approach: index-factor analysis (to identify the most relevant factors); regression analysis of the relationship between quality control and the factors; use of the analysis results in a fuzzy-set model (to adjust the feature space); and methods of non-parametric statistics (to decide on the completion or repetition of a cycle of the approach, depending on the direction and closeness of the rank correlation between indicators of goal disbalance). The repetition is performed after partial substitution of technical and technological factors ("hard") by management factors ("soft") in accordance with our proposed methodology. Testing of the proposed approach has shown that, in comparison with world practice, there are opportunities to improve the quality of resource-saving management using innovation factors. We believe that implementing this promising research, to provide consistent management decisions that reduce the severity of the above-mentioned contradictions and increase the validity of the choice of resource-development strategies in terms of quality-management parameters and enterprise sustainability, is a worthwhile prospect. Our existing experience in the field of quality resource-saving management and the achieved level of scientific competence of the authors allow us to hope that the use of the integrative-cyclical approach to the study and evaluation of the resulting and factor indicators will help raise resource-saving characteristics up to the values existing in developed economies of the post-industrial type. Keywords: integrative-cyclical approach, quality control, evaluation, innovation factors, economic sustainability, innovation cycle of management, disbalance of goals of development
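The non-parametric decision step above, repeating the cycle while the rank correlation between goal-disbalance indicators and the factor mix remains strong, can be sketched with a plain Spearman coefficient (no-ties case). The threshold value is an illustrative assumption, not one proposed by the authors.

```python
def ranks(xs):
    """Rank positions (1..n) of a series, assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman rank correlation between two indicator series."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

def repeat_cycle(disbalance_indicators, factor_indicators, threshold=0.5):
    """Repeat the improvement cycle while goal disbalance is still strongly
    coupled to the current factor mix (hypothetical threshold)."""
    return abs(spearman(disbalance_indicators, factor_indicators)) >= threshold
```

Once the correlation falls below the threshold, hard factors have been sufficiently substituted by soft ones and the cycle terminates.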
Procedia PDF Downloads 245
190 Supermarket Shoppers Perceptions to Genetically Modified Foods in Trinidad and Tobago: Focus on Health Risks and Benefits
Authors: Safia Hasan Varachhia, Neela Badrie, Marsha Singh
Abstract:
Genetic modification of food is an innovative technology that offers a host of benefits and advantages to consumers. Consumer attitudes towards GM foods and GM technologies can be identified as a major determinant conditioning market forces and encouraging policy makers and regulators to recognize the significance of consumer influence on the market. This study aimed to investigate and evaluate the extent of consumer awareness, knowledge, perception, and acceptance of GM foods and their associated health risks and benefits in Trinidad and Tobago, West Indies. The specific objectives of this study were to determine consumer awareness of GM foods, ascertain consumer perspectives on the health and safety risks and ethical issues associated with GM foods, and determine whether labeling of GM foods and ingredients would influence consumers' willingness to purchase GM foods. A survey comprising a questionnaire of 40 questions, both open-ended and close-ended, was administered to 240 shoppers in small, medium, and large-scale supermarkets throughout Trinidad between April and May 2015 using convenience sampling. This survey investigated consumer awareness, knowledge, perception, and acceptance of GM foods and their associated health risks and benefits. The data were analyzed using SPSS 19.0 and Minitab 16.0. One-way ANOVA investigated the effects of supermarket category and knowledge scores on shoppers' awareness, knowledge, perception, and acceptance of GM foods. Linear regression tested whether demographic variables (category of supermarket, age of consumer, level of education) were useful predictors of consumers' knowledge of GM foods. More than half of respondents (64.3%) were aware of GM foods and GM technologies, 28.3% of consumers indicated the presence of GM foods in local supermarkets, and 47.1% claimed to be knowledgeable about GM foods.
Furthermore, significant associations (P < 0.05) were observed between demographic variables (age, income, and education) and consumer knowledge of GM foods. Also, significant differences (P < 0.05) were observed between demographic variables (education, gender, and income) and consumer knowledge of GM foods. In addition, age, education, gender, and income (P < 0.05) were useful predictors of consumer knowledge of GM foods. There was a contradiction: whilst 35% of consumers considered GM foods safe for consumption, 70% of consumers were wary of the unknown health risks of GM foods. About two-thirds of respondents (67.5%) considered the creation of GM foods morally wrong and unethical. Regarding GM food labeling preferences, 88% of consumers preferred mandatory labeling of GM foods, and 67% of consumers specified that any food product containing a trace of GM food ingredients requires mandatory GM labeling. Also, despite the declaration of GM food ingredients on food labels and the reassurance of their safety for consumption by food safety and regulatory institutions, the majority of consumers (76.1%) still preferred conventionally produced foods over GM foods. The study revealed the need to inform shoppers of the presence of GM foods and technologies, to present the scientific evidence as to the benefits and risks, and to develop a policy on labeling so that informed choices can be made. Keywords: genetically modified foods, income, labeling, consumer awareness, ingredients, morality and ethics, policy
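The one-way ANOVA used here (run in SPSS) tests whether mean knowledge scores differ across supermarket categories; the F statistic it reports can be computed directly. A self-contained sketch with made-up scores, not the study's data:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square over
    within-group mean square."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical knowledge scores by supermarket category
small, medium, large = [1, 2, 3], [2, 3, 4], [3, 4, 5]
f_stat = one_way_anova_f([small, medium, large])
```

The F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the reported P value.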
Procedia PDF Downloads 329
189 Production of Functional Crackers Enriched with Olive (Olea europaea L.) Leaf Extract
Authors: Rosa Palmeri, Julieta I. Monteleone, Antonio C. Barbera, Carmelo Maucieri, Aldo Todaro, Virgilio Giannone, Giovanni Spagna
Abstract:
In recent years, considerable interest has been shown in the functional properties of foods, and a relevant role has been played by phenolic compounds, which are able to scavenge free radicals. A more sustainable agriculture has to emerge to guarantee food supply over the next years. Wheat, corn, and rice are the most commonly cultivated cereals, but other cereal species, such as barley, can also be appreciated for their peculiarities. Barley (Hordeum vulgare L.) is a C3 winter cereal that shows high resistance to drought and salt stress. There is growing interest in barley as an ingredient for the production of functional foods due to its high content of phenolic compounds and beta-glucans. In this respect, the possibility of separating specific functional fractions from food industry by-products looks very promising. Olive leaves represent a quantitatively significant by-product of olive grove farming and are an interesting source of phenolic compounds. In particular, oleuropein, which provides important nutritional benefits, is the main phenolic compound in olive leaves and ranges from 17% to 23% depending upon the cultivar and growing season. Together with oleuropein and its derivatives (e.g., dimethyloleuropein, oleuropein diglucoside), olive leaves further contain tyrosol, hydroxytyrosol, and a series of secondary metabolites structurally related to them: verbascoside, ligstroside, hydroxytyrosol glucoside, tyrosol glucoside, oleuroside, oleoside-11-methyl ester, and nuzhenide. Several flavonoids, flavonoid glycosides, and phenolic acids have also been described in olive leaves. The aim of this work was the production of a functional food with a higher content of polyphenols and the evaluation of its shelf life. Organic durum wheat and barley grains, which contain higher levels of phenolic compounds, were used for the production of crackers. Olive leaf extract (OLE) was obtained from cv. 'Biancolilla' by an aqueous extraction method.
Two baking trials were performed, with both organic durum wheat and barley flours, adding olive leaf extract. Control crackers, made for comparison, were produced with the same formulation, replacing OLE with water. Total phenolic compounds, moisture content, water activity, and textural properties at different storage times were determined to evaluate the shelf life of the products. Our preliminary results showed that the enriched crackers had higher phenolic content and antioxidant activity than the control. The use of olive leaf extracts in cracker production could be a good vehicle for functional ingredients, because bakery items are consumed daily and have a long shelf life. Keywords: barley, functional foods, olive leaf, polyphenols, shelf life
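Total phenolic content is typically read off a calibration line, e.g. gallic acid standards in a Folin-Ciocalteu assay; the assay, units, and readings below are assumptions for illustration, since the abstract does not name the method. A minimal sketch of the calibration fit and read-back:

```python
def fit_line(x, y):
    """Ordinary least squares for a calibration line y = a*x + b
    (absorbance vs standard concentration)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def phenolics_from_absorbance(absorbance, a, b):
    """Invert the calibration line to read concentration from absorbance."""
    return (absorbance - b) / a

# Hypothetical gallic acid standards (mg/L) and absorbance readings
slope, intercept = fit_line([0, 50, 100, 200], [0.010, 0.110, 0.210, 0.410])
conc = phenolics_from_absorbance(0.210, slope, intercept)  # mg GAE/L, illustrative
```

Tracking this value at each storage time point alongside water activity and texture gives the shelf-life curves the study describes.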
Procedia PDF Downloads 302
188 Managing Human-Wildlife Conflicts Compensation Claims Data Collection and Payments Using a Scheme Administrator
Authors: Eric Mwenda, Shadrack Ngene
Abstract:
Human-wildlife conflicts (HWCs) are the main threat to conservation in Africa. This is because the needs of wildlife overlap with those of humans. In Kenya, about 70% of wildlife occurs outside protected areas. As a result, wildlife and human ranges overlap, causing HWCs. HWCs in Kenya occur in the drylands adjacent to protected areas. The top five counties with the highest incidence of HWC are Taita Taveta, Narok, Lamu, Kajiado, and Laikipia. The wildlife species most commonly responsible for HWCs are elephants, buffaloes, hyenas, hippos, leopards, baboons, monkeys, snakes, and crocodiles. To ensure that individuals affected by the conflicts are compensated, Kenya has developed a model of HWC compensation claims data collection and payment. We collected data on HWC from all eight Kenya Wildlife Service (KWS) Conservation Areas from 2009 to 2019. Additional data were collected from stakeholders' consultative workshops held in the Conservation Areas and from a literature review regarding payment for injuries and insurance schemes already being practiced. This was followed by a description of the claims administration process and the calculation of the pricing of compensation claims. We further developed a digital platform for data capture and processing of all reported conflict cases and payments. Our product recognized four categories of HWC (i.e., human death and injury, property damage, crop destruction, and livestock predation). Personal bodily injury and human death were compensated based on the Continental Scale of Benefits. We proposed a maximum of Kenya Shillings (KES) 3,000,000 for death. Medical, pharmaceutical, and hospital expenses were capped at a maximum of KES 150,000, and funeral costs at KES 50,000. Pain and suffering were proposed to be paid for 12 months at the rate of KES 13,500 per month. Crop damage was to be based on farm input costs, at a maximum of KES 150,000 per claim.
Livestock predation leading to death was based on the Tropical Livestock Unit (TLU), equivalent to KES 30,000: Cattle (1 TLU = KES 30,000), Camel (1.4 TLU = KES 42,000), Goat (0.15 TLU = KES 4,500), Sheep (0.15 TLU = KES 4,500), and Donkey (0.5 TLU = KES 15,000). Property destruction (buildings, outside structures, and harvested crops) was capped at KES 150,000 per claim. We conclude that it is possible to use an administrator to collect data on HWC compensation claims and make payments using technology. The success of the new approach will depend on a piloting program. We recommend that a pilot scheme be initiated for eight months in Taita Taveta, Kajiado, Baringo, Laikipia, Narok, and Meru Counties. This will test the claims administration process as well as harmonize data collection methods. The results of this pilot will be crucial in adjusting the scheme before a country-wide rollout. Keywords: human-wildlife conflicts, compensation, human death and injury, crop destruction, predation, property destruction
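The livestock and capped-claim pricing above translates directly into code; a sketch using the figures stated in the scheme (function and key names are hypothetical):

```python
# TLU values in KES as proposed in the scheme (1 TLU = KES 30,000)
TLU_KES = 30_000
LIVESTOCK_TLU = {"cattle": 1.0, "camel": 1.4, "goat": 0.15, "sheep": 0.15, "donkey": 0.5}
CAPS_KES = {"crop": 150_000, "property": 150_000, "medical": 150_000, "funeral": 50_000}

def livestock_payout(animal, head=1):
    """Predation payout: number of animals lost times the species' TLU value."""
    return head * LIVESTOCK_TLU[animal] * TLU_KES

def capped_claim(category, assessed_loss):
    """Crop, property, medical and funeral claims pay assessed cost up to the cap."""
    return min(assessed_loss, CAPS_KES[category])
```

A digital claims platform would apply these rules to each verified incident record before payment is released.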
Procedia PDF Downloads 55
187 India’s Energy Transition, Pathways for Green Economy
Authors: B. Sudhakara Reddy
Abstract:
In the modern economy, energy is fundamental to virtually every product and service in use, and the economy has developed in dependence on abundant, easy-to-transform, polluting fossil fuels. On the one hand, increases in population and income levels, combined with increased per capita energy consumption, require energy production to keep pace with economic growth; on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. Hence, it is important to decouple economic growth from environmental degradation, and the search for green energy involving affordable, low-carbon, renewable energies has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which require not only the development and use of new technologies but also changes in user behaviour, policy, and regulation. The scenarios developed are a baseline business-as-usual (BAU) scenario and a green energy (GE) scenario. The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue into the future. India's population is projected to grow by 23% during 2010-2030, reaching 1.47 billion. Real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion, or $3,586 per capita (base year 2010). Due to the increases in population and GDP, primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036.
Carbon emissions are projected to increase 2.5 times from 2010, reaching 3,440 million tonnes, with per capita emissions of 2.2 tons per annum. However, carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. Under the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 23% over BAU. The penetration of renewable energy resources will reduce total primary energy demand by 23% under GE. The reduction in fossil fuel demand and the focus on clean energy will reduce energy intensity to 0.21 (TOE/US$ of GDP) and carbon intensity to 0.42 (ton/US$ of GDP) under the GE scenario. The study develops new 'pathways out of poverty' by creating more than 10 million jobs, thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies. The challenges on this path lie in the socio-economic-political domain. However, to attain a green economy, the appropriate policy package should be in place; this will be critical in determining the kinds of investments needed and the incidence of costs and benefits. These results provide a basis for policy discussions on investments, policies, and incentives to be put in place by national and local governments. Keywords: energy, renewables, green technology, scenario
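The BAU intensity figures can be re-derived from the headline projections (3,440 Mt CO2, 1.47 billion people, US$5.1 trillion GDP); small differences from the quoted values reflect rounding in the source. A quick arithmetic check:

```python
def per_capita_emissions(total_mt, population):
    """Per-capita emissions (t/person/yr) from total emissions in million tonnes."""
    return total_mt * 1e6 / population

def carbon_intensity(total_mt, gdp_usd):
    """Carbon intensity in tonnes per thousand US$ of GDP."""
    return total_mt * 1e6 / (gdp_usd / 1e3)

# BAU 2030 projections quoted in the abstract
pc = per_capita_emissions(3440, 1.47e9)   # roughly 2.3 t/person/yr
ci = carbon_intensity(3440, 5.1e12)       # roughly 0.67 t per thousand US$
```

The same two functions applied to the GE projections reproduce the scenario comparison reported above.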
Procedia PDF Downloads 248