Search results for: respiratory difficulty
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1327

127 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants

Authors: S. Ebadzadsahraei, H. Kazemian

Abstract:

The main objective of this research was to measure particulate matter (PM₂.₅) and volatile organic compounds (VOCs), two classes of air pollutants, in a Prince George (PG) neighborhood during the warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential sources of emission. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, that are the primary contributors of industrial VOCs. This research project, the first study of its kind in the region, measures the physical and chemical properties of particulate air pollutants (PM₂.₅) at the neighborhood scale and quantifies VOC levels in the city's air samples. One of the outcomes of this project is updated data on the PM₂.₅ and VOC inventory in the selected neighborhoods. For examining PM₂.₅ chemical composition, an elemental analysis methodology was developed to measure major trace elements, including but not limited to mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and the treatment of respiratory and other related diseases. Mixed cellulose ester (MCE) filters were selected as suitable filters for PM₂.₅ air sampling. Chemical analyses were conducted using Inductively Coupled Plasma Mass Spectrometry (ICP-MS) for elemental analysis. VOC measurement of the air samples was performed using a Gas Chromatography-Flame Ionization Detector (GC-FID) and Gas Chromatography-Mass Spectrometry (GC-MS), allowing for quantitative measurement of VOC molecules at sub-ppb levels. In this study, a sorbent tube (Anasorb CSC, coconut charcoal; 6 x 70 mm, 2 sections, 50/100 mg sorbent, 20/40 mesh) was used for VOC air sampling, followed by solvent extraction and solid-phase microextraction (SPME) to prepare samples for measurement on a GC-MS/FID instrument. Air sampling for both PM₂.₅ and VOCs was conducted in the summer and winter seasons for comparison. Average PM₂.₅ concentrations differed markedly between wildfire and ordinary daily samples: 83.0 μg/m³ during wildfire periods versus 23.7 μg/m³ for daily samples. Higher concentrations of iron, nickel, and manganese were found in all samples, and mercury was detected in some samples; at sufficiently high doses, these elements can have negative health effects.

Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compound, VOCs

Procedia PDF Downloads 116
126 An Evaluation of a First Year Introductory Statistics Course at a University in Jamaica

Authors: Ayesha M. Facey

Abstract:

The evaluation sought to determine the factors associated with the high failure rate among students taking a first-year introductory statistics course. Using Tyler's Objective-Based Model, the main objectives were: to assess the effectiveness of the lecturer's teaching strategies; to determine the proportion of students who attend lectures and tutorials frequently and the impact of infrequent attendance on performance; to determine how the assigned activities assisted students' understanding of the course content; to ascertain the issues faced by students in understanding the course material and obtain possible solutions to these challenges; and to determine whether the learning outcomes had been achieved, based on an assessment of the second in-course examination. A quantitative survey research strategy was employed, and the study population was students enrolled in semester one of the academic year 2015/2016. A convenience sampling approach was employed, resulting in a sample of 98 students. Primary data were collected using self-administered questionnaires over a one-week period. Secondary data were obtained from the results of the second in-course examination. Data were entered and analyzed in SPSS version 22, and both univariate and bivariate analyses were conducted on the information obtained from the questionnaires. Univariate analyses described the sample through means, standard deviations, and percentages, while bivariate analyses were done using Spearman's rho correlation coefficient and chi-square tests. For the secondary data, an item analysis was performed to obtain the reliability of the examination questions, the difficulty index, and the discrimination index. The examination results also provided information on the students' weak areas and highlighted the learning outcomes that were not achieved. Findings revealed that students were more likely to participate in lectures than tutorials and that attendance was high for both lectures and tutorials. There was a significant relationship between participation in lectures and performance on the examination. However, a high proportion of students had been absent from three or more tutorials as well as lectures. A higher proportion of students indicated that they only sometimes completed the assignments given in lectures, while they rarely completed tutorial worksheets. Students who were more likely to complete their assignments were significantly more likely to perform well on the examination. Additionally, students faced a number of challenges in understanding the course content, and the topics of probability, the binomial distribution, and the normal distribution were the most challenging. The item analysis also highlighted these topics as problem areas. Difficulty with mathematics, application, and analysis were the major challenges faced by students, and most students indicated that some of these challenges could be alleviated if additional examples were worked in lectures and they were given more time to solve questions. Analysis of the examination results showed that a number of learning outcomes were not achieved for several topics. Based on the findings, recommendations were made suggesting adjustments to grade allocations, the delivery of lectures, and methods of assessment.
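
As a concrete illustration of the item analysis mentioned above, the sketch below computes a difficulty index (the proportion of correct responses per item) and a simple upper-lower discrimination index, the two classical quantities named in the abstract. The response matrix, the 27% group split, and the numbers are hypothetical; the abstract does not specify the exact formulas used, so this is a minimal sketch of the standard approach rather than the authors' implementation.

```python
# Minimal sketch of classical item analysis (difficulty and discrimination indices).
# The response data below are hypothetical; 1 = correct, 0 = incorrect.
import numpy as np

responses = np.array([          # rows = students, columns = exam items
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
])

totals = responses.sum(axis=1)                  # each student's total score
order = np.argsort(totals)[::-1]                # rank students from high to low

n_group = max(1, int(round(0.27 * len(responses))))  # conventional 27% split
upper = responses[order[:n_group]]
lower = responses[order[-n_group:]]

difficulty = responses.mean(axis=0)             # proportion answering each item correctly
discrimination = upper.mean(axis=0) - lower.mean(axis=0)  # upper-minus-lower proportion

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"Item {i}: difficulty = {p:.2f}, discrimination = {d:.2f}")
```

Items with very low difficulty or near-zero discrimination values would flag problem topics of the kind the abstract reports (probability, the binomial distribution, and the normal distribution).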

Keywords: evaluation, item analysis, Tyler’s objective based model, university statistics

Procedia PDF Downloads 170
125 Early Return to Play in Football Player after ACL Injury: A Case Report

Authors: Nicola Milani, Carla Bellissimo, Davide Pogliana, Davide Panzin, Luca Garlaschelli, Giulia Facchinetti, Claudia Casson, Luca Marazzina, Andrea Sartori, Simone Rivaroli, Jeff Konin

Abstract:

The patient is a 26-year-old male amateur football player from Milan, Italy (81 kg; 185 cm; BMI 23.6 kg/m²). He sustained a non-contact anterior cruciate ligament tear to his right knee in June 2021. In September 2021, his right knee ligament was reconstructed using a semitendinosus graft. The injury occurred during a football match on natural grass with typical shoes on a warm day (32 degrees Celsius). Playing as a defender, he sustained the injury during a change of direction in which the foot remained planted on the grass. He felt pain and was unable to continue playing the match. The surgeon approved his rehabilitation to begin two weeks post-operatively. The initial physiotherapy assessment established a plan of two training sessions per day within the first three months. In the first three weeks, pain was 4/10 on the Numerical Rating Scale (NRS), there was no swelling, range of motion was 0-110°, and he had difficulty fully extending his knee and minimal quadriceps activation. Crutches were discontinued at four weeks with improved walking. Active exercise, electrostimulation, physical therapy, massage, osteopathy, and passive motion were initiated. At week 6, he completed his first functional movement screen; the score was 16/21 with no pain and no swelling. At week 8, the isokinetic test showed a 23% deficit between the two legs in maximum strength (at 90°/s). At week 10, he had improved to a 15% injury-induced deficit, which suggested he was ready to start running. At week 12, the athlete underwent his first threshold test. At week 16, he performed his first return-to-sport movement assessment, which revealed a 10% strength difference between the legs, and he had his second threshold test. At week 17, his first on-field test revealed a 5% deficit between the two legs in the hop test. At week 18, the isokinetic test demonstrated that the uninjured leg was 7% stronger than the recovering leg in maximum strength (at 90°/s). At week 20, his second on-field test revealed a 2% difference in the hop test; at week 21, his third isokinetic test demonstrated a difference of 5% in maximum strength (at 90°/s), and he performed his second return-to-sport movement assessment, which revealed a 2% difference between the limbs. Since it was the end of the championship, the team asked him to partake in the playoffs; moreover, the player was very motivated to participate, in part because he was the captain of the team. Together with the player and the team, we decided to let him play, even though we were aware of a heightened risk of reinjury compared with what is reported in the literature, because of two factors: biological recovery times and the results of the tests we performed. In the decision-making process about an athlete's recovery time, it is important to balance the information available from the literature with the desires of the patient to avoid frustration.
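
The between-limb deficits reported above (23% at week 8, 15% at week 10, and so on) come from a simple symmetry calculation on the isokinetic test output. A minimal sketch of that arithmetic is shown below; the peak-torque values are hypothetical and chosen only so the result matches the week-8 figure.

```python
# Illustrative calculation of a between-limb strength deficit from isokinetic testing.
# Torque values are hypothetical (Nm at 90 deg/s).

def limb_deficit(uninjured: float, injured: float) -> float:
    """Percentage deficit of the injured limb relative to the uninjured limb."""
    return (uninjured - injured) / uninjured * 100.0

peak_torque_uninjured = 210.0   # hypothetical value, Nm
peak_torque_injured = 161.7     # hypothetical value, Nm

deficit = limb_deficit(peak_torque_uninjured, peak_torque_injured)
print(f"Quadriceps deficit: {deficit:.1f}%")   # -> 23.0%, matching the week-8 figure
```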

Keywords: ACL, football, rehabilitation, return to play

Procedia PDF Downloads 93
124 Monitoring of Educational Achievements of Kazakhstani 4th and 9th Graders

Authors: Madina Tynybayeva, Sanya Zhumazhanova, Saltanat Kozhakhmetova, Merey Mussabayeva

Abstract:

One of the leading indicators of education quality is the level of students' educational achievements. The modernization of the Kazakhstani education system has created the need to improve the national system for assessing the quality of education. The results of such assessment contribute greatly to addressing questions about the current state of the educational system in the country. The monitoring of students' educational achievements (MEAS) is the systematic measurement of the quality of education for compliance with the state obligatory standard of Kazakhstan. This systematic measurement is independent of educational organizations and was approved by order of the Minister of Education and Science of Kazakhstan. The MEAS was conducted in the regions of Kazakhstan for the first time in 2022 by the National Testing Centre. The measurement has no legal consequences either for students or for educational organizations. Students' achievements were measured in three subject areas: reading, mathematics, and science literacy. MEAS was held for the first time in April 2022; 105 thousand students from 1,436 schools in Kazakhstan took part in the testing. The monitoring was accompanied by a survey of students, teachers, and school leaders, with the goal of identifying which contextual factors affect learning outcomes. The testing was carried out in a computer-based format. The test tasks of MEAS are ranked according to three levels of difficulty: basic, medium, and high. Fourth graders were asked to complete 30 closed-type tasks; the average score was 21 points out of 30, meaning 70% of tasks were successfully completed. Ninth-grade students were given 75 test tasks in total; their results were comparatively lower, with a task success rate of 63%. MEAS did not reveal a statistically significant gap in results in terms of the language of instruction, territorial status, or type of school. A trend of narrowing gaps in these indicators has also been noted in recent international studies conducted across the country, in particular PISA for Schools in Kazakhstan. However, there is a regional gap in MEAS performance: the difference between the highest- and lowest-scoring regions was 11 percentage points in task completion in the 4th grade and 14 percentage points in the 9th grade. The results of 4th grade students in reading, mathematics, and science literacy were 71.5%, 70%, and 66.9%, respectively. The results of ninth graders in reading, mathematics, and science literacy were 69.6%, 54%, and 60.8%, respectively. The surveys revealed that students' educational achievements are considerably influenced by factors such as the subject competences of teachers, as well as the school climate and students' motivation. Thus, the results of MEAS indicate the need for an integrated approach to improving the quality of education. In particular, a combination of improving the content of curricula and textbooks, internal and external assessment of students' educational achievements, educational programs of pedagogical specialties, and advanced training courses is required.

Keywords: assessment, secondary school, monitoring, functional literacy, kazakhstan

Procedia PDF Downloads 83
123 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral fuel that is renewable and sustainable and has very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gases (GHG) and reduce dependency on fossil fuels. There are also other beneficial effects of biomass energy use, such as employment creation and pollutant reduction. However, most biomass materials are not capable of competing with fossil fuels in terms of energy content. High moisture content and high volatile matter yields make biomass a low-calorific fuel, which is a significant drawback relative to fossil fuels. Besides, the density of biomass is generally low, which makes transportation and storage difficult. These negative aspects of biomass can be overcome by thermal pretreatments that upgrade the fuel properties of biomass. Torrefaction is one such thermal process, in which biomass is heated up to 300ºC under non-oxidizing conditions to avoid burning of the material. The treated biomass is called biochar and has considerably lower contents of moisture, volatile matter, and oxygen compared to the parent biomass. Accordingly, the carbon content and the calorific value of biochar increase to a level comparable with that of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay of the structure, is mostly eliminated, and the surface properties of the biochar become hydrophobic upon torrefaction. In order to investigate the effectiveness of the torrefaction process on biomass properties, several biomass species, namely olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree), were chosen. The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, samples were first chopped and ground to a particle size below 250 µm. Then, samples were subjected to torrefaction in a horizontal tube furnace by heating from ambient temperature up to 200, 250, and 300ºC at a heating rate of 10ºC/min. The biochars obtained from this process were also tested by the methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing torrefaction temperature led to steady increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300ºC, whereas torrefaction at 250ºC was found to be optimal for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on their HHV. Increases in carbon content and reductions in oxygen content were also determined. The burning characteristics of the biochars were studied using thermal analysis. For this purpose, a TA Instruments SDT Q600 thermal analyzer was used, and the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass and that the resulting biochars have superior characteristics compared to the parent biomasses.
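
Torrefaction studies commonly summarize the fuel upgrade in terms of solid (mass) yield and energy yield. These metrics are not reported in the abstract, so the sketch below only illustrates the standard definitions with hypothetical masses and heating values; the 6065 kcal/kg biochar HHV is the value quoted above, while the raw-biomass HHV and the sample masses are assumptions.

```python
# Illustrative torrefaction yield calculations (standard definitions, hypothetical inputs).

def mass_yield(m_char_g: float, m_raw_g: float) -> float:
    """Solid (mass) yield of torrefaction, %."""
    return m_char_g / m_raw_g * 100.0

def energy_yield(mass_yield_pct: float, hhv_char: float, hhv_raw: float) -> float:
    """Energy yield = mass yield x (HHV of biochar / HHV of raw biomass), %."""
    return mass_yield_pct * hhv_char / hhv_raw

m_raw, m_char = 10.0, 6.5            # hypothetical sample masses, g
hhv_raw, hhv_char = 4800.0, 6065.0   # kcal/kg; 6065 is the OMR biochar value at 300 C

my = mass_yield(m_char, m_raw)
ey = energy_yield(my, hhv_char, hhv_raw)
print(f"Mass yield: {my:.1f}%  Energy yield: {ey:.1f}%")
```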

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 346
122 Tuberculosis (TB) and Lung Cancer

Authors: Asghar Arif

Abstract:

Lung cancer is recognized as one of the most common cancers, causing an annual mortality of about 1.2 million people worldwide. It is the most prevalent cancer in men and the third most common cancer among women (after breast and digestive cancers). Recent evidence has pointed to the inflammatory process as one of the potential factors in cancer. Tuberculosis (TB), pneumonia, and chronic bronchitis are among the most important inflammation-inducing conditions in the lungs, among which TB has a more profound role in the emergence of cancer. TB is one of the important causes of mortality throughout the world, and 205,000 deaths are reported annually due to this disease. Chronic inflammation and fibrosis due to TB can induce genetic mutations and alterations. The lung parenchyma is involved in both TB and lung cancer, and continuous cough in lung cancer, morphological vascular variations, lymphocytosis, and the generation of immune system mediators such as interleukins are all among the factors supporting the hypothesis of a role for TB in lung cancer. Some reports have shown that the induction of necrosis and apoptosis or TB reactivation, especially in patients with immunodeficiency, may increase IL-17 and TNF-α, which will either decrease p53 activity or increase the expression of Bcl-2, decrease Bax-T, and inhibit caspase-3 expression by decreasing the expression of mitochondrial cytochrome oxidase. It has also been indicated that, following injection of the BCG vaccine, the host immune system is reinforced; in particular, levels of gamma interferon, nitric oxide, and interleukin-2 are increased. Therefore, CD4+ lymphocyte function is improved, and the person will be immune to cancer. Numerous prospective studies have been conducted on the role of TB in lung cancer, and it appears that this disease contributes to that particular cancer. One of the main challenges of lung cancer is its correct and timely diagnosis. Unfortunately, the clinical symptoms (such as continuous cough, hemoptysis, weight loss, fever, chest pain, dyspnea, and loss of appetite) and the radiological images are similar in TB and lung cancer. Therefore, anti-TB drugs are routinely prescribed for such patients in countries with a high prevalence of TB, like Pakistan. Given the similarity in clinical symptoms and radiological findings with lung cancer, proper diagnosis is necessary for TB and for respiratory infections due to nontuberculous mycobacteria (NTM). Some apparently drug-resistant TB cases are, in fact, lung cancer or NTM lung infections. Acid-fast staining and histological examination of sputum and bronchial washings, culture, and polymerase chain reaction for TB are among the most important tools for the differential diagnosis of these diseases. Briefly, it is assumed that TB is one of the risk factors for cancer. Numerous studies have been conducted in this regard throughout the world, and a significant relationship has been observed between previous TB infection and lung cancer. However, to prove this hypothesis, further and more extensive studies are required. In addition, as the clinical symptoms and radiological findings of TB, lung cancer, and nontuberculous mycobacterial lung infections are similar, lung cancer and NTM infections can be misdiagnosed as TB.

Keywords: TB and lung cancer, TB people, TB survivors, TB and HIV/AIDS

Procedia PDF Downloads 50
121 Trauma Scores and Outcome Prediction After Chest Trauma

Authors: Mohamed Abo El Nasr, Mohamed Shoeib, Abdelhamid Abdelkhalik, Amro Serag

Abstract:

Background: Early assessment of the severity of chest trauma, whether blunt or penetrating, is of critical importance in predicting patient outcome. Various trauma scoring systems are widely available and are based on anatomical or physiological parameters to predict patient morbidity or mortality. Up till now, there is no ideal, universally accepted trauma score that can be applied in all trauma centers and is suitable for assessing the severity of chest trauma. Aim: Our aim was to compare various trauma scoring systems regarding their ability to predict morbidity and mortality in chest trauma patients. Patients and Methods: This was a prospective study of 400 patients with chest trauma who were managed at Tanta University Emergency Hospital, Egypt, over a period of 2 years (March 2014 to March 2016). The patients were divided into 2 groups according to the mode of trauma: blunt or penetrating. The collected data included age, sex, hemodynamic status on admission, intrathoracic injuries, and associated extrathoracic injuries. Patient outcomes, including mortality, need for thoracotomy, need for ICU admission, need for mechanical ventilation, length of hospital stay, and the development of acute respiratory distress syndrome, were also recorded. The relevant data were used to calculate the following trauma scores: (1) anatomical scores, including the Abbreviated Injury Scale (AIS), Injury Severity Score (ISS), New Injury Severity Score (NISS), and Chest Wall Injury Scale (CWIS); (2) physiological scores, including the Revised Trauma Score (RTS) and Acute Physiology and Chronic Health Evaluation II (APACHE II) score; (3) a combined score, the Trauma and Injury Severity Score (TRISS); and (4) a chest-specific score, the Thoracic Trauma Severity Score (TTSS). All these scores were analyzed statistically to determine their sensitivity and specificity and were compared regarding their predictive power for mortality and morbidity in blunt and penetrating chest trauma patients. Results: The overall mortality was 3.75% (15/400). Eleven patients (11/230) died in the blunt chest trauma group, while four (4/170) died in the penetrating trauma group. The mortality rate increased more than threefold, to 13% (13/100), in patients with severe chest trauma (ISS > 16). The physiological scores APACHE II and RTS had the highest predictive value for mortality in both blunt and penetrating chest injuries. The physiological score APACHE II, followed by the combined score TRISS, was more predictive of intensive care admission in penetrating injuries, while RTS was more predictive in blunt trauma. RTS also had a higher predictive value for the need for mechanical ventilation, followed by the combined score TRISS. The APACHE II score was more predictive of the need for thoracotomy in penetrating injuries, and the chest-specific score TTSS was more predictive in blunt injuries. The anatomical score ISS and the TTSS score were more predictive of prolonged hospital stay in penetrating and blunt injuries, respectively. Conclusion: Trauma scores that include physiological parameters have higher predictive power for mortality in both blunt and penetrating chest trauma. They are more suitable for assessing injury severity and predicting patient outcome.
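
For reference, two of the scores compared above can be reproduced from their published definitions: the ISS is the sum of squares of the three highest AIS grades in different body regions, and the RTS is a weighted sum of coded Glasgow Coma Scale, systolic blood pressure, and respiratory rate values. The sketch below uses the widely cited coding intervals and coefficients; the patient values are hypothetical, and this is not the study's own software.

```python
# Illustrative Injury Severity Score (ISS) and Revised Trauma Score (RTS) calculations.

def iss(region_ais: dict) -> int:
    """ISS = sum of squares of the three highest AIS grades in different body regions.
    Any AIS of 6 gives the maximum score of 75 by convention."""
    if any(a == 6 for a in region_ais.values()):
        return 75
    top3 = sorted(region_ais.values(), reverse=True)[:3]
    return sum(a * a for a in top3)

def _code(value, bins):
    """Map a physiological value to its 0-4 coded value."""
    for low, high, code in bins:
        if low <= value <= high:
            return code
    return 0

def rts(gcs: int, sbp: float, rr: float) -> float:
    """RTS = 0.9368*GCS_c + 0.7326*SBP_c + 0.2908*RR_c (coded 0-4)."""
    gcs_c = _code(gcs, [(13, 15, 4), (9, 12, 3), (6, 8, 2), (4, 5, 1), (3, 3, 0)])
    sbp_c = _code(sbp, [(90, 1e9, 4), (76, 89, 3), (50, 75, 2), (1, 49, 1), (0, 0, 0)])
    rr_c = _code(rr, [(10, 29, 4), (30, 1e9, 3), (6, 9, 2), (1, 5, 1), (0, 0, 0)])
    return 0.9368 * gcs_c + 0.7326 * sbp_c + 0.2908 * rr_c

# Hypothetical blunt chest trauma patient: AIS by body region, GCS, SBP (mmHg), RR (/min)
patient_ais = {"head": 2, "face": 0, "chest": 4, "abdomen": 2, "extremities": 3, "external": 1}
print("ISS:", iss(patient_ais))          # 4^2 + 3^2 + 2^2 = 29 (severe chest trauma, ISS > 16)
print("RTS:", round(rts(gcs=14, sbp=110, rr=24), 3))
```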

Keywords: chest trauma, trauma scores, blunt injuries, penetrating injuries

Procedia PDF Downloads 393
120 Anabasine Intoxication and its Relation to Plant Development Stages

Authors: Thaís T. Valério Caetano, João Máximo De Siqueira, Carlos Alexandre Carollo, Arthur Ladeira Macedo, Vanessa C. Stein

Abstract:

Nicotiana glauca, commonly known as wild tobacco or tobacco bush, belongs to the Solanaceae family. It is native to South America but has become naturalized in various regions, including Australia, California, Africa, and the Mediterranean. N. glauca is listed in the Global Invasive Species Database (GISD) and the Invasive Species Compendium (CABI). It is known for producing pyridine alkaloids, including anabasine, which is highly toxic. Anabasine is predominantly found in the leaves and can cause severe health issues such as neuromuscular blockade, respiratory arrest, and cardiovascular problems when ingested. Mistaken identification with edible plants like spinach has resulted in food poisoning cases in Israel and Brazil. Anabasine, a minor alkaloid constituent of tobacco, may contribute to tobacco addiction by mimicking or enhancing the effects of nicotine. Therefore, it is essential to investigate the production pattern of anabasine and its relationship to the developmental stages of the plant. This study aimed to establish the relationship between phenological plant age, cultivation place, and the increase in anabasine concentration, which can lead to human intoxication cases. In this study, N. glauca plants were collected from three different rural areas in Brazil over the course of a year to examine leaves at various stages of development. Samples were obtained from cultivated plants in Marilândia, Minas Gerais, Brazil, as well as from Divinópolis, Minas Gerais, Brazil, and Arraial do Cabo, Rio de Janeiro, Brazil. In vitro plants cultivated on MS medium were also included in the study. The collected leaves were dried, powdered, and stored. Alkaloid extraction was performed using a methanol and water mixture, followed by liquid-liquid extraction with chloroform. The anabasine content was determined by HPLC-DAD analysis with nicotine as a standard. The results indicated that anabasine production increases with the plant's development, peaking in adult leaves during the reproductive phase and declining afterward. Plants cultivated in vitro showed anabasine production similar to that of young leaves. The successful adaptation of N. glauca in new environments poses a global problem, and the correlation between anabasine production and the plant's developmental stages has been understudied. The substances produced by the plant can pose a risk to other species, especially when it is mistaken for edible plants. The findings from this study shed light on the pattern of anabasine production and its association with plant development, contributing to a better understanding of the potential risks associated with N. glauca and the importance of accurate identification.
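
Quantification by HPLC-DAD against a reference standard typically relies on a linear calibration of peak area versus concentration. The sketch below shows that generic workflow; the calibration points, sample peak area, and variable names are hypothetical illustrations, not the study's validated method or data.

```python
# Generic external-standard calibration for HPLC peak quantification (hypothetical data).
import numpy as np

# Hypothetical calibration standards: concentration (ug/mL) vs integrated peak area
std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
std_area = np.array([1.1e4, 2.2e4, 5.4e4, 1.09e5, 2.18e5])

slope, intercept = np.polyfit(std_conc, std_area, 1)   # least-squares line: area = slope*conc + intercept

def quantify(peak_area: float) -> float:
    """Back-calculate analyte concentration (ug/mL) from a sample peak area."""
    return (peak_area - intercept) / slope

sample_area = 7.6e4       # hypothetical anabasine peak area in a leaf extract
print(f"Estimated concentration: {quantify(sample_area):.1f} ug/mL")
```

Comparing such estimates across leaf ages, sites, and in vitro material is what allows the developmental pattern described above to be tracked.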

Keywords: nicotiana glauca graham, global invasive species database, alkaloids, toxic

Procedia PDF Downloads 57
119 Assessing Socio-economic Impacts of Arsenic and Iron Contamination in Groundwater: Feasibility of Rainwater Harvesting in Amdanga Block, North 24 Parganas, West Bengal, India

Authors: Rajkumar Ghosh

Abstract:

The present study focuses on conducting a socio-economic assessment of groundwater contamination by arsenic and iron and explores the feasibility of rainwater harvesting (RWH) as an alternative water source in the Amdanga Block of North 24 Parganas, West Bengal, India. The region is plagued by severe groundwater contamination, primarily due to excessive concentrations of arsenic and iron, which pose significant health risks to the local population. The study utilizes a mixed-methods approach, combining quantitative analysis of water samples collected from different locations within the Amdanga Block and socio-economic surveys conducted among the affected communities. The water samples are analyzed for arsenic and iron levels, while the surveys gather data on water usage patterns, health conditions, and socio-economic factors. The results reveal alarmingly high levels of arsenic and iron contamination in the groundwater, surpassing the World Health Organization (WHO) and Indian government's permissible limits. This contamination significantly impacts the health and well-being of the local population, leading to a range of health issues such as skin lesions, respiratory disorders, and gastrointestinal problems. Furthermore, the socio-economic assessment highlights the vulnerability of the affected communities due to limited access to safe drinking water. The findings reveal the adverse socio-economic implications, including increased medical expenditures, reduced productivity, and compromised educational opportunities. To address these challenges, the study explores the feasibility of rainwater harvesting as an alternative source of clean water. RWH systems have the potential to mitigate groundwater contamination by providing a sustainable and independent water supply. The assessment includes evaluating rainwater availability, analyzing the infrastructure requirements, and estimating the potential benefits and challenges associated with RWH implementation in the study area. The findings of this study contribute to a comprehensive understanding of the socio-economic impact of groundwater contamination by arsenic and iron, emphasizing the urgency to address this critical issue in the Amdanga Block. The feasibility assessment of rainwater harvesting serves as a practical solution to ensure a safe and sustainable water supply, reducing dependency on contaminated groundwater sources. The study's results can inform policymakers, researchers, and local stakeholders in implementing effective mitigation measures and promoting the adoption of rainwater harvesting as a viable alternative in similar arsenic- and iron-contaminated regions.

Keywords: contamination, rainwater harvesting, groundwater, sustainable water supply

Procedia PDF Downloads 76
118 Effects of Dietary Polyunsaturated Fatty Acids and Beta Glucan on Maturity, Immunity, and Fry Quality of Pabdah Catfish, Ompok pabda

Authors: Zakir Hossain, Saddam Hossain

Abstract:

A nutritionally balanced diet and selection of appropriate species are important criteria in aquaculture. The present study was conducted to evaluate the effects of diets containing polyunsaturated fatty acids (PUFAs) and beta-glucan on the growth performance, feed utilization, maturation, immunity, and early embryonic and larval development of the endangered Pabdah catfish, Ompok pabda. In this study, squid-extracted lipids and mushroom powder were used as the sources of PUFAs and beta-glucan, respectively, and two isonitrogenous diets were formulated: a basal or control (CON) diet and a treated (PBG) diet, both maintaining a 30% protein level. During the study period, the physicochemical conditions of the water were similar in each cistern, with temperature, pH, and dissolved oxygen (DO) of 26.5±2 °C, 7.4±0.2, and 6.7±0.5 ppm, respectively. The results showed that final mean body weight, final mean length gain, feed conversion ratio (FCR), specific growth rate (SGR), feed conversion efficiency (%), hepatosomatic index (HSI), kidney index (KI), and viscerosomatic index (VSI) were significantly (P<0.01 and P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. The length-weight relationship and relative condition factor (K) of O. pabda were significantly (P<0.05) affected by the PBG diet. The gonadosomatic index (GSI), sperm viability, blood serum calcium ion concentration (Ca²⁺), and vitellogenin level, used as indicators of fish maturation, were significantly (P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. During the spawning season, lipid granules and normal morphological structure were observed in the liver of the treated fish, whereas fewer lipid granules were observed in the liver of the control group. Immunity- and stress-resistance-related parameters, such as hematological indices, antioxidant activity, lysozyme level, respiratory burst activity, blood reactive oxygen species (ROS), complement activity (ACH50 assay), specific IgM, brain AChE, and plasma PGOT and PGPT enzyme activities, were significantly (P<0.01 and P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. The fecundity, fertilization rate (92.23±2.69%), hatching rate (87.43±2.17%), and survival (76.62±0.82%) of offspring were significantly higher (P˂0.05) in the PBG group than in the control. Consequently, early embryonic and larval development was better in the PBG-treated group than in the control. Therefore, the present study showed that the PUFA- and beta-glucan-enriched experimental diet was more effective, achieving better growth, feed utilization, maturation, immunity, and spawning performance in O. pabda.
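
Several of the indices listed above (SGR, FCR, and the somatic indices such as HSI, GSI, and VSI) have standard definitions in aquaculture. The sketch below shows how they are commonly calculated; the fish weights, feed amount, and trial length are hypothetical, and only the formulas reflect common practice rather than the authors' exact protocol.

```python
# Common aquaculture indices (standard definitions, hypothetical example values).
import math

def sgr(w_initial: float, w_final: float, days: int) -> float:
    """Specific growth rate, % body weight per day."""
    return (math.log(w_final) - math.log(w_initial)) / days * 100.0

def fcr(feed_given: float, weight_gain: float) -> float:
    """Feed conversion ratio: dry feed given / wet weight gain."""
    return feed_given / weight_gain

def somatic_index(organ_weight: float, body_weight: float) -> float:
    """Generic somatic index (HSI, GSI, VSI, ...), % of body weight."""
    return organ_weight / body_weight * 100.0

# Hypothetical O. pabda figures over a 90-day trial
w0, w1, feed = 8.5, 21.3, 19.8            # g, g, g of feed per fish
print(f"SGR: {sgr(w0, w1, 90):.2f} %/day")
print(f"FCR: {fcr(feed, w1 - w0):.2f}")
print(f"GSI: {somatic_index(1.9, w1):.1f} %")   # hypothetical gonad weight of 1.9 g
```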

Keywords: lipids, beta-glucan, fish maturity, fish immunity

Procedia PDF Downloads 75
117 Soft Pneumatic Actuators Fabricated Using Soluble Polymer Inserts and a Single-Pour System for Improved Durability

Authors: Alexander Harrison Greer, Edward King, Elijah Lee, Safa Obuz, Ruhao Sun, Aditya Sardesai, Toby Ma, Daniel Chow, Bryce Broadus, Calvin Costner, Troy Barnes, Biagio DeSimone, Yeshwin Sankuratri, Yiheng Chen, Holly Golecki

Abstract:

Although a relatively new field, soft robotics is experiencing a rise in applicability in the secondary school setting through the Soft Robotics Toolkit, shared fabrication resources, and a design competition. Exposing students outside of university research groups to this rapidly growing field allows the soft robotics industry to develop in new and imaginative ways. Soft robotic actuators have remained difficult to implement in classrooms because of their relative cost or difficulty of fabrication. Traditionally, a two-part molding system is used; however, this configuration often results in delamination. In an effort to make soft robotics more accessible to young students, we aim to develop a simple, single-mold method of fabricating soft robotic actuators from common household materials. These actuators are made by embedding a soluble polymer insert into silicone. The inserts can be made from hand-cut polystyrene, 3D-printed polyvinyl alcohol (PVA) or acrylonitrile butadiene styrene (ABS), or molded sugar. The insert is then dissolved using an appropriate solvent, such as water or acetone, leaving behind a negative form which can be pneumatically actuated. The resulting actuators are seamless, eliminating the instability of adhering multiple layers together. The benefit of this approach is twofold: it simplifies the process of creating a soft robotic actuator and, in turn, increases its effectiveness and durability. To quantify the increased durability of the single-mold actuator, it was tested against the traditional two-part mold. The single-mold actuator could withstand actuation at 20 psi for 20 times the duration of the traditional method. The ease of fabrication of these actuators makes them more accessible to hobbyists and students in classrooms. After developing these actuators, we applied them, in collaboration with a ceramics teacher at our school, to a glove used to transfer the nuanced hand motions used to throw pottery from an expert artist to a novice. We quantified the improvement in the users' pottery-making skill when wearing the glove using image analysis software. The seamless actuators proved to be robust in this dynamic environment. Seamless soft robotic actuators created by high school students show the applicability of the Soft Robotics Toolkit for secondary STEM education and outreach. Making students aware of what is possible through projects like this will inspire the next generation of innovators in materials science and robotics.

Keywords: pneumatic actuator fabrication, soft robotic glove, soluble polymers, STEM outreach

Procedia PDF Downloads 103
116 Monitoring and Improving Performance of Soil Aquifer Treatment System and Infiltration Basins: North Gaza Emergency Sewage Treatment Plant as Case Study

Authors: Sadi Ali, Yaser Kishawi

Abstract:

As part of Palestine, the Gaza Strip (365 km² and 1.8 million inhabitants) is considered a semi-arid zone that relies solely on the Coastal Aquifer. The Coastal Aquifer is the only source of water, with only 5-10% of it suitable for human use, which barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to develop a non-conventional water resource from treated wastewater to irrigate 1,500 hectares and serve over 100,000 inhabitants. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (a pressure line and 9 infiltration basins, IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Phase A has been functioning since April 2009. Since April 2009, a monitoring plan has been conducted to monitor the infiltration rate (I.R.) of the 9 basins. Nearly 23 million m³ of partially treated wastewater had been infiltrated by June 2014. It is important to maintain an acceptable rate to allow the basins to handle the incoming quantities (currently 10,000 m³ are pumped and infiltrated daily). The methodology applied was to review and analyze the collected data, including the I.R.s, the wastewater quality, and the drying-wetting schedule of the basins. One of the main findings is the relationship between the total suspended solids (TSS) at BLWWTP and the I.R. at the basins. Since April 2009, the basins achieved an average I.R. of about 2.5 m/day. The records then showed a decreasing pattern in the average rate until it reached a low of 0.42 m/day in June 2013. This was accompanied by an increase in the TSS concentration at the source, which rose above 200 mg/L. Reducing the TSS concentration (by cleaning the wastewater source ponds at the Biet Lahia WWTP site) directly improved the I.R. This was reflected in an improvement in the I.R. over the following 6 months, from 0.42 m/day to 0.66 m/day and then to nearly 1.0 m/day as the average of the last 3 months of 2013. The wetting-drying scheme of the basins was observed (3 days wetting and 7 days drying), along with the rainfall rates. Despite the difficulty of applying this scheme accurately, the flow to each basin was controlled to improve the I.R. The drying-wetting system affected the I.R. of individual basins and thus the overall system rate, which was recorded and assessed. Ploughing of the infiltration basins was also recommended at certain times to retain a certain infiltration level, as it breaks up the clogging layer that prevents infiltration. It is recommended to maintain proper quality of the infiltrated wastewater to ensure acceptable performance of the IBs. Continual maintenance of the settling ponds at BLWWTP, continual ploughing of the basins, and the application of soil treatment techniques at the IBs will improve the I.R.s. When the new WWTP is operational, effluent of a high standard (TSS 20 mg/L, BOD 20 mg/L, and TN 15 mg/L) will be infiltrated, which will enhance the I.R.s of the IBs due to the lower organic load.
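
The infiltration rates quoted above in m/day follow from the infiltrated volume, the wetted basin area, and the wetting time. The sketch below shows that calculation; the basin area and wetting duration are hypothetical values chosen only to illustrate the arithmetic around the reported 10,000 m³/day inflow.

```python
# Illustrative infiltration-rate calculation for an infiltration basin (hypothetical geometry).

def infiltration_rate(volume_m3: float, wetted_area_m2: float, days: float) -> float:
    """Average infiltration rate in m/day = infiltrated volume / (wetted area x wetting time)."""
    return volume_m3 / (wetted_area_m2 * days)

daily_inflow = 10_000.0      # m3/day pumped to the basins (from the abstract)
basin_area = 11_000.0        # m2 of wetted basin bottom -- hypothetical value
wetting_days = 1.0

ir = infiltration_rate(daily_inflow, basin_area, wetting_days)
print(f"Average infiltration rate: {ir:.2f} m/day")   # ~0.9 m/day with these assumptions
```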

Keywords: SAT, wastewater quality, soil remediation, North Gaza

Procedia PDF Downloads 216
115 Numerical Investigation of Thermal Energy Storage Panel Using Nanoparticle Enhanced Phase Change Material for Micro-Satellites

Authors: Jelvin Tom Sebastian, Vinod Yeldho Baby

Abstract:

In space, electronic devices are constantly exposed to radiation, which causes certain parts to fail or behave in unpredictable ways. To advance the thermal controllability of microsatellites, we need a new approach and a thermal control system that is smaller than those on conventional satellites and that demands no electric power. Heat exchange inside microsatellites is not as easy as in conventional satellites due to their smaller size. With only a slight mass gain and no electric power, accommodating heat using phase change materials (PCMs) is a strong candidate for solving microsatellites' thermal difficulties. In other words, PCMs can absorb or release heat in the form of latent heat, changing their phase and minimizing the temperature fluctuation around the phase change point. The main restriction for these systems is the low thermal conductivity of common PCMs. Because PCMs have low thermal conductivity, melting and solidification times increase, which is not suitable for specific applications such as electronics cooling. In order to increase the thermal conductivity, nanoparticles are introduced: adding nanoparticles to the base PCM increases its thermal conductivity, and an increase in weight concentration increases the thermal conductivity further. This paper numerically investigates a thermal energy storage panel with nanoparticle-enhanced phase change material (NePCM). Silver nanostructures were used to enhance the thermal properties of the base PCM, eicosane. Different weight concentrations (1, 2, 3.5, 5, 6.5, 8, and 10%) of silver-enhanced phase change material were considered. Both steady-state and transient analyses were performed to compare the characteristics of the nanoparticle-enhanced phase change material at different heat loads. Results showed that, in the steady state, the temperature near the front panel decreased and the temperature on the NePCM panel increased as the weight concentration increased: with the increase in thermal conductivity, more heat was absorbed into the NePCM panel. In the transient analysis, it was found that the effect of nanoparticle concentration on the maximum temperature of the system diminished, as the melting point of the material decreased with increasing weight concentration. However, for a maximum heat load of 20 W, the model with NePCM did not reach the melting point temperature, showing that the model with NePCM is capable of holding a larger heat load. In order to study the heat load capacity, double the load was applied: a maximum of 40 W was given in the first half of the cycle, and a constant 0 W in the other half. A higher temperature was obtained compared with the lower heat load. The panel maintained a constant temperature for a long duration, corresponding to the NePCM melting point. In both analyses, the uniformity of the temperature of the TESP was shown. Using Ag-NePCM makes it possible to maintain a constant peak temperature near the melting point. Therefore, by altering the weight concentration of the Ag-NePCM, it is possible to create the optimum operating temperature required for the effective working of the electronic components.
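
Because the abstract relates the conductivity gain to nanoparticle weight concentration, a common first-order check is the classical Maxwell (Maxwell-Garnett) model evaluated at the corresponding volume fraction. The abstract does not state which conductivity model or property values were used, so the sketch below is only an illustrative estimate with nominal, assumed properties for silver and eicosane.

```python
# Illustrative effective-conductivity estimate for Ag/eicosane NePCM using the Maxwell model.
# All property values are nominal assumptions; the study may have used measured data or another model.

def weight_to_volume_fraction(w: float, rho_p: float, rho_f: float) -> float:
    """Convert nanoparticle weight fraction w to volume fraction."""
    return (w / rho_p) / (w / rho_p + (1.0 - w) / rho_f)

def maxwell_k_eff(k_f: float, k_p: float, phi: float) -> float:
    """Classical Maxwell (Maxwell-Garnett) effective thermal conductivity of a dilute suspension."""
    return k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (k_p + 2 * k_f - phi * (k_p - k_f))

k_eicosane, rho_eicosane = 0.25, 790.0    # W/m.K, kg/m3 -- nominal assumptions
k_silver, rho_silver = 429.0, 10490.0     # W/m.K, kg/m3

for w_pct in (1, 2, 3.5, 5, 6.5, 8, 10):  # weight concentrations from the abstract
    phi = weight_to_volume_fraction(w_pct / 100.0, rho_silver, rho_eicosane)
    k = maxwell_k_eff(k_eicosane, k_silver, phi)
    print(f"{w_pct:>4}% wt -> phi = {phi:.4f}, k_eff ~ {k:.3f} W/m.K")
```

Maxwell's model is a dilute-suspension estimate, so measured enhancements for nanostructured fillers can be larger; the sketch only anchors the expected order of magnitude.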

Keywords: carbon-fiber-reinforced polymer, micro/nano-satellite, nanoparticle phase change material, thermal energy storage

Procedia PDF Downloads 187
114 Effects of Exposure to a Language on Perception of Non-Native Phonologically Contrastive Duration

Authors: Chuyu Huang, Itsuki Minemi, Kuanlin Chen, Yuki Hirose

Abstract:

It remains unclear how language speakers are able to perceive phonological contrasts that do not exist in their own language. This experiment uses the vowel-length distinction in Japanese, which is phonologically contrastive and co-occurs with tonal change in some cases. For speakers whose first language does not distinguish vowel length, e.g., Mandarin speakers, contrastive duration is usually misperceived. Two alternative hypotheses for how Mandarin speakers would perceive a phonological contrast that does not exist in their language make different predictions. The stress parameter model does not make a clear prediction about the impact of tonal type: Mandarin speakers will likely not perceive vowel length as well as native Japanese speakers do, but their performance might not correlate with tonal type, because the prosody of their language is distinctive, requiring users to encode lexical prosody and notice subtle differences in word prosody. By contrast, cue-based phonetic models predict that Mandarin speakers may rely on pitch differences, a secondary cue, to perceive vowel length. Two groups of Mandarin speakers, naive non-learners of Japanese and beginner learners, were recruited to participate in an AX discrimination task involving pairs of Japanese sound stimuli containing a phonologically contrastive environment. Participants were asked to indicate whether the two stimuli containing a vowel-length contrast (e.g., maapero vs. mapero) sounded the same. The experiment was bifactorial. The first factor contrasted three syllabic positions (syllable position: initial/medial/final), as this is likely to affect perceptual difficulty, as seen in previous studies; the second factor contrasted two pitch types (accent type): one with an accentual change that can be distinguished using the lexical tones of Mandarin (the different condition), and the other with no tonal distinction, differing only in vowel length (the same condition). The overall results showed a significant main effect of accent type in a linear mixed-effects model (β = 1.48, SE = 0.35, p < 0.05), which implies that Mandarin speakers tend to recognize vowel-length differences more successfully when the long-vowel counterpart takes on a tone that exists in Mandarin. The interaction between accent type and syllabic position was also significant (β = 2.30, SE = 0.91, p < 0.05), showing that vowel lengths in the different condition are more difficult to recognize in word-final position relative to initial position. The second statistical model, which compared naive speakers to beginners, was conducted with logistic regression to test the effect of participant group. A significant difference was found between the two groups (β = 1.06, 95% CI = [0.36, 2.03], p < 0.05). This study shows that: (1) Mandarin speakers are likely to use pitch cues to perceive vowel length in a non-native language, which is consistent with cue-based approaches; and (2) an exposure effect was observed: the beginner group achieved higher accuracy in long-vowel perception, implying an exposure effect despite their short period of language learning experience.
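
As an illustration of the statistical analysis described above, the sketch below fits a linear mixed-effects model of trial accuracy with accent type and syllable position as fixed effects and participants as a grouping factor, followed by a logistic regression comparing the two participant groups. The CSV file, column names, and the use of Python's statsmodels are hypothetical; the original analysis may have been run in different software, and a trial-level logistic mixed model would arguably be an even closer match than the linear one shown.

```python
# Illustrative reanalysis sketch for an AX discrimination experiment (hypothetical data layout).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level table, one row per trial, with columns:
#   subject, group ('naive' or 'beginner'), accent_type ('same' or 'different'),
#   position ('initial', 'medial', 'final'), correct (0 or 1)
data = pd.read_csv("ax_discrimination_trials.csv")   # hypothetical file name

# Linear mixed-effects model: accuracy ~ accent type x syllable position,
# with random intercepts for participants
mixed = smf.mixedlm("correct ~ accent_type * position", data, groups=data["subject"]).fit()
print(mixed.summary())

# Logistic regression comparing naive non-learners with beginner learners
data["beginner"] = (data["group"] == "beginner").astype(int)
logit = smf.logit("correct ~ beginner", data).fit()
print(logit.summary())
```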

Keywords: cue-based perception, exposure effect, prosodic perception, vowel duration

Procedia PDF Downloads 202
113 The Philosophical Hermeneutics Contribution to Form a Highly Qualified Judiciary in Brazil

Authors: Thiago R. Pereira

Abstract:

Philosophical hermeneutics is able to change the Brazilian judiciary because of its understanding of the characteristics of the human being. It is impossible for humans invested with the function of judge to make absolutely neutral decisions, but philosophical hermeneutics can assist the judge in making impartial decisions based on the federal constitution. Normative legal positivism imagined a neutral judge, a judge able to decide without any preconceived ideas, without allowing his or her background to exert influence. When a judge decides based on legal rules, the problem is smaller; but when there are no clear legal rules and the judge must decide based on principles, the risk is that the decision rests on what the judge personally believes. Solipsistically, this issue gains a huge dimension. Today, the Brazilian judiciary is independent, but there must be greater knowledge of philosophy and the philosophy of law, partly because the bigger problem is the unpredictability of the decisions made by the judiciary. Currently, when a lawsuit is filed, the result of the judgment is absolutely unpredictable; it is almost a gamble. There must be at least minimal legal certainty and predictability in judicial decisions, so that people with similar cases do not receive opposite sentences. Relativism, since classical antiquity, has held that multiple answers are possible. From the Greeks of the sixth century before Christ, through the Germans of the eighteenth century, and even today, the constitution has been established as the great law, the Grundnorm, and thus the relativism of life can be greatly reduced when the hermeneut uses the Constitution as an interpretive north, where all interpretation must pass through the hermeneutic constitutional filter. For a current philosophy of law, within a legal system with a Federal Constitution, there is a single correct answer to a specific case. The challenge is how to find this right answer. The only answer to this question is that we should use the constitutional principles. But in many cases a collision between principles will take place, and to resolve this issue the judge or the hermeneut will take a solipsistic path, using what they personally believe to be the right one. For obvious reasons, that conduct is not safe. Thus, a theory of decision is necessary to seek justice, and philosophical hermeneutics and the linguistic turn will be necessary to find the right answer, which is the constitutionally most appropriate response. The constitutionally appropriate response will not always be the answer that individuals agree with, but we must put aside our preferences and defend the answer that the Constitution gives us. Therefore, hermeneutics applied to law, in search of the constitutionally appropriate response, should be the safest way to avoid individualistic judicial decisions. The aim of this paper is to present the science of law starting from the linguistic turn and philosophical hermeneutics, moving away from legal positivism. The methodology used in this paper is qualitative, academic, and theoretical, grounded in philosophical hermeneutics, with the mission of conducting research that proposes a new way of thinking about the science of law. The research sought to demonstrate the difficulty the Brazilian courts have in departing from the centuries-old influence of legal positivism. Moreover, the research sought to demonstrate the need to think about the science of law from a contemporary perspective, in which the linguistic turn and philosophical hermeneutics will be the surest way to conduct the science of law in the present century.

Keywords: hermeneutic, right answer, solipsism, Brazilian judiciary

Procedia PDF Downloads 315
112 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science and their consequent constructive influence on the architectural discourse have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary, stochastic search are driven by the application of both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach for design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in applying an evolutionary process as a design tool is the ability of the simulation to maintain variation among design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency for either variation or optimization to be favored as the simulation progresses. In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently toward the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion into the development of an urban tissue comprising the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally employed to measure the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that the majority of the population is clustered around the mean and thus that there is limited variation within the population, while a higher standard deviation reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which utilizing the standard deviation as a fitness criterion can be advantageous for generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
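
To make the generative role of the standard deviation concrete, the sketch below tracks a population-level score that combines the mean of a toy design objective with the standard deviation of that objective across candidates inside a minimal evolutionary loop. The objective, weighting, and mutation scheme are hypothetical; the abstract does not specify how the authors fold the standard deviation into their solver, so this is only an illustration of using dispersion as a criterion alongside fitness.

```python
# Minimal sketch: tracking a population score that rewards dispersion (standard deviation)
# alongside mean fitness in a toy evolutionary loop. Objective and weights are hypothetical.
import random
import statistics

def objective(candidate):
    """Toy design objective to minimize (stand-in for an environmental/architectural metric)."""
    return sum((x - 0.5) ** 2 for x in candidate)

def population_score(pop, diversity_weight=0.5):
    """Lower mean objective is better; a larger standard deviation (more variation) is rewarded."""
    scores = [objective(c) for c in pop]
    return statistics.mean(scores) - diversity_weight * statistics.stdev(scores)

random.seed(1)
population = [[random.random() for _ in range(6)] for _ in range(20)]

for generation in range(30):
    # keep the fitter half by objective value, refill with mutated copies of survivors
    population.sort(key=objective)
    survivors = population[:10]
    children = [[min(1.0, max(0.0, x + random.gauss(0, 0.05))) for x in parent]
                for parent in random.choices(survivors, k=10)]
    population = survivors + children
    if generation % 10 == 0:
        print(f"gen {generation:2d}: population score = {population_score(population):.4f}")
```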

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 111
111 The Ethics of Physical Restraints in Geriatric Care

Authors: Bei Shan Lin, Chun Mei Lu, Ya Ping Chen, Li Chen Lu

Abstract:

This study explores the ethical issues concerning the use of physical restraint in geriatric care. Physical restraint use in medical care settings is a controversial form of treatment that has been practised for decades. There is no doubt that people nowadays are living longer than previous generations, and the ageing process is inevitable. Common conditions such as impaired comprehension, memory loss, and trouble expressing oneself contribute to the difficulty that older patients have in adapting to medical institutions. For these reasons, physical restraint is often used to reduce the risk of falling, manage wandering behaviour, prevent agitation, and promote patient compliance in geriatric care. Physical restraints can therefore be considered a common practice in the care of older patients. They are most commonly used for three specific purposes: procedural restraint, restraint to prevent falls, and behavioural restraint. Although there are well-documented instances of morbidity and mortality recognised as potential risks associated with physical restraint use, it continues to be permitted and used in healthcare, often in the name of safety. However, there is insufficient evidence supporting the effectiveness of physical restraint in reducing injuries from falls and controlling challenging behaviour in geriatric care settings. There is barely any empirical or scientific evidence, and few clinical trials have evaluated any improvement in patient safety following physical restraint. In difficult clinical situations, guidelines and practical suggestions can help healthcare professionals comply with requirements, make appropriate decisions, and exercise better judgement regarding physical restraint use. The following recommendations are given for physical restraint use in long-term care settings: an interdisciplinary team approach should be used to assess, evaluate, and treat underlying diseases and to determine whether treatment can ease the issues precipitating physical restraint use; a treatment plan with a clearly stated purpose should be made after weighing the risk of physical restraint use against the risk of not using it; a care plan for physical restraint has to include individualised treatment planning, informed consent, identification of and remedial action against negative consequences, regular assessment and modification, and the reduction and removal of risks; patients and their families must have the opportunity to consider and give voluntary informed consent prior to physical restraint utilisation; patients, family members, and healthcare professionals should be educated on the use and adverse consequences of physical restraints in order to raise awareness of potential risks and to take appropriate steps to prevent unnecessary harm; and after physical restraint removal, healthcare professionals should discuss with patients and family members their experience, feelings, and any anxieties regarding the treatment. Physical restraint should always be considered a last resort, as it deprives patients of freedom, control, and individuality. Healthcare professionals should emphasise individualised care, interdisciplinary decision-making, and creative and collaborative alternatives to promote older patients' rights, dignity, and overall well-being as much as possible.

Keywords: healthcare ethics, geriatric care, healthcare, physical restraint

Procedia PDF Downloads 116
110 Anabasine Intoxication and Its Relation to Plant Development Stages

Authors: Thaís T. Valério Caetano, Lívia de Carvalho Ferreira, João Máximo De Siqueira, Carlos Alexandre Carollo, Arthur Ladeira Macedo, Vanessa C. Stein

Abstract:

Nicotiana glauca, commonly known as wild tobacco or tobacco bush, belongs to the Solanaceae family. It is native to South America but has become naturalized in various regions, including Australia, California, Africa, and the Mediterranean. N. glauca is listed in the Global Invasive Species Database (GISD) and the Invasive Species Compendium (CABI). It is known for producing pyridine alkaloids, including anabasine, which is highly toxic. Anabasine is predominantly found in the leaves and can cause severe health issues such as neuromuscular blockade, respiratory arrest, and cardiovascular problems when ingested. Mistaken identity with edible plants like spinach has resulted in food poisoning cases in Israel and Brazil. Anabasine, a minor alkaloid constituent of tobacco, may contribute to tobacco addiction by mimicking or enhancing the effects of nicotine. Therefore, it is essential to investigate the production pattern of anabasine and its relationship to the developmental stages of the plant. This study aimed to establish the relationship between phenological plant age, cultivation site, and the increase in anabasine concentration, which can lead to human intoxication cases. In this study, N. glauca plants were collected from three different rural areas in Brazil over the course of one year to examine leaves at various stages of development. Samples were obtained from cultivated plants in Marilândia, Minas Gerais, Brazil, as well as from Divinópolis, Minas Gerais, Brazil, and Arraial do Cabo, Rio de Janeiro, Brazil. Plants cultivated in vitro on MS medium were also included in the study. The collected leaves were dried, powdered, and stored. Alkaloid extraction was performed using a methanol and water mixture, followed by liquid-liquid extraction with chloroform. The anabasine content was determined using HPLC-DAD analysis with nicotine as a standard. The results indicated that anabasine production increases with the plant's development, peaking in adult leaves during the reproduction phase and declining afterward. In vitro plants showed anabasine production similar to that of young leaves. The successful adaptation of N. glauca to new environments poses a global problem, and the correlation between anabasine production and the plant's developmental stages has been understudied. The presence of substances produced by the plant can pose a risk to other species, especially when it is mistaken for edible plants. The findings from this study shed light on the pattern of anabasine production and its association with plant development, contributing to a better understanding of the potential risks associated with N. glauca and the importance of accurate identification.
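
As a rough illustration of the quantification step described above, the following sketch fits an external-standard calibration curve from peak areas and converts a sample peak area into a concentration. The standard concentrations, peak areas, and dilution factor are hypothetical placeholders, not values from the study.

```python
import numpy as np

# Hypothetical external-standard calibration: peak areas measured for
# nicotine standards of known concentration (mg/L).
standard_conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0])       # mg/L
standard_area = np.array([12.1, 60.5, 119.8, 301.2, 602.4])  # arbitrary units

# Fit a straight line: area = slope * concentration + intercept
slope, intercept = np.polyfit(standard_conc, standard_area, 1)

def concentration_from_area(peak_area, dilution_factor=1.0):
    """Invert the calibration curve to estimate analyte concentration (mg/L)."""
    return dilution_factor * (peak_area - intercept) / slope

# Example: peak area measured for a leaf-extract sample (hypothetical)
sample_area = 240.0
print(f"Estimated concentration: {concentration_from_area(sample_area):.2f} mg/L")
```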

Keywords: alkaloid production, invasive species, nicotiana glauca, plant phenology

Procedia PDF Downloads 57
109 An Investigation into Why Very Few Small Start-Up Businesses Survive for Longer Than Three Years: An Explanatory Study in the Context of Saudi Arabia

Authors: Motaz Alsolaim

Abstract:

Nowadays, the challenges of running a start-up are very complex and perhaps more difficult than at any other time in the past. Changes in technology, manufacturing innovation, and product development, combined with intense competition and market regulations, are factors that have put pressure on classic ways of managing firms, thereby forcing change. As a result, the rate of closure, exit, or discontinuation of start-ups and young businesses is very high. Despite the essential role of small firms in an economy, they still tend to face obstacles that exert a negative influence on their performance and rate of survival. In fact, it is not easy to determine with any certainty the reasons why small firms fail; failure itself is not clearly defined, and its exact causes are hard to diagnose. In this study, therefore, the barriers to survival are examined broadly, covering personal/entrepreneurial, enterprise, and environmental factors, in order to determine the best solutions and make appropriate recommendations. Methodology: It could be argued that mixed methods can help improve entrepreneurship research by addressing challenges emphasised in previous studies and by achieving triangulation; calls for the combined use of quantitative and qualitative research have also been made in the entrepreneurship field, since entrepreneurship is a multi-faceted area of research. Therefore, an explanatory sequential mixed-methods design was used, consisting of an online questionnaire survey of entrepreneurs followed by semi-structured interviews. Over 750 surveys were collected, of which 296 were valid, followed by 13 interviews with senior government officials, successful entrepreneurs, and unsuccessful entrepreneurs. Findings: The first (quantitative) phase identified the obstacles to survival. Among the personal/entrepreneurial factors, past work experience and a lack of skills and interest are positive factors, while the gender, age, and education level of the owner are negative factors. Internal factors such as a lack of marketing research and weak business planning are positive. Among the environmental factors, difficulty in finding labour (economic) and social restrictions and traditions (socio-cultural) were found to be negative factors. On the other hand, from the political perspective, the cost of compliance and insufficient government plans were found to be positive factors for small business failure, and from the infrastructure perspective, a lack of skilled labour, a high level of bureaucracy, and a lack of information are positive factors. Conclusion: This paper serves to enrich the understanding of failure factors in the MENA region, and more precisely in Saudi Arabia, with the aim of minimising the probability of failure of small and micro entrepreneurial start-ups, in the light of the Saudi government's Vision 2030 plan.
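
As a sketch of how the quantitative phase's factor associations could be estimated, the example below fits a logistic regression of business failure on a few binary survey indicators. The data are synthetic and the variable names are invented for illustration; the study's actual instrument and analysis may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration only: 296 hypothetical survey responses, coded 1 = business failed.
rng = np.random.default_rng(0)
n = 296
df = pd.DataFrame({
    "failed": rng.integers(0, 2, n),
    "past_experience": rng.integers(0, 2, n),   # owner had prior work experience
    "weak_planning": rng.integers(0, 2, n),     # weak business planning reported
    "compliance_cost": rng.integers(0, 2, n),   # high cost of regulatory compliance
})

X = sm.add_constant(df[["past_experience", "weak_planning", "compliance_cost"]])
model = sm.Logit(df["failed"], X).fit(disp=False)

# Odds ratios > 1 suggest a factor positively associated with failure, < 1 negatively.
print(np.exp(model.params))
```

Odds ratios above 1 would correspond to what the abstract calls positive factors for failure, and ratios below 1 to negative factors.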

Keywords: small business barriers, start-up business, entrepreneurship, Saudi Arabia

Procedia PDF Downloads 164
108 Prevalence of Hemorrhagic Septicemia in Dromedary Camels (Camelus dromedarius) from Some Selected Farms in Benadir Region, Somalia

Authors: Abdirahman Barre, Abdihamid Salad Hassan, Iftin Abdi Mohamud, Abdirahman Mohamed Mohamud, Ahmed Adan Mohamed, Mukhtaar Mohamed Idow

Abstract:

Pasteurellosis (hemorrhagic septicemia) is a common, acutely fatal respiratory disease of camels caused by Pasteurella multocida type A or by several serotypes of Mannheimia haemolytica, which also affect other animals. The disease has been shown to spread between animals, across herds, and to humans, meaning that it is a zoonosis. The study aimed to establish the seroprevalence of pasteurellosis in selected camel-rearing districts of the Benadir Region. It was a cross-sectional study in which the study population was purposively chosen to consist of animals from three sub-districts of the Benadir Region, namely Daynile Township, Yaaqshid, and Kaxda. These sites were chosen because they normally handle many camels in a day, making it easy for the investigator to access the required number conveniently; it was also assumed that data collected from these for-slaughter camels were representative of the situation in each sub-district. A total of one hundred and sixty camels were tested using serological tests, the Rose Bengal Plate Test (RBPT) and the Complement Fixation Test (CFT). The serological tests were purposively chosen to increase the chances of picking up positive cases and also to compare their sensitivities with respect to camel serum, since they were originally meant for use on bovine serum. Blood samples (15 ml) were collected for serum harvesting from the jugular veins of the animals as they were waiting to be examined. The RBPT and CFT were run at a laboratory within the Department of Veterinary Medicine, University of Horsed, 21 October campus, the serum samples having been transported in a cool box. Out of an overall total of 300 serum samples, 180 samples were selected according to the sampling procedure and yielded eleven (11) positive results, amounting to a prevalence of 6.67%. For the three districts, the respective prevalences (averaged from the two serological tests run) were 7% (3/50) for Yaqshiid, 8% (3/60) for Deyniile, and 10% (3/70) for Kaxda. When the sensitivities of the two serological tests were compared, there was no significant difference between them with respect to detecting positive cases (p = 0.05). The study has demonstrated the presence of pasteurellosis in camels in the Benadir Region, and the authors recommend the use of the RBPT and CFT as screening tests, since they are cheap, quick, and easy to carry out; other available tests can then be used if one wants to establish the respective titers. Further detailed investigation needs to be conducted to identify the specific etiological agents causing pasteurellosis in camels so that measures can be instituted to optimize the benefit obtained from the camel sector.
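
The prevalence and test-comparison arithmetic reported above can be made explicit with a small sketch. The numbers below, including the per-test positive counts in the contingency table, are illustrative only; the abstract does not report the per-test split.

```python
from scipy.stats import chi2_contingency

def prevalence(positives, tested):
    """Apparent prevalence: fraction of tested animals that were seropositive."""
    return positives / tested

# Hypothetical example figures (not the study's raw data):
print(f"Overall prevalence: {prevalence(11, 165):.2%}")   # 11 positives among 165 tested

# Comparing two serological tests run on the same animals:
# rows = RBPT, CFT; columns = (positive, negative). Counts are illustrative.
table = [[6, 159],
         [5, 160]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square p-value: {p:.3f}")  # a large p-value indicates no significant difference
```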

Keywords: hemorrhagic septicemia, camel, prevalence, Benadir region, Somalia

Procedia PDF Downloads 39
107 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
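
As a minimal illustration of the attractor-network idea sketched above, the toy example below uses a classical Hopfield network: patterns are imprinted with a Hebbian rule, and noisy probes settle back to the nearest stored pattern as the energy decreases. This is a generic textbook construction, not the specific zero-energy formulation or the memory-network models discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                          # number of binary (+1/-1) units
patterns = rng.choice([-1, 1], size=(3, N))     # "memories" to be made stable

# Hebbian weight matrix: each stored pattern becomes (approximately) an attractor.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def energy(state):
    # Hopfield energy; stored patterns sit at local minima of this function.
    return -0.5 * state @ W @ state

def recall(probe, sweeps=10):
    state = probe.copy()
    for _ in range(sweeps):                     # asynchronous updates until settled
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern and let the dynamics settle back toward it.
noisy = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
recovered = recall(noisy)
print("overlap with original:", (recovered @ patterns[0]) / N)
print("energy of recovered state:", energy(recovered))
```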

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 238
106 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case

Authors: Henry Gonzalo Acurio Flores, Alvaro Nicolas Corral Naveda, Juan Francisco Fonseca Palacios

Abstract:

In Ecuador, subsidies on fossil fuels for the road transportation sector have long been part of the economy, mainly because of demagogy and populism from political leaders. It is now clear that the government cannot maintain the subsidies any longer, given its trade balance and general state budget, and the subsidies are a key barrier to the adoption of cleaner technologies. During the last few months, the elimination of subsidies has therefore been carried out gradually with the purpose of reaching international prices. It is expected that with this measure the population will opt for other means of transportation, which in turn will promote the use of electric vehicles, both private and public, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how the removal of subsidies will contribute to the promotion of electro-mobility: 1) a Business as Usual (BAU) scenario; 2) the introduction of 10,000 electric vehicles by 2025; 3) the introduction of 100,000 electric vehicles by 2030; 4) the introduction of 750,000 electric vehicles by 2040 (in all scenarios buses, taxis, light-duty vehicles, and private vehicles will be introduced, as established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used, which is suitable for determining the cost to the government of importing fossil fuel derivatives and the cost of the electricity needed to power the electric fleet that would replace them. The elimination of subsidies generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society. It will change the energy matrix and provide energy security for the country, and it will be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will reduce greenhouse gas (GHG) emissions from the transportation sector, considering its mitigation potential, which will improve inhabitants' quality of life by improving air quality and thereby reducing respiratory diseases associated with exhaust emissions, contributing to sustainability, the Sustainable Development Goals (SDGs), and compliance with the agreements established in the Paris Agreement at COP 21 in 2015. Electro mobility in Latin America and the Caribbean can only be achieved through the implementation of the right policies by central governments, which need to be accompanied by a National Urban Mobility Policy (NUMP) and a broader vision of developing holistic, sustainable transport systems at the local government level.
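
To make the scenario logic concrete, the toy sketch below compares avoided fuel-import costs with added electricity costs for each electric-vehicle introduction level. Every number in it (mileage, fuel and electricity consumption, unit costs) is a hypothetical assumption; the actual study relies on the LEAP platform and Ecuadorian data.

```python
# Toy scenario comparison, not a LEAP model: all figures below are assumed for illustration.
scenarios = {                      # electric vehicles introduced by the target year
    "BAU": 0,
    "2025": 10_000,
    "2030": 100_000,
    "2040": 750_000,
}

KM_PER_VEHICLE_YEAR = 15_000       # assumed annual mileage per vehicle
FUEL_L_PER_100KM = 8.0             # assumed fuel use of a displaced combustion vehicle
FUEL_IMPORT_COST_USD_PER_L = 0.9   # assumed import cost of fuel derivatives
EV_KWH_PER_100KM = 18.0            # assumed electric vehicle consumption
ELECTRICITY_USD_PER_KWH = 0.08     # assumed electricity generation cost

for name, evs in scenarios.items():
    km = evs * KM_PER_VEHICLE_YEAR
    fuel_saved = km / 100 * FUEL_L_PER_100KM * FUEL_IMPORT_COST_USD_PER_L
    power_cost = km / 100 * EV_KWH_PER_100KM * ELECTRICITY_USD_PER_KWH
    print(f"{name}: avoided fuel imports about ${fuel_saved / 1e6:.1f}M/yr, "
          f"added electricity about ${power_cost / 1e6:.1f}M/yr")
```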

Keywords: electro mobility, energy, policy, sustainable transportation

Procedia PDF Downloads 52
105 Monitoring and Improving Performance of Soil Aquifer Treatment System and Infiltration Basins of North Gaza Emergency Sewage Treatment Plant as Case Study

Authors: Sadi Ali, Yaser Kishawi

Abstract:

As part of Palestine, the Gaza Strip (365 km2 and 1.8 million inhabitants) is considered a semi-arid zone that relies solely on the Coastal Aquifer. The Coastal Aquifer is the only source of water, with only 5-10% of it suitable for human use, which barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority strategy is to develop a non-conventional water resource from treated wastewater to irrigate 1,500 hectares and serve over 100,000 inhabitants. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (a pressure line and 9 infiltration basins, IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Phase A has been functioning since April 2009. Since then, a monitoring plan has been conducted to track the infiltration rate (I.R.) of the 9 basins. Nearly 23 million m3 of partially treated wastewater were infiltrated up to June 2014. It is important to maintain an acceptable rate to allow the basins to handle the incoming quantities (currently 10,000 m3 are pumped and infiltrated daily). The methodology applied was to review and analyse the collected data, including the I.R.s, the wastewater quality, and the drying-wetting schedule of the basins. One of the main findings is the relation between the Total Suspended Solids (TSS) at BLWWTP and the I.R. at the basins. From April 2009, the basins achieved an average I.R. of about 2.5 m/day; the records then showed a decreasing pattern of the average rate until it reached a low of 0.42 m/day in June 2013. This was accompanied by an increase in TSS concentration at the source to above 200 mg/L. Reducing the TSS concentration (by cleaning the wastewater source ponds at the Biet Lahia WWTP site) directly improved the I.R., which was reflected in an improvement over the last six months from 0.42 m/day to 0.66 m/day and then to nearly 1.0 m/day as the average of the last three months of 2013. The wetting-drying scheme of the basins (3 days wetting and 7 days drying) was observed alongside the rainfall rates. Despite the difficulty of applying this scheme accurately, control of the flow to each basin was applied to improve the I.R. The drying-wetting system affected the I.R. of individual basins and thus the overall system rate, which was recorded and assessed. Ploughing activities at the infiltration basins were also recommended at certain times to retain a certain infiltration level, as ploughing breaks the confined clogging layer that prevents infiltration. It is recommended to maintain a proper quality of the infiltrated wastewater to ensure acceptable performance of the IBs. Continual maintenance of the settling ponds at BLWWTP, continual ploughing of the basins, and applying soil treatment techniques at the IBs will improve the I.R.s. When the new WWTP functions, a high-standard effluent quality (TSS 20 mg/l, BOD 20 mg/l, and TN 15 mg/l) will be infiltrated, which will enhance the I.R.s of the IBs due to the lower organic load.
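
The infiltration rate discussed above is essentially infiltrated volume divided by wetted basin area per unit time. A minimal sketch of that bookkeeping, using assumed basin areas rather than the plant's measured values, is given below.

```python
# Illustrative infiltration-rate bookkeeping; basin areas are assumed, not measured values.
DAILY_INFLOW_M3 = 10_000          # partially treated wastewater pumped per day
BASIN_AREAS_M2 = [4_500] * 9      # hypothetical wetted area of each of the 9 basins

def infiltration_rate(volume_m3, wetted_area_m2):
    """Average infiltration rate in m/day = infiltrated volume / wetted area."""
    return volume_m3 / wetted_area_m2

# With a 3-day-wetting / 7-day-drying rotation, only a subset of basins is wetted at once.
active = BASIN_AREAS_M2[:3]                       # e.g. three basins receiving flow today
rate = infiltration_rate(DAILY_INFLOW_M3, sum(active))
print(f"Average infiltration rate over the active basins: {rate:.2f} m/day")
```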

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 217
104 Kidney Supportive Care in Canada: A Constructivist Grounded Theory of Dialysis Nurses’ Practice Engagement

Authors: Jovina Concepcion Bachynski, Lenora Duhn, Idevania G. Costa, Pilar Camargo-Plazas

Abstract:

Kidney failure is a life-limiting condition for which treatment, such as dialysis (hemodialysis and peritoneal dialysis), can exact a tremendously high physical and psychosocial symptom burden. Kidney failure can be severe enough to require a palliative approach to care. The term supportive care can be used in lieu of palliative care to avoid the misunderstanding that palliative care is synonymous with end-of-life or hospice care. Kidney supportive care, encompassing advance care planning, is an approach to care that improves the quality of life for people receiving dialysis through early identification and treatment of symptoms throughout the disease trajectory. Advanced care planning involves ongoing conversations about the values, goals, and preferences for future care between individuals and their healthcare teams. Kidney supportive care is underutilized and often initiated late in this population. There is evidence to indicate nurses are not providing the necessary elements of supportive kidney care. Dialysis nurses’ delay or lack of engagement in supportive care until close to the end of life may result in people dying without receiving optimal palliative care services. Using Charmaz’s constructivist grounded theory, the purpose of this doctoral study is to develop a substantive theory that explains the process of engagement in supportive care by nurses working in dialysis settings in Canada. Through initial purposeful and subsequent theoretical sampling, 23 nurses with current or recent work experience in outpatient hemodialysis, home hemodialysis, and peritoneal dialysis settings drawn from across Canada were recruited to participate in two intensive interviews using the Zoom© teleconferencing platform. Concurrent data collection and data analysis, constant comparative analysis of initial and focused codes until the attainment of theoretical saturation, and memo-writing, as well as researcher reflexivity, have been undertaken to aid the emergence of concepts, categories, and, ultimately, the constructed theory. At the time of abstract submission, data analysis is currently at the second level of coding (i.e., focused coding stage) of the research study. Preliminary categories include: (a) focusing on biomedical care; (b) multi-dimensional challenges to having the conversation; (c) connecting and setting boundaries with patients; (d) difficulty articulating kidney-supportive care; and (e) unwittingly practising kidney-supportive care. For the conference, the resulting theory will be presented. Nurses working in dialysis are well-positioned to ensure the delivery of quality kidney-supportive care. This study will help to determine the process and the factors enabling and impeding nurse engagement in supportive care in dialysis to effect change for normalizing advance care planning conversations in the clinical setting. This improved practice will have substantive beneficial implications for the many individuals living with kidney failure and their supporting loved ones.

Keywords: dialysis, kidney failure, nursing, supportive care

Procedia PDF Downloads 82
103 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and video, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented in different ways as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has proved useful in hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations according to the modification of the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh and hence to establish the bin sizes. Several optimizing approaches were introduced concerning the mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and it was robust against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
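
A much-simplified sketch of norm-based spatial-domain embedding is shown below: vertex norms are computed relative to the object's centroid, grouped into bins, and each bin's norms are nudged outward or inward to encode one bit. Unlike the blind technique proposed in the abstract, this toy extractor needs the original mesh, and it omits the variance modification, distribution fitting, and optimization steps; it is only meant to convey the general idea.

```python
import numpy as np

def embed_watermark(vertices, bits, strength=0.02):
    """Very simplified norm-based embedding: one bit per bin of vertex norms."""
    center = vertices.mean(axis=0)
    norms = np.linalg.norm(vertices - center, axis=1)

    # Assign vertices to equal-population bins ordered by norm.
    bins = np.array_split(np.argsort(norms), len(bits))

    watermarked = vertices.copy()
    for bit, idx in zip(bits, bins):
        # Push norms in this bin slightly outward for a 1, inward for a 0.
        factor = 1.0 + strength if bit else 1.0 - strength
        watermarked[idx] = center + (vertices[idx] - center) * factor
    return watermarked

def extract_watermark(original, watermarked, n_bits):
    """Non-blind extraction for illustration: compare bin-mean norms against the original."""
    center = original.mean(axis=0)
    norms_o = np.linalg.norm(original - center, axis=1)
    norms_w = np.linalg.norm(watermarked - center, axis=1)
    bins = np.array_split(np.argsort(norms_o), n_bits)
    return [int(norms_w[idx].mean() > norms_o[idx].mean()) for idx in bins]

# Round trip on a random point cloud standing in for mesh vertices.
verts = np.random.default_rng(2).normal(size=(500, 3))
payload = [1, 0, 1, 1, 0, 0, 1, 0]
print(extract_watermark(verts, embed_watermark(verts, payload), len(payload)))
```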

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 138
102 Osteosuture in Fixation of Displaced Lateral Third Clavicle Fractures: A Case Report

Authors: Patrícia Pires, Renata Vaz, Bárbara Teles, Marco Pato, Pedro Beckert

Abstract:

Introduction: The management of lateral third clavicle fractures can be challenging due to the difficulty of distinguishing subtle variations in the fracture pattern, which may be suggestive of potential fracture instability. These fractures occur most often in men between 30 and 50 years of age, and in individuals over 70 years of age their distribution is equal between men and women. They account for 10%-30% of all clavicle fractures and roughly 30%-45% of all clavicle nonunions. Lateral third clavicle fractures may be treated conservatively or surgically, and there is no gold standard, although the risk of nonunion or pseudoarthrosis influences the recommendation of surgical treatment when these fractures are unstable. There are many strategies for surgical treatment, including locking plates, hook plate fixation, coracoclavicular fixation using suture anchors, devices or screws, tension band fixation with suture or wire, transacromial Kirschner wire fixation, and arthroscopically assisted techniques. When taking the hardware into consideration, we must not disregard that obtaining adequate lateral fixation of small fragments is a difficult task, and plates are more often associated with local irritation. The aim of appropriate treatment is to ensure fracture healing and a rapid return to preinjury activities of daily living but, as explained, definitive treatment strategies have not been established, and the variety of techniques available adds to the discussion of this topic. Methods and Results: We present the clinical case of a 43-year-old man diagnosed with a lateral third clavicle fracture (Neer IIC) after falling onto his right shoulder from a bicycle. He was operated on three days after the injury; through temporary K-wire fixation and indirect reduction using a ZipTight, he underwent osteosynthesis with an interfragmentary figure-of-eight tension band with polydioxanone suture (PDS). Two weeks later, there was good alignment. He kept the sling until 6 weeks post-op, avoiding strenuous activity. At 7 weeks post-op, alignment remained good, and physiotherapy exercises were started. After 10 months, he had no limitation in mobility and no pain, and he returned to work with complete recovery of strength. Conclusion: Some distal clavicle fractures may be treated conservatively, but it is widely accepted that unstable fractures require surgical treatment to obtain superior clinical outcomes. In the clinical case presented, the authors chose an osteosuture technique due to the fracture pattern and its location. Since there is no consensus on the preferred fixation method, it is important for surgeons to be skilled in various techniques and to decide with their patient which approach is most appropriate, weighing the risk-benefit of each method. For instance, with the suture technique used there is no wire migration or breakage, and it does not require reoperation for hardware removal; there is also less tissue exposure, since it requires a smaller approach compared to plate fixation, and it avoids the cuff tears associated with the hook plate. The good clinical outcome in this case report supports the consideration of this method as a therapeutic option.

Keywords: lateral third, clavicle, suture, fixation

Procedia PDF Downloads 44
101 Spatial Variability of Soil Metal Contamination to Detect Cancer Risk Zones in Coimbatore Region of India

Authors: Aarthi Mariappan, Janani Selvaraj, P. B. Harathi, M. Prashanthi Devi

Abstract:

Anthropogenic modification of the urban environment has increased greatly in recent years in order to sustain the growing human population. Intense industrial activity, permanent and heavy road traffic, a developed subterranean infrastructure network, and particular land use patterns are some of its specific characteristics. Every day, the urban environment is polluted by more or less toxic emissions and by organic or metal wastes discharged from industrial, commercial, and municipal activities. When these eventually deposit into the soil, the physical and chemical properties of the surrounding soil are changed, transforming it into an indicator of human exposure. Metals are non-degradable and accumulate in soil because deposits occur regularly as a result of continuous human activity. For this reason, metals are a contaminant factor for soil when they persist over a long period of time and a possible danger to inhabitants' health on prolonged exposure. Metals accumulated in contaminated soil may be transferred to humans directly, by inhaling dust raised from the topsoil, by ingestion, or by dermal contact, and indirectly, through plants and animals grown on contaminated soil and used for food. Some metals, like Cu, Mn, and Zn, are beneficial for human health and represent a danger only if their concentration exceeds permissible levels, but other metals, like Pb, As, Cd, and Hg, are toxic even at trace levels, causing gastrointestinal and lung cancers. In urban areas, metals can be emitted from a wide variety of sources, including industrial, residential, and commercial activities. Our study interrogates the spatial distribution of heavy metals in soil in relation to their permissible levels and their association with the health risk to the urban population of Coimbatore, India. The Coimbatore region is a high cancer risk zone, and case records of gastrointestinal and respiratory cancer patients were collected from hospitals and geocoded in ArcGIS 10.1. The records of patients residing within the urban limits were retained and checked for disease history based on their diagnosis and treatment. A disease map of cancer was prepared to show the disease distribution. It was observed that in our study area Cr, Pb, As, Fe, and Mg exceeded their permissible levels in the soil. Using spatial overlay analysis, a relationship between environmental exposure to these potentially toxic elements in soil and the cancer distribution in Coimbatore district was established to show areas of cancer risk. In this way, our study throws light on the impact of prolonged exposure to soil contamination in urban zones, exploring the possibility of detecting cancer risk zones and creating awareness of cancer risk among the exposed groups.
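
Before any spatial overlay, sampling sites are typically flagged where measured concentrations exceed permissible limits. The sketch below shows that screening step with invented limits and readings; the study's actual thresholds, data, and GIS overlay in ArcGIS are not reproduced here.

```python
import pandas as pd

# Hypothetical permissible limits (mg/kg) and soil sample readings; not the study's data.
LIMITS = {"Cr": 100, "Pb": 85, "As": 20}

samples = pd.DataFrame({
    "site": ["S1", "S2", "S3"],
    "lat":  [11.02, 11.05, 10.99],
    "lon":  [76.95, 76.98, 76.93],
    "Cr":   [140, 60, 210],
    "Pb":   [90, 40, 120],
    "As":   [25, 10, 18],
})

metals = [m for m in LIMITS if m in samples.columns]
exceed = samples[metals].gt(pd.Series(LIMITS))   # True where a limit is exceeded
samples["n_exceedances"] = exceed.sum(axis=1)
risk_sites = samples[samples["n_exceedances"] > 0]

# These flagged coordinates would then be overlaid with geocoded cancer cases in a GIS.
print(risk_sites[["site", "lat", "lon", "n_exceedances"]])
```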

Keywords: soil contamination, cancer risk, spatial analysis, India

Procedia PDF Downloads 381
100 Cultural Knowledge Transfer of the Inherited Karen Backstrap Weaving for the 4th Generation of a Pwo Karen Community

Authors: Suphitcha Charoen-Amornkitt, Chokeanand Bussracumpakorn

Abstract:

The succession of Karen backstrap weaving has gradually declined due to the difficulty of the weaving techniques and the relocation of the young generation. The Yang Nam Klat Nuea community, Nong Ya Plong District, Phetchaburi, is a Pwo Karen community seriously confronted with the loss of this cultural heritage. A group of weavers was therefore formed to revive the knowledge of weaving. However, they have gradually been confronted with assimilation into mainstream culture, driven by the desire for market acceptance, which has pushed the culture toward extinction through the disappearance of weaving details and techniques. Although there are practical solutions for passing on the Karen weaving culture, i.e., product development, community improvement, knowledge improvement, and knowledge transfer, people in the community cannot fulfil their deeper intention regarding the weaving inheritance, as most solutions have focused on developing commercial products and generating income rather than on passing on their knowledge. This research employed qualitative user research with in-depth user interviews to study the succession of communal knowledge transfer through the involved internal parties, i.e., four expert weavers, three young weavers, and three 4th generation villagers. The purpose is to explore the villagers' relationship with and mindset towards the culture with respect to specific issues, including the psychology of culture, core knowledge and learning methods, cultural inheritance, and cultural engagement. The findings show that the existing models of knowledge management mostly focused on tangible strategies that show progress in the short term, such as direct teaching and consistent practice, while the motivation and passion of the inheritors were neglected; the research also found that young people who are profoundly connected with the textile culture have a stronger intention to continue it. Therefore, this research suggests both internal and external solutions for the community. Regarding the internal solutions, the family, the weaving group, and the school have important roles in engaging young villagers through activities that support the cultivation of Karen history, the understanding of their identities, and the adoption of the culture as part of daily life. At the same time, collecting all of the knowledge in archives, e.g., recorded videos, instructions, and books, can be crucial in preventing the culture from extinction. Regarding the external solutions, this study suggests that working with social media will strengthen intimacy with the textile culture, while the community should step back from marketing competition and instead promote cultural experiences to create a new market position. In conclusion, this research intends to explore the causes and motivations that support the transfer of the culture to the 4th generation villagers and to raise awareness of cultural diversity in society. With these suggestions and the desire to strengthen pride and confidence in the culture, the community agrees that strengthening the relationships between the young villagers and the weaving culture can bring attention and interest back to weaving.

Keywords: Pwo Karen textile culture, backstrap weaving succession, cultural inheritance, knowledge transfer, knowledge management

Procedia PDF Downloads 64
99 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide Can Prevent Millions of Premature Deaths Per Year

Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans

Abstract:

Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter of less than 2.5μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations; previously, no concentration level had been defined below which health damage can be fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and in the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US, and other countries worldwide and to estimate what measures will be needed to substantially reduce premature mortality. We investigated premature mortality attributable to fine particulate matter (PM2.5) among adults (≥ 30 years) and children (< 5 years), applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a 'safe' annual mean PM2.5 threshold of 7.3μg/m3. We estimate the global premature mortality from PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010. Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US, and other countries. To estimate potential reductions in mortality rates, we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits, due to the contribution of natural sources (mainly airborne desert dust) to total PM2.5 and therefore to mortality. The annual mean EU limit of 25μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US, and 20% in the EU. Implementing the WHO AQG of 10μg/m3 would reduce global premature mortality by 54%, with reductions of 76% in China and 59% in India; in the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes to the lower PM2.5 standards can have major impacts on global mortality rates.
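
The attributable-mortality arithmetic behind such estimates can be illustrated with a toy concentration-response function. The sketch below uses a simple log-linear relative-risk curve above the 7.3 μg/m3 threshold with a made-up coefficient; the study itself uses the Global Burden of Disease 2010 integrated exposure-response functions, so the numbers here are placeholders only.

```python
import math

THRESHOLD = 7.3        # 'safe' annual mean PM2.5 threshold (ug/m3) used in the study
BETA = 0.008           # placeholder log-linear coefficient, not the GBD-2010 function

def relative_risk(pm25):
    """Toy concentration-response: no excess risk below the threshold."""
    return math.exp(BETA * max(pm25 - THRESHOLD, 0.0))

def attributable_deaths(pm25, baseline_deaths):
    rr = relative_risk(pm25)
    attributable_fraction = (rr - 1.0) / rr
    return baseline_deaths * attributable_fraction

# Hypothetical region: 50 ug/m3 annual mean PM2.5 and 200,000 baseline deaths
# from the relevant cardiovascular, respiratory, and lung cancer causes.
print(f"Attributable deaths: {attributable_deaths(50.0, 200_000):,.0f}")

# Benefit of meeting the WHO guideline of 10 ug/m3 in the same hypothetical region:
avoided = attributable_deaths(50.0, 200_000) - attributable_deaths(10.0, 200_000)
print(f"Premature deaths avoided: {avoided:,.0f}")
```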

Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality

Procedia PDF Downloads 290
98 Scenarios of Digitalization and Energy Efficiency in the Building Sector in Brazil: 2050 Horizon

Authors: Maria Fatima Almeida, Rodrigo Calili, George Soares, João Krause, Myrthes Marcele Dos Santos, Anna Carolina Suzano E. Silva, Marcos Alexandre Da

Abstract:

In Brazil, the building sector accounts for one-sixth of energy consumption and 50% of electricity consumption. This complex sector, with several driving actors, plays an essential role in the country's economy. Currently, digitalization readiness in this sector is still low, mainly due to the high investment costs and the difficulty of estimating the benefits of digital technologies in buildings. Nevertheless, the potential contribution of digitalization to increasing energy efficiency in the building sector in Brazil has been pointed out as relevant in the political and sectoral contexts, over both the medium- and long-term horizons. To contribute to the debate on the possible evolving trajectories of digitalization in the building sector in Brazil and to support the formulation or revision of current public policies and managerial decisions, three future scenarios were created to anticipate the potential energy efficiency in the building sector in Brazil due to digitalization by 2050. This work presents these scenarios as a basis for foresighting the potential energy efficiency in this sector according to different digitalization paces, slow, moderate, or fast, in the 2050 horizon. A methodological approach was proposed to create alternative prospective scenarios, combining the Global Business Network (GBN) and the Laboratory for Investigation in Prospective Strategy and Organisation (LIPSOR) methods. This approach consists of seven steps: (i) definition of the question to be foresighted and the time horizon to be considered (2050); (ii) definition and classification of a set of key variables, using prospective structural analysis; (iii) identification of the main actors with an active role in the digital and energy spheres; (iv) characterization of the current situation (2021) and identification of the main uncertainties considered critical for the development of alternative future scenarios; (v) scanning of possible futures using morphological analysis; (vi) selection and description of the most likely scenarios; and (vii) foresighting of the potential energy efficiency in each of the three scenarios, namely slow digitalization, moderate digitalization, and fast digitalization. Each scenario begins with a core logic and then encompasses potentially related elements, including the potential energy efficiency. The first scenario refers to digitalization at a slow pace, with induction by the government limited to public buildings. In the second scenario, digitalization is implemented at a moderate pace, induced by the government in public, commercial, and service buildings through regulation integrating digitalization and energy efficiency mechanisms. Finally, in the third scenario, digitalization in the building sector is implemented at a fast pace and is strongly induced by the government, but with broad participation of private investment and accelerated adoption of digital technologies. Under the slow pace of digitalization, the energy efficiency potential stays below 10% of the total of 161 TWh by 2050. In the moderate digitalization scenario, the potential reaches 20 to 30% of the total 161 TWh by 2050, and in the fast digitalization scenario it reaches 30 to 40% of the total 161 TWh by 2050.
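
The closing figures are simple shares of the 161 TWh projection; the short sketch below makes that arithmetic explicit, using the percentage ranges quoted for each scenario.

```python
TOTAL_CONSUMPTION_TWH_2050 = 161   # projected building-sector consumption used in the scenarios

scenario_shares = {                # efficiency potential as a share of the total, from the text
    "slow digitalization": (0.00, 0.10),
    "moderate digitalization": (0.20, 0.30),
    "fast digitalization": (0.30, 0.40),
}

for name, (low, high) in scenario_shares.items():
    print(f"{name}: {low * TOTAL_CONSUMPTION_TWH_2050:.0f}"
          f"-{high * TOTAL_CONSUMPTION_TWH_2050:.0f} TWh potential by 2050")
```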

Keywords: building digitalization, energy efficiency, scenario building, prospective structural analysis, morphological analysis

Procedia PDF Downloads 86