Search results for: time-varying feedback
127 Transformation of Unstable Aluminum Oxyhydroxides into Ultrafine α-Al2O3 in the Presence of Various Seeds
Authors: T. Kuchukhidze, N. Jalagonia, Z. Phachulia, R. Chedia
Abstract:
Ceramics obtained on the basis of aluminum oxide have a wide application range because of their unique properties, for example, wear resistance, dielectric characteristics, and the ability to operate at high temperatures and in corrosive atmospheres. Low-temperature synthesis of α-Al2O3 is an energy-economical process and is topical for developing corundum ceramics fabrication technologies. In the present work, the possibilities of low-temperature transformation of oxyhydroxides into α-Al2O3 in the presence of small amounts of rare-earth element compounds (also Th, Re) are discussed. Unstable aluminum oxyhydroxides were obtained by hydrolysis of aluminium isopropoxide, nitrates, sulphate, and chloride in an alkaline environment at 80-90 °C. β-Al(OH)3 was obtained from aluminum powder by ultrasonic treatment. Drying of the oxyhydroxide sol was conducted in the presence of various types of seeds: neodymium, holmium, thorium, lanthanum, cerium, gadolinium, and dysprosium nitrates and rhenium carbonyls were added to the sol specimens in amounts of 0.1-0.2% (mass), calculated on the metals. Annealing of the obtained gels was carried out at 70-1100 °C for 2 hrs. The same specimen transforms into α-Al2O3 at 1100 °C. At this temperature, in the presence of lanthanum and gadolinium, the transformation proceeds to 70-85%. In the presence of thorium, stabilization of the γ- and θ-phases takes place. It was established that thorium inhibits α-phase generation at 1100 °C, whereas in all other doped specimens the α-phase is generated at lower temperatures (1000-1050 °C). Synthesis of various compounds and simultaneous consolidation were carried out in an OXY-GON furnace, in which composite materials containing oxide and non-oxide components, close to theoretical values, were obtained.
During the work, the following devices were used: an X-ray diffractometer DRON-3M (Cu-Kα, Ni filter, 2°/min), a high-temperature vacuum furnace OXY-GON, electronic scanning microscopes Nikon ECLIPSE LV 150 and NMM-800TRF, a planetary mill Pulverisette 7 premium line, a SHIMADZU Dynamic Ultra Micro Hardness Tester DUH-211S, and an Analysette 12 DynaSizer.
Keywords: α-Alumina, combustion, consolidation, phase transformation, seeding.
126 High Efficiency Solar Thermal Collectors Utilization in Process Heat: A Case Study of Textile Finishing Industry
Authors: Gökçen A. Çiftçioğlu, M. A. Neşet Kadırgan, Figen Kadırgan
Abstract:
Solar energy, since it is available every day, is seen as one of the most valuable renewable energy resources; the energy of the sun should therefore be used efficiently in various applications. The best-known applications that use solar energy are heating water and spaces. High-efficiency solar collectors need appropriate selective surfaces to absorb the heat. The selective surfaces (Selektif-Sera) used in this study are applied to flat collectors and are produced by a roll-to-roll, cost-effective coating of nano-nickel layers developed at Selektif Teknoloji Co. Inc. The efficiency of flat collectors using Selektif-Sera absorbers was calculated in collaboration with the Institute for Solar Technik Rapperswil, Switzerland. High energy consumption in industry is mostly caused by low-temperature processes, and there is considerable research effort to minimize energy use through renewable energy sources such as solar energy. A feasibility study is presented to establish the potential of solar thermal energy utilization in the textile industry using these solar collectors. For the feasibility calculations presented in this study, a textile dyeing and finishing factory located in Kahramanmaras was selected, since geographic location was an important factor: Kahramanmaras is located in the southeastern part of Turkey and thus has great potential for long periods of solar illumination. Since the collector area is limited by the available area in the factory, a hybrid heat-generating system (lignite/solar thermal) was preferred in the calculations of this study to be more realistic. During the feasibility work, the calculations took into account the preheating process, where well water is heated from 15 °C to 30-40 °C using hot process waters in heat exchangers; the preheated water is then heated again by the high-efficiency solar collectors.
An economic comparison between lignite use and solar thermal collector use was carried out to determine the system that operates most efficiently. The optimum design of solar thermal systems was studied as a function of the optimum collector area. It was found that the solar thermal system is more economic and efficient than lignite use alone. The return on investment time is calculated as 5.15 years.
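The preheating step lends itself to a simple energy balance, and the payback figure above is the standard undiscounted ratio of investment to yearly savings. The sketch below illustrates both calculations; all cost figures are invented for illustration, not values from the study:

```python
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K), specific heat of liquid water

def preheat_energy_kwh(mass_kg, t_in=15.0, t_out=35.0):
    """Energy needed to heat mass_kg of well water from t_in to t_out (deg C), in kWh."""
    joules = mass_kg * WATER_SPECIFIC_HEAT * (t_out - t_in)
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

def simple_payback_years(capital_cost, annual_savings):
    """Simple (undiscounted) payback period: investment divided by yearly savings."""
    return capital_cost / annual_savings

# Heating one tonne of well water from 15 to 35 deg C:
load = preheat_energy_kwh(1000.0)  # about 23.3 kWh
```

A discounted cash-flow analysis would refine this, but the simple ratio is the metric behind a figure such as 5.15 years.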
Keywords: Solar energy, heating, solar heating.
125 Wind Energy Development in the African Great Lakes Region to Supplement the Hydroelectricity in the Locality: A Case Study from Tanzania
Authors: R.M. Kainkwa
Abstract:
The African Great Lakes Region refers to the zone around lakes Victoria, Tanganyika, Albert, Edward, Kivu, and Malawi. The main source of electricity in this region is hydropower, whose systems are generally characterized by relatively weak, isolated power schemes, poor maintenance, and technical deficiencies, with limited electricity infrastructure. Most of the hydro sources are rain-fed, and as such there is normally a deficiency of water during dry seasons and extended droughts. In such calamities, fossil fuel sources, in particular petroleum products and natural gas, are normally used to rescue the situation; but apart from being nonrenewable, they also release huge amounts of greenhouse gases into the environment, which in turn accelerates global warming, now at an alarming stage. Wind power is an ample, renewable, widely distributed, clean, and free energy source that does not consume or pollute water. Wind-generated electricity is one of the most practical and commercially viable options for grid-quality, utility-scale electricity production. However, the main shortcoming of electric wind power generation is the fluctuation of its output in both space and time. Before deciding to establish a wind park at a site, the wind speed characteristics there should therefore be known thoroughly, as well as the local demand and transmission capacity. The main objective of this paper is to use monthly average wind speed data collected from one prospective site within the African Great Lakes Region to demonstrate that the available wind power there is high enough to generate electricity. The mean monthly values were calculated from records gathered on an hourly basis over a period of 5 years (2001 to 2005) at a site in Tanzania. The records, collected at a height of 2 m, were projected to a height of 50 m, the standard hub height of wind turbines.
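The two steps described above, projecting a 2 m measurement up to a 50 m hub height and converting wind speed to available power density, can be sketched as follows. The power-law exponent and the air density are common textbook assumptions, not values reported in the study:

```python
AIR_DENSITY = 1.225  # kg/m^3: standard sea-level air density (assumed)
ALPHA = 1.0 / 7.0    # Hellmann power-law exponent for open terrain (assumed)

def extrapolate_wind_speed(v_ref, h_ref, h_target, alpha=ALPHA):
    """Project a wind speed measured at height h_ref (m) up to h_target (m)."""
    return v_ref * (h_target / h_ref) ** alpha

def wind_power_density(v, rho=AIR_DENSITY):
    """Available wind power per unit swept area: P/A = 0.5 * rho * v^3, in W/m^2."""
    return 0.5 * rho * v ** 3

# The reported overall mean of 12.11 m/s at hub height:
power_density = wind_power_density(12.11)
```

With the standard density of 1.225 kg/m³ this gives about 1088 W/m², close to the 1072 W/m² reported; the small difference would come from the air density assumed for the site.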
The overall monthly average wind speed was found to be 12.11 m/s, and June to November was established to be the windy season, as the wind speed during this season is above the overall monthly average. The available wind power density corresponding to the overall mean monthly wind speed was evaluated to be 1072 W/m2, a potential that is worthwhile harvesting for electricity generation.
Keywords: Hydro power, windy season, available wind power density.
124 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller
Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian
Abstract:
The application of combustion technologies for thermal conversion of biomass and solid wastes to energy has long been a major solution to the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system which measures the deviations of chamber temperature from set target values, returns these deviations (which act as disturbances in the system) in the form of a feedback signal, and adjusts operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for designing the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity of 222-273 ft/min and fuel feeding rate of 60-90 rpm on the chamber temperature.
The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process as it helps to minimize the steady-state error.
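The temperature-feedback idea implemented in the PLC's ladder logic can be illustrated, outside the PLC, as a proportional controller: the deviation of chamber temperature from the setpoint drives the actuator command, clamped to the actuator's range. The gain, setpoint, and limits below are illustrative, not values from the study:

```python
def proportional_control(setpoint, measured, gain, out_min, out_max):
    """One scan of a proportional feedback loop.

    The temperature error (setpoint minus measurement) is scaled by the gain
    and clamped to the actuator range, mimicking how the PLC would adjust
    air velocity or fuel feeding rate from the chamber-temperature error.
    """
    error = setpoint - measured
    output = gain * error
    return max(out_min, min(out_max, output))

# Chamber running 50 degrees below an 800 deg C setpoint, blower commanded 0-100%:
cmd = proportional_control(800.0, 750.0, gain=1.5, out_min=0.0, out_max=100.0)
```

A proportional term alone leaves a steady-state offset; adding an integral term (full PID) is the usual way to drive the steady-state error toward zero, which is the behavior the study reports.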
Keywords: Air flow, biomass combustion, feedback control system, fuel feeding, ladder logic, programmable logic controller, temperature.
123 Efficacy of Selected Mobility Exercises and Participation in Special Games on Psychomotor Abilities, Functional Abilities and Game Performance among Intellectually Disabled Children of Under 14 Age
Authors: J. Samuel Jesudoss
Abstract:
The purpose of the study was to determine the efficacy of selected mobility exercises and participation in special games on psychomotor abilities, functional abilities and skill performance among intellectually disabled children under 14 years of age. Thirty male students studying at Balar Kalvi Nilayam and the YMCA College Special School, Chennai, acted as subjects for the study. All had only mild or moderate intellectual disability, and they did not undergo any special training or coaching programme apart from their regular physical activity classes, which form part of the school curriculum. The 30 subjects were assigned at random to three equal groups of ten, one for each experimental treatment: 10 students (treatment group I) underwent calisthenics and special games participation, 10 students (treatment group II) underwent aquatics and special games participation, and 10 students (treatment group III) underwent yoga and special games participation. The subjects were tested on the selected criterion variables before (pre-test) and after twelve weeks of training (post-test). The pre- and post-test data collected from the three groups on functional abilities (self-care, learning, capacity for independent living), psychomotor variables (static balance, eye-hand coordination, simple reaction time) and skill performance (bocce, badminton, and table tennis skills) were statistically examined for significant differences by applying analysis of covariance (ANCOVA). Whenever the 'F' ratio for the adjusted post-test means was found to be significant, Scheffé's test was applied as a post-hoc test to determine which of the paired mean differences were significant.
The results of the study showed that in the under-14 age group there was a significant improvement in the selected criterion variables, such as balance, coordination, self-care and learning, and also in bocce, badminton and table tennis skill performance, due to the mobility exercises and participation in special games. However, there were no significant differences among the groups.
Keywords: Functional ability, intellectually disabled, mobility exercises, psychomotor ability.
122 Online Think-Pair-Share in a Third-Age ICT Course
Authors: Daniele Traversaro
Abstract:
Problem: Senior citizens have been facing a challenging reality as a result of strict public health measures designed to protect people from the COVID-19 outbreak. These include the risk of social isolation due to the inability of the elderly to engage with technology. Never before have Information and Communication Technology (ICT) skills been so essential for their everyday life. Although third-age ICT education and lifelong learning are widely supported by universities and governments, there is a lack of literature on which teaching strategy/methodology to adopt in an entirely online ICT course aimed at third-age learners. This contribution presents an application of the Think-Pair-Share (TPS) learning method in a third-age ICT virtual classroom, with an intergenerational approach, for conducting online group labs and review activities. Research Question: Is collaborative learning suitable and effective, in terms of student engagement and learning outcomes, in an online ICT course for the elderly? Methods: In the TPS strategy, a problem is posed by the teacher, students have time to think about it individually, and then they work in pairs (or small groups) to solve the problem and share their ideas with the entire class. We performed four experiments in the ICT course of the University of the Third Age of Genova (University of Genova, Italy) on the Microsoft Teams platform. The study cohort consisted of 26 students over the age of 45. Data were collected through two online questionnaires, one at the end of the first activity and another at the end of the course, consisting of five and three closed-ended questions, respectively. The answers were on a Likert scale (from 1 to 4), except for two questions (which asked for the number of correct answers given individually and in groups) and a field for free comments/suggestions.
Results: Groups achieved better results than individual students (with group scores roughly an order of magnitude higher), and most students found TPS helpful for working in groups and interacting with their peers. Insights: From these early results, it appears that TPS is suitable for an online third-age ICT classroom and useful for promoting discussion and active learning. Nevertheless, our work has several limitations. First of all, the results highlight the need for more data in order to perform a statistical analysis that determines the effectiveness of this methodology in terms of student engagement and learning outcomes; this is a future direction.
Keywords: Collaborative learning, information technology education, lifelong learning, older adult education, think-pair-share.
121 Developing Improvements to Multi-Hazard Risk Assessments
Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson
Abstract:
This paper outlines the approaches taken to assess multi-hazards. There is currently confusion in assessing multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to undertake a straightforward approach. The paper is significant as it helps to interpret the various approaches and concludes with the preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes it is often compounded by additional cascading hazards, so that people confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards (for example, Natural Hazard Triggering Technological disasters (Natech), such as fire, explosion, and toxic release). Multi-hazards have a more destructive impact on urban areas than one hazard alone. In addition, climate change is creating links between different disasters, for example causing landslide dams and debris flows that lead to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper aims to review the multi-hazard assessment methods published to date and to categorize the strengths and disadvantages of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of using multi-hazard risk assessments.
To assess multi-hazard risk assessments, the current methods were first described; next, their drawbacks were outlined; finally, the improvements made to date were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the need for more precise risk assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to overestimates or oversights in resilience and recovery management actions.
Keywords: Cascading hazards, multi-hazard, risk assessment, risk reduction.
120 Ingenious Eco-Technology for Transforming Food and Tanneries Waste into a Soil Bio-Conditioner and Fertilizer Product Used for Recovery and Enhancement of the Productive Capacity of the Soil
Authors: Petre Voicu, Mircea Oaida, Radu Vasiu, Catalin Gheorghiu, Aurel Dumitru
Abstract:
The present work deals with the way in which food and tannery waste can be used in agriculture. As a result of the lack of efficient recycling technologies, we are currently faced with appreciable quantities of residual organic waste that find a use only rarely, and only after long storage in landfills. The main disadvantages of long storage of organic waste are the unpleasant smell, the high content of pathogenic agents, and the high water content. The release of these enormous amounts imperatively demands solutions that avoid environmental pollution. The measure practiced by us and presented in this paper consists of processing this waste in special installations, testing it in pilot experimental perimeters, and later applying it to agricultural land without harming the quality of the soil, the agricultural crops, or the environment. The current crisis of raw materials and energy also raises special problems in the field of organic waste valorization, an activity that takes place with low energy consumption. At the same time, its composition recommends this waste as a useful secondary resource in agriculture. The transformation of food scraps and other concentrated organic residues thus acquires a new orientation, in which these materials are seen as important secondary resources. The use of food and tannery waste in agriculture is also stimulated by the increasing scarcity of chemical fertilizers and the continuous increase in their price, under conditions in which the soil requires increased amounts of fertilizer in order to obtain high, stable, and profitable production.
The need to maintain and increase the humus content of the soil is also taken into account, as an essential factor of its fertility: as a source and reserve of nutrients and microelements; as an important factor in increasing the buffering capacity of the soil, allowing more sparing use of chemical fertilizers; and as a means of improving soil structure and water permeability, with positive effects on the quality of agricultural work and the prevention of excess and/or deficit of soil moisture.
Keywords: Organic residue, food and tannery waste, fertilizer, soil.
119 Low Energy Technology for Leachate Valorisation
Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo
Abstract:
Landfills present long-term threats to soil, air, groundwater and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and leachate from decomposing garbage. The composition of leachate differs from site to site and also within a landfill, and it alters with time (from weeks to years), since the landfilled waste is biologically highly active and its composition varies. Mainly, the composition of the leachate depends on factors such as the characteristics of the waste, the moisture content, climatic conditions, degree of compaction, and the age of the landfill. Therefore, the leachate composition cannot be generalized, and traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common are hazardous constituents and their potential eco-toxicological effects on human health and terrestrial ecosystems. Since leachates have distinct compositions, each landfill or dumping site represents a different type of risk to its environment. Nevertheless, leachates always show high organic concentration, conductivity, heavy metals, and ammonia nitrogen. Leachate can affect the current and future quality of water bodies through uncontrolled infiltration. Therefore, control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out in situ using a cost-effective novel technology that combines solar evaporation/condensation with forward osmosis. The plant is powered by renewable energies (solar energy, biomass and residual heat), which will minimize the carbon footprint of the process. The final effluent quality is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes.
A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.
Keywords: Forward osmosis, landfills, leachate valorization, solar evaporation.
118 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models
Authors: N. Mirzaei Varzeghani, M. Saffarzadeh, A. Naderan, A. Taheri
Abstract:
Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on the airport's appeal, which includes the routes between the city and the airport as well as the facilities for reaching them. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination like an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs and more passengers aged 55 and older using this airport, among other factors, are essential for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified: the most frequent origin in Tehran was District 2, with 23 trips, and the most popular mode of transportation from that location was online taxi, with 12 trips. Then, the variables significant for the choice among travel modes for accessing the airport were investigated for all systems. Here, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference.
It was also demonstrated that improving (shortening) public transportation trip times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, the willingness of passengers to approach the airport via public transportation systems was explored, in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. Using a binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change modes.
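The binary model referred to above is typically a binary logit, in which each mode's utility is a linear function of attributes such as access time and cost. A minimal sketch of the choice probability, with purely illustrative utilities rather than coefficients estimated from the 356 surveys:

```python
import math

def choice_probability(utility_public, utility_private):
    """Binary logit probability of choosing public transport over a private vehicle.

    The probability depends only on the utility difference; utilities would be
    linear combinations of access time, cost, comfort, etc.
    """
    return 1.0 / (1.0 + math.exp(utility_private - utility_public))

# Hypothetical utilities: shortening access time raises public transport's utility,
# and with it the predicted market share.
p_public = choice_probability(utility_public=-0.2, utility_private=-0.5)
```

Estimating the utility coefficients from the survey data (e.g., by maximum likelihood) is what makes statements like "access time is the most crucial factor" quantitative.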
Keywords: Multimodal transportation, travel behavior, demand modeling, statistical models.
117 Analysis of Trend and Variability of Rainfall in the Mid-Mahanadi River Basin of Eastern India
Authors: Rabindra K. Panda, Gurjeet Singh
Abstract:
The major objective of this study was to analyze the trend and variability of rainfall in the middle Mahanadi river basin located in eastern India. The trend of variation of extreme rainfall events has a predominant effect on agricultural water management and on extreme hydrological events such as floods and droughts. The Mahanadi river basin is one of the major river basins of India, having an area of 141,589 km2 and divided into three regions: upper, middle, and delta. The middle region of the basin has an area of 48,700 km2 and is mostly dominated by agricultural land, where agriculture is mostly rainfed. The study region has five agro-climatic zones, namely East and South Eastern Coastal Plain, North Eastern Ghat, Western Undulating Zone, Western Central Table Land, and Mid Central Table Land, numbered as zones 1 to 5, respectively, for convenience in reporting. In the present study, analysis of the variability and trends of annual, seasonal, and monthly rainfall was carried out using daily rainfall data collected from the India Meteorological Department (IMD) for 35 years (1979-2013) for the 5 agro-climatic zones. The long-term variability of rainfall was investigated by evaluating the mean, standard deviation, and coefficient of variation. The long-term trend of rainfall was analyzed using the Mann-Kendall test on monthly, seasonal, and annual time scales. It was found that there is a decreasing trend in rainfall during the winter and pre-monsoon seasons for zones 2, 3, and 4, whereas in the monsoon (rainy) season there is an increasing trend for zones 1, 4, and 5, with significance levels between 90% and 95%. On the other hand, the mean annual rainfall has an increasing trend at the 99% significance level. The estimated seasonality index showed that the rainfall distribution is asymmetric and concentrated in a 3-4 month period.
The study will help in understanding the spatio-temporal variation of rainfall and in determining the relationship between the current rainfall trend and the climate change scenario of the study region, for multifarious uses.
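The Mann-Kendall test used here is rank-based: its S statistic counts concordant minus discordant pairs over all pairs of observations, and significance is then judged from a variance-normalized Z score. A minimal sketch of the S statistic:

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over all pairs i < j.

    S > 0 suggests an increasing trend and S < 0 a decreasing one; the
    significance level comes from the variance-normalized Z score, which is
    not computed in this sketch.
    """
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of the pairwise difference
    return s
```

Because it uses only the signs of pairwise differences, the test is robust to outliers and to the skewed distributions typical of monthly rainfall totals, which is why it is standard in hydro-climatic trend studies.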
Keywords: Eastern India, long-term variability and trends, Mann-Kendall test, seasonality index, spatio-temporal variation.
116 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators
Authors: Radwa Mabrook
Abstract:
Virtual Reality (VR) content creation is a complex and expensive process, which requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations to explore VR's potential in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers and creative artists work together and bring in new cultures of work. This study explores the collaborative, experimental nature of VR content creation by tracing every actor involved in the process and examining their perceptions of VR work. The study builds on Actor-Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. Accordingly, the researcher conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours and were a mix of Skype calls and in-person interviews. Participants consented to their interviews being recorded and to their names being revealed in the study. The researcher coded the interview transcripts in NVivo software, looking for key themes corresponding to the research questions. The study revealed that VR content creators must be adaptive to change, open to learning, and comfortable with mistakes. The VR content creation process is highly iterative because VR has no established workflow or visual grammar. Multi-disciplinary VR team members often speak different professional languages, making communication hard. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn.
The traditional sense of competition and the striving for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to sharing their methods of work and their experiences. They aim to build a collaborative network that harnesses VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.
Keywords: Collaborative culture, content creation, experimental culture, virtual reality.
115 Financial Regulations in the Process of Global Financial Crisis and Macroeconomics Impact of Basel III
Authors: M. Okan Tasar
Abstract:
Basel III (or the Third Basel Accord) is a global regulatory standard on bank capital adequacy, stress testing, and market liquidity risk, agreed upon by the members of the Basel Committee on Banking Supervision in 2010-2011 and scheduled to be introduced from 2013 until 2018. Basel III is a comprehensive set of reform measures. These measures aim to: (1) improve the banking sector's ability to absorb shocks arising from financial and economic stress, whatever the source; (2) improve risk management and governance; and (3) strengthen banks' transparency and disclosures. The reforms likewise target: (1) bank-level, or micro-prudential, regulation, which will help raise the resilience of individual banking institutions to periods of stress; and (2) macro-prudential regulation of system-wide risks that can build up across the banking sector, as well as the pro-cyclical amplification of these risks over time. These two approaches to supervision are complementary, as greater resilience at the individual bank level reduces the risk of system-wide shocks. Regarding the macroeconomic impact of Basel III, the OECD estimates that the medium-term impact of Basel III implementation on GDP growth is in the range of -0.05 to -0.15 percent per year. Economic output is mainly affected by an increase in bank lending spreads, as banks pass on a rise in funding costs, due to higher capital requirements, to their customers. The estimated effects on GDP growth assume no active response from monetary policy; the impact of Basel III on economic output could be offset by a reduction (or delayed increase) in monetary policy rates of about 30 to 80 basis points. The aim of this paper is to create a framework, based on the recent regulations, for preventing financial crises, so that the experience of overcoming the global financial crisis can inform responses to financial crises that may occur in future periods.
The first part of the paper examines the effects of the global crisis on the banking system and the concept of financial regulation. The second part analyzes financial regulations, and Basel III in particular. The last section explores the possible macroeconomic impacts of Basel III.
Keywords: Banking systems, Basel III, financial regulation, global financial crisis.
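The OECD figures quoted above lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only: the six-year horizon matches the 2013-2018 phase-in period, but the choice to compound the annual growth drag into a cumulative GDP-level effect is our assumption, not a calculation from the paper.

```python
# Illustrative sketch: compound the OECD's estimated annual GDP-growth drag
# from Basel III (-0.05% to -0.15% per year) over the 2013-2018 phase-in
# (6 years) into a cumulative GDP-level effect. Assumed horizon, not the paper's.
def cumulative_gdp_effect(annual_drag_pct: float, years: int) -> float:
    """Compound an annual GDP-growth drag into a total level effect (percent)."""
    level = 1.0
    for _ in range(years):
        level *= 1.0 + annual_drag_pct / 100.0
    return (level - 1.0) * 100.0

for drag in (-0.05, -0.15):
    print(f"drag {drag}%/yr over 6 years -> {cumulative_gdp_effect(drag, 6):.2f}% GDP level")
```

Under this compounding assumption, the cumulative level effect stays small (roughly -0.3% to -0.9% of GDP), consistent with the abstract's point that a modest monetary-policy easing could offset it.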
114 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Controlled Release of Doxorubicin
Authors: Parisa Shirzadeh
Abstract:
Traditional drug delivery systems, in which drugs are taken by patients in multiple doses at specified intervals, do not meet the needs of up-to-date drug delivery. Today we deal with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are produced with genetic-engineering techniques, and most of which are used to treat critical diseases such as cancer. Because of the limitations of the traditional method, researchers have sought ways to solve its problems, and these efforts led to controlled drug release systems, which have many advantages: with controlled release, the drug concentration in the body is kept at a defined level, and delivery is achieved at a higher rate in a shorter time. Graphene is a biodegradable, non-toxic natural material, and the wide surface of graphene sheets makes graphene easier to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Graphene oxide is very hydrophilic and dissolves easily in water, creating a stable solution. In this work, graphene oxide (GO) covalently modified by chitosan (CS) was developed for controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions and then chlorinated with oxalyl chloride to increase its reactivity toward amines. In the presence of CS, the amine reaction formed amide linkages, and DOX was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR and TGA, to identify the new functional groups evidencing the bonding of CS to GO, and by Raman spectroscopy and SEM, to follow the changes in the size and number of layers.
Loading and release were determined by UV-Visible spectroscopy. The loading results showed a high capacity for DOX absorption (99%), and pH-dependent release of DOX from the GO-CS nanosheet was identified at pH 5.3 and 7.4, with a faster release rate under acidic conditions.
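A loading figure like the 99% quoted above is the kind of quantity UV-Vis measurements yield via the Beer-Lambert law: the free (unbound) drug left in the supernatant is quantified from its absorbance, and the loaded fraction follows by difference. The sketch below is illustrative only; the absorbance, absorptivity, and initial-concentration values are assumptions, not data from the study.

```python
# Hypothetical sketch: drug-loading efficiency from UV-Vis absorbance of the
# free DOX remaining in solution, via Beer-Lambert (A = epsilon * c * l).
# All numeric values are illustrative, not taken from the study.
def free_drug_conc(absorbance: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert: c = A / (epsilon * l), concentration of unbound drug."""
    return absorbance / (epsilon * path_cm)

def loading_efficiency(c_initial: float, c_free: float) -> float:
    """Percent of drug adsorbed onto the carrier (here, GO-CS)."""
    return (c_initial - c_free) / c_initial * 100.0

c0 = 0.100  # mg/mL DOX added (assumed)
c_free = free_drug_conc(absorbance=0.012, epsilon=12.0)  # assumed epsilon, mL/(mg*cm)
print(f"loading efficiency: {loading_efficiency(c0, c_free):.1f}%")
```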
Keywords: Graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin.
113 Identifying Temporary Housing Main Vertexes through Assessing Post-Disaster Recovery Programs
Authors: S. M. Amin Hosseini, Oriol Pons, Carmen Mendoza Arroyo, Albert de la Fuente
Abstract:
In the aftermath of a natural disaster, the major challenge most cities and societies face, regardless of their level of prosperity, is to provide temporary housing (TH) for the displaced population (DP). However, the features of the TH applied in previous recovery programs varied greatly from case to case. This demonstrates that providing temporary accommodation for a DP in a short period of time, and usually in great numbers, is complicated in terms of satisfying all the beneficiaries' needs, regardless of a society's welfare level. Furthermore, when previously used strategies are applied to different areas, they are most likely destined to fail unless they are context- and culture-based. Therefore, as the populations of disaster-prone cities increase, decision-makers need a platform to help determine all the factors that shaped the outcomes of prior programs. To this end, this paper aims to assess the problems, requirements, limitations, potential responses, chosen strategies, and their outcomes, in order to determine the main elements that have influenced the TH process. In this regard, and in order to determine a customizable strategy, this study analyses the TH programs of five cases: the Marmara earthquake, 1999; the Bam earthquake, 2003; the Aceh earthquake and tsunami, 2004; Hurricane Katrina, 2005; and the L'Aquila earthquake, 2009. The research results demonstrate that the main vertexes of TH are: (1) local characteristics, including local potential and affected-population features; (2) TH properties, which need to be considered in four phases: planning, provision/construction, operation, and second life; and (3) natural hazard impacts, embracing intensity and type. Accordingly, this study offers decision-makers the opportunity to discover the main vertexes, their subsets, their interactions, and the relation between strategies and outcomes based on the local conditions of each case.
Consequently, authorities may acquire the capability to design customizable methods for the complicated post-disaster housing that follows future natural disasters.
Keywords: Post-disaster temporary accommodation, urban resilience, natural disaster, local characteristic.
112 Switching Studies on Ge15In5Te56Ag24 Thin Films
Authors: Diptoshi Roy, G. Sreevidya Varma, S. Asokan, Chandasree Das
Abstract:
Germanium telluride-based quaternary thin-film switching devices with composition Ge15In5Te56Ag24 have been deposited in sandwich geometry on glass substrates, with aluminum as the top and bottom electrodes. The bulk glassy form of the said composition is prepared by the melt-quenching technique: appropriate quantities of high-purity elements are taken in a quartz ampoule and sealed under a vacuum of 10⁻⁵ mbar. The ampoule is then rotated in a horizontal rotary furnace for 36 hours to ensure homogeneity of the melt, after which it is quenched in a mixture of ice water and NaOH to obtain the bulk ingot of the sample. The sample is then coated onto a glass substrate using the flash-evaporation technique at a vacuum level of 10⁻⁶ mbar. The XRD report reveals the amorphous nature of the thin-film sample, and Energy-Dispersive X-ray Analysis (EDAX) confirms that the film retains the same chemical composition as the base sample. The electrical switching behavior of the device is studied with a Keithley 2410c source-measure unit interfaced with LabVIEW 7 (National Instruments). Switching studies, mainly the SET operation (changing the state of the material from amorphous to crystalline), are conducted on the thin-film form of the sample. The device is found to manifest memory switching, as it remains 'ON' even after the removal of the electric field. Amorphous Ge15In5Te56Ag24 thin film is also found to unveil clean memory-type electrical switching behavior, as evidenced by the absence of fluctuations in the I-V characteristics. The I-V characteristics further reveal that switching is fast in this sample, as no data points could be seen in the negative-resistance region during the transition to the ON state, leading to the conclusion of a fast phase change during the SET process. Scanning Electron Microscopy (SEM) studies are performed on the chosen sample to study the structural changes at the time of switching.
SEM studies on the switched Ge15In5Te56Ag24 sample have shown morphological changes at the site of switching, which can be explained by the formation of a conducting crystalline channel in the device as it switches from the high-resistance to the low-resistance state. From these studies, it can be concluded that the material may find application in fast-switching Non-Volatile Phase Change Memory (PCM) devices.
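The SET threshold described above is read off the I-V sweep as the voltage at which the current jumps abruptly from the OFF- to the ON-state branch. A minimal detection sketch follows; the sweep data and jump factor are invented for illustration (the actual measurements were made with the Keithley source-measure unit).

```python
# Hypothetical sketch: locating the SET threshold voltage in a measured I-V
# sweep of a memory-switching device. A memory switch shows an abrupt current
# jump; we flag the first point where the current rises by more than a chosen
# factor between consecutive voltage steps. Data below are illustrative.
def find_threshold(voltages, currents, jump_factor=5.0):
    """Return the voltage at which current first jumps by > jump_factor."""
    for i in range(1, len(currents)):
        if currents[i - 1] > 0 and currents[i] / currents[i - 1] > jump_factor:
            return voltages[i]
    return None  # no switching observed in the sweep

# Illustrative sweep: amorphous OFF-state, then an abrupt SET transition.
volts = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
amps = [1e-9, 2e-9, 4e-9, 8e-9, 1.6e-8, 5e-4, 1e-3]
print(find_threshold(volts, amps))  # -> 2.5
```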
Keywords: Chalcogenides, vapor deposition, electrical switching, PCM.
111 Retrieval Augmented Generation against the Machine: Merging Human Cyber Security Expertise with Generative AI
Authors: Brennan Lodge
Abstract:
Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: LLMs cannot easily expand or revise their memory, cannot straightforwardly provide insight into their predictions, and may produce “hallucinations.” Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure, which pairs the parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights drawn from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG’s dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy.
It underscores AI’s emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
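The dual parametric/non-parametric structure described above can be illustrated with a deliberately tiny, self-contained sketch: a bag-of-words cosine retriever over an updatable corpus whose top hit is prepended to the query before a (here, stubbed) generator is called. The corpus snippets and function names are our own illustrations, not the paper's implementation, which uses a pre-trained seq2seq transformer and a dense vector index.

```python
# Toy sketch of the RAG pattern: retrieve from an updatable (non-parametric)
# corpus, then condition generation on the retrieved context. The "LLM" is a
# stub; corpus text and names are illustrative only.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing tokens
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "Capital adequacy ratios must be reported quarterly to the regulator.",
    "Incident response plans are reviewed after every cybersecurity breach.",
]

def retrieve(query: str, k: int = 1):
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(doc.lower().split())), doc) for doc in corpus]
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

def rag_answer(query: str) -> str:
    context = " ".join(retrieve(query))
    return f"[context: {context}] [query: {query}]"  # stand-in for an LLM call

print(rag_answer("How often are capital adequacy ratios reported?"))
```

Updating the compliance knowledge base is then just appending to `corpus`, which is the operational advantage the paper highlights over retraining a standalone model.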
Keywords: Retrieval Augmented Generation, Governance Risk and Compliance, Cybersecurity, AI-driven Compliance, Risk Management, Generative AI.
110 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis
Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone
Abstract:
The use of a radiant cooling solution would enable cooling needs to be lowered, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would enable humidity to be removed from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This work aims at providing an estimation of the specification requirements of such a system, in terms of the cooling power and dehumidification rate required to meet comfort targets and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study of the specification requirements, performance, and behavior of a combined dehumidifier/cooling ceiling panel under different operating conditions. The study has been carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of dynamic modeling of the heat and vapor balances of a 5 m × 3 m × 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification, and dehumidification system, so that the room temperature is always maintained between 21 °C and 25 °C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat-recovery heat exchanger and another heat exchanger connected to a heat sink. The main results show that the system should be designed to meet a cooling power of 42 W/m² and a desiccant rate of 45 g of water per hour.
Subsequently, a parametric study of comfort and system performance has been carried out on a more realistic system (one that includes a chilled ceiling) under different operating conditions, enabling an estimation of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
Keywords: Dehumidification, nodal calculation, radiant cooling panel, system sizing.
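The condensation constraint driving the panel specification (the chilled surface must stay above the dew point of the room air) can be sketched with the Magnus approximation. This is a standard formula; the coefficients used below and the example panel temperature are our assumptions, not values from the study.

```python
# Hedged sketch: condensation-risk check for a chilled ceiling panel.
# Dew point via the Magnus approximation; coefficients b = 17.62, c = 243.12 degC
# are a common choice for roughly the 0-50 degC range. Example values assumed.
import math

def dew_point(t_air_c: float, rh_pct: float) -> float:
    b, c = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + b * t_air_c / (c + t_air_c)
    return c * gamma / (b - gamma)

def condensation_risk(t_surface_c: float, t_air_c: float, rh_pct: float) -> bool:
    """True when the panel surface is at or below the room-air dew point."""
    return t_surface_c <= dew_point(t_air_c, rh_pct)

# Room at the comfort bounds used in the study: 25 degC, 60% RH.
print(f"dew point: {dew_point(25.0, 60.0):.1f} degC")
print("risk at 18 degC panel:", condensation_risk(18.0, 25.0, 60.0))
```

At 25 °C and 60% RH the dew point sits near 17 °C, which is exactly why dehumidification is needed before the panel surface can be driven low enough to deliver useful cooling power.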
109 The Impact of Supply Chain Strategy and Integration on Supply Chain Performance: Supply Chain Vulnerability as a Moderator
Authors: Yi-Chun Kuo, Jo-Chieh Lin
Abstract:
The objective of a supply chain strategy is to reduce waste and increase efficiency to attain cost benefits, and to guarantee supply chain flexibility when facing the ever-changing market environment, in order to meet customer requirements. Strategy implementation aims to fulfill common goals and attain benefits by integrating upstream and downstream enterprises, sharing information, conducting common planning, and taking part in decision making, so as to enhance the overall performance of the supply chain. With the rise of outsourcing and globalization, the increasing dependence on suppliers and customers, and the rapid development of information technology, the complexity and uncertainty of the supply chain have intensified and supply chain vulnerability has surged, with adverse effects on supply chain performance. Thus, this study uses supply chain vulnerability as a moderating variable and applies structural equation modeling (SEM) to determine the relationships among supply chain strategy, supply chain integration, and supply chain performance, as well as the moderating effect of supply chain vulnerability on supply chain performance. Data were collected through questionnaires administered to the management level of enterprises in Taiwan and China; 149 questionnaires were received. The results of confirmatory factor analysis show that the path coefficients of supply chain strategy on supply chain integration and on supply chain performance are positive (0.497, t = 4.914; 0.748, t = 5.919), indicating significantly positive effects. Supply chain integration is also significantly positively correlated with supply chain performance (0.192, t = 2.273). The moderating effects of supply chain vulnerability on the relationships of supply chain strategy and supply chain integration to supply chain performance are significant (7.407; 4.687).
In Taiwan, 97.73% of enterprises are small- and medium-sized enterprises (SMEs) focused on receiving original equipment manufacturer (OEM) and original design manufacturer (ODM) orders. In order to meet the needs of customers and respond to market changes, these enterprises especially focus on supply chain flexibility and on integration with upstream and downstream enterprises. According to the observations of this research, the effect of supply chain vulnerability on supply chain performance is significant, so enterprises need to attach great importance to the management of supply chain risk and conduct risk analyses of their suppliers in order to formulate response strategies for emergency situations. At the same time, risk management should be incorporated into the supply chain so as to reduce the effect of supply chain vulnerability on overall supply chain performance.
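The significance reading applied to the reported path coefficients follows the usual convention that |t| > 1.96 indicates significance at the 5% level. A minimal sketch using the estimates from the abstract (the loop and labels are ours; the coefficient and t values are the paper's):

```python
# Sketch: checking the reported SEM path coefficients against the conventional
# two-tailed 5% significance cutoff of |t| > ~1.96. Estimates from the abstract.
paths = {
    "strategy -> integration": (0.497, 4.914),
    "strategy -> performance": (0.748, 5.919),
    "integration -> performance": (0.192, 2.273),
}

T_CUTOFF = 1.96  # approximate two-tailed 5% critical value for large samples

for name, (coef, t) in paths.items():
    significant = abs(t) > T_CUTOFF
    print(f"{name}: coef={coef}, t={t}, significant={significant}")
```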
Keywords: Supply chain integration, supply chain performance, supply chain vulnerability, structural equation modeling.
108 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment
Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu
Abstract:
Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then accelerate rapidly to a high rate after a certain period of operation time (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing alone. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied to the DM surface to delay its initial dissolution. Due to conveying parts, harsh downhole conditions, and the high dissolving rate of the base material, the current delay coatings relying on pure polymers are found to perform well only at low temperature (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin-film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxide (PEO) coatings with advanced thin-film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the porous, hard PEO coating and the chemically inert, elastic polymer sealing lead to the improved dissolution delay, and strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is also proposed to explain the delaying performance.
This study could not only benefit the oil and gas industry in unplugging High Temperature High Pressure (HTHP) unconventional resources that were previously inaccessible, but also potentially provides a technical route for other industries (e.g., biomedical, automobile, aerospace) where primer anti-corrosive protection on light Mg alloys is in high demand.
Keywords: Dissolvable magnesium, coating, plasma electrolytic oxide, sealer.
107 The Mechanism Underlying Empathy-Related Helping Behavior: An Investigation of Empathy-Attitude-Action Model
Authors: Wan-Ting Liao, Angela K. Tzeng
Abstract:
Empathy has been an important issue in psychology and education, as well as in cognitive neuroscience. Empathy has two major components: cognitive and emotional. The cognitive component refers to the ability to understand others’ perspectives, thoughts, and actions, whereas the emotional component refers to understanding how others feel. Empathy can be induced, attitude can then be changed, and with enough attitude change, helping behavior can occur. This finding leads to two questions: is attitude change really necessary for prosocial behavior? And what roles do cognitive and affective empathy play? For the second question, participants with different psychopathic personality (PP) traits are critical, because people high in PP were found to suffer only an affective empathy deficit; their cognitive empathy shows no significant difference from the control group. 132 college students voluntarily participated in the current three-stage study. Stage 1 collected basic information, including the Interpersonal Reactivity Index (IRI), the Psychopathic Personality Inventory-Revised (PPI-R), an Attitude Scale, a Visual Analogue Scale (VAS), and demographic data. Stage 2 induced empathy with three controversial scenarios, namely domestic violence, depression with a suicide attempt, and an ex-offender. Participants read all three stories and then rewrote them from one of two perspectives (empathetic vs. objective). They then completed the VAS and Attitude Scale once more, recording their post-test attitude and emotional status. Three IVs were introduced for data analysis: PP (high vs. low), Responsibility (whether or not the character is responsible for what happened), and Perspective-taking (empathic vs. objective). Stage 3 measured action: participants were instructed to freely use the 17 tokens they received as donations. They were debriefed and interviewed at the end of the experiment. The major finding was that people with higher empathy tend to take more helping action.
Attitude change is not necessary for prosocial behavior. The controversy of the scenarios and how familiar participants are with the target groups play very important roles. Finally, people high in PP tend to show more public prosocial behavior due to their affective empathy deficit. Pre-existing values and beliefs, as well as recent dramatic social events, seem to have a big impact and possibly reduced the effects of the independent variables (IVs) in our paradigm.
Keywords: Affective empathy, attitude, cognitive empathy, prosocial behavior, psychopathic traits.
106 Reconsidering the Palaeo-Environmental Reconstruction of the Wet Zone of Sri Lanka: A Zooarchaeological Perspective
Authors: Kalangi Rodrigo, Kelum Manamendra-Arachchi
Abstract:
Bones, teeth, and shells have been acknowledged over the last two centuries as evidence of chronology, palaeo-environment, and human activity. Faunal traces are valid evidence of past situations because they have properties that have not changed over long periods. Sri Lanka is known as an island with a diverse variety of prehistoric occupation across its ecological zones. Defining the palaeoecology of past societies is an archaeological approach developed in the 1960s, mainly concerned with reconstructing, from the available geological and biological evidence, past biota, populations, communities, landscapes, environments, and ecosystems. Its early and persistent human fossil, technical, and cultural florescence, as well as a collection of well-preserved tropical-forest rock shelters with associated 'on-site' palaeoenvironmental records, make Sri Lanka a central and unusual case study for determining the extent and strength of early human tropical-forest encounters. Excavations carried out in prehistoric caves in the low-country wet zone have shown that in the last 50,000 years the temperature in the lowland rainforests has not changed by more than 5 °C. Based on Semnopithecus priam (gray langur) remains unearthed from wet-zone prehistoric caves, it has been argued that there were periods of momentous climate change during the Last Glacial Maximum (LGM) and at the Terminal Pleistocene/Early Holocene boundary, with a recognizable preference for semi-open 'intermediate' rainforest or its edges. The continuous occupation of the genera Acavus and Oligospira, along with the uninterrupted, pervasive horizontal presence of Canarium sp. ('kekuna' nut), supports the conclusion that temperatures in the lowland rainforests have not changed by more than 5 °C over the last 50,000 years.
Site catchment or territorial analysis is no longer defensible, owing to time-distance factors; optimal foraging theory likewise fails, because prehistoric people were aware of decreasing cost-benefit ratios when locating sites and generally played out a settlement strategy that minimized the ratio of energy expended to energy produced.
Keywords: Palaeo-environment, palaeo-ecology, palaeo-climate, prehistory, zooarchaeology.
105 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA and are stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence most likely comes; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.
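Step (i) of the pipeline above, building a k-mer vocabulary from raw reads, can be sketched as follows. This toy tokenizer only illustrates the idea: it maps each read to a sequence of k-mer indices, the input a subsequent embedding layer would consume; metagenome2vec then learns dense embeddings for these k-mers, which this sketch does not attempt.

```python
# Toy sketch of k-mer tokenization for raw sequencing reads: slide a window
# of length k over each read, assign each distinct k-mer an integer index,
# and encode reads as index sequences. Reads below are illustrative.
def kmers(read: str, k: int = 4):
    """All overlapping substrings of length k in the read."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def build_vocab(reads, k: int = 4):
    """Map each distinct k-mer to a stable integer index."""
    vocab = {}
    for read in reads:
        for km in kmers(read, k):
            vocab.setdefault(km, len(vocab))
    return vocab

def encode(read: str, vocab, k: int = 4):
    """Represent a read as the index sequence of its known k-mers."""
    return [vocab[km] for km in kmers(read, k) if km in vocab]

reads = ["ATGCGTAC", "GCGTACCA"]
vocab = build_vocab(reads)
print(len(vocab), encode("ATGCGT", vocab))  # -> 7 [0, 1, 2]
```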
104 Urban Accessibility of Historical Cities: The Venetian Case Study
Authors: Valeria Tatano, Francesca Guidolin, Francesca Peltrera
Abstract:
The preservation of historical Italian heritage, at the urban and architectural scales, has to consider conservation restrictions and requirements alongside usability needs, which are often at odds with heritage preservation. Recent decades have been marked by the search for increased accessibility, not only to public and private buildings but to the whole historical city, including for people with disabilities. Moreover, in recent years the concepts of the Smart City and the Healthy City have sought to improve accessibility, both in terms of mobility (independent or assisted) and in the fruition of goods and services, in historical cities as well. The principles of Inclusive Design have introduced new criteria for the improvement of public urban space, between current regulations and best practices, and have contributed to transforming “special needs” into an opportunity for social innovation. These considerations find a field of research and analysis in the historical city of Venice, which is at once a UNESCO World Heritage Site, a mass-tourism destination drawing visitors from all over the world, and a city inhabited by an aging population. Due to its conformation, the Venetian urban fabric is only partially accessible: about four thousand bridges divide thousands of islands, making it almost impossible to move independently. These urban characteristics and difficulties have been the basis, over the last 20 years, for several research projects, experiments, and solutions aimed at eliminating architectural barriers, in particular for the usability of bridges. The Venetian Municipality, with the EBA Office and some external consultants, realized several devices (e.g., the “stepped ramp” and the new accessible ramps for the Venice Marathon) that could mark an innovation for the city, passing from replicable mechanical devices to specific architectural projects in order to guarantee autonomy of use.
This paper presents the state of the art in bridge accessibility, through an analysis based on Inclusive Design principles and on current national and regional regulations. The purpose is to evaluate strategies that could improve performance, between the limits and possibilities of intervention. The aim of the research is to lay the foundations for a strategic program for the City of Venice that could successfully bring together both conservation and improvement requirements.
Keywords: Accessibility and inclusive design, historical heritage preservation, technological and social innovation.
103 Records of Lepidopteron Borers (Lepidoptera) on Stored Seeds of Indian Himalayan Conifers
Authors: Pawan Kumar, Pitamber Singh Negi
Abstract:
Many regeneration failures in conifers are attributed to heavy insect attack and pathogens during the period of seed formation and under storage conditions. Conifer berry and seed insects occur throughout the known range of the hosts and limit the production of seed for nursery stock; on occasion, even entire seed crops are lost to insect attack. The berries and seeds of both species have been found to be infested with insects. Recently, heavy damage to the berries and seeds of juniper and chilgoza pine was observed in the field as well as under storage conditions, leading to a reduction in the viability of the seeds to germinate. Both species are under great threat, and their regeneration is very low. Due to the lack of adequate literature, a study of the damage potential of seed insects was urgently required to establish the exact status of the insect pests attacking the seeds/berries of both pine species, so that pest-management practices against them could be developed. As both species are fighting for survival and form the major vegetation of their distribution zones, the study is also important for developing management practices for the insect pests of juniper and chilgoza pine seeds/berries and for evaluating them in the nursery. A six-year study on the management of insect pests of chilgoza seeds revealed that the seeds of this species are prone to insect pests, mainly borers. During the present investigations, it was recorded that the cones are heavily attacked in natural conditions only by Dioryctria abietella (Lepidoptera: Pyralidae), whereas the economically important seeds are heavily infested (sometimes up to 100% damage was recorded) by the insect borer Plodia interpunctella (Lepidoptera: Pyralidae), which is recorded for the first time, to the authors’ best knowledge, infesting stored chilgoza seeds.
Similarly, Juniper berries and seeds were heavily attacked only by a single borer, Homaloxestis cholopis (Lepidoptera: Lecithoceridae) recorded as a new report in natural habitat as well as in stored conditions. During the present investigation details of insect pest attack on Juniper and Chilgoza pine seeds and berries was observed and suitable management practices were also developed to contain the insect-pests attack.
Keywords: Borer, conifer, cones, chilgoza pine, lepidoptera, juniper, management, seed.
102 Development of a Miniature and Low-Cost IoT-Based Remote Health Monitoring Device
Authors: Sreejith Jayachandran, Mojtaba Ghodsi, Morteza Mohammadzaheri
Abstract:
The modern busy world runs on new embedded technologies based on computers and software, yet many people are unable to monitor their health condition through regular medical check-ups. Some postpone check-ups for lack of time and convenience, while others skip these regular evaluations and medical examinations because of high medical bills and hospital expenses. In this research, we present a telemonitoring device capable of remotely monitoring, checking, and evaluating the health status of the human body over the internet, for the needs of all kinds of people. The remote health monitoring device is a microcontroller-based embedded unit. Various sensors are attached to the human body and connected to an Arduino UNO board, which collects the required analogue data. The microcontroller on the Arduino board converts the analogue data into digital form and transfers the information to cloud storage; the processed digital data are also displayed instantly on an LCD attached to the device. By accessing the cloud storage with a username and password, the patient's health care team, doctors, and other health staff can retrieve the data for assessment and follow-up, and family members or guardians can use them to stay aware of the patient's current health status. Moreover, the system is connected to a GPS module, so in an emergency the care team can locate the patient carrying the device. The setup continuously evaluates the readings and transfers them to the cloud, and the user can prefix a normal value range for each measurement; for example, normal blood pressure is universally taken as 120/80 mmHg. In the same way, the Remote Health Monitoring System (RHMS) allows ranges of values to be fixed and referred to as normal coefficients.
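The prefixed normal-range evaluation described above can be sketched as follows. This is a minimal illustration rather than the device's actual firmware; the dictionary keys and the heart-rate range are assumptions, with only the 120/80 mmHg blood-pressure limits taken from the text.

```python
# Hedged sketch of the RHMS normal-range check: the user prefixes a normal
# range per vital sign, and any reading outside its range raises an alert.
# Names and values are illustrative, not taken from the actual device.

NORMAL_RANGES = {
    "systolic_mmHg": (90, 120),    # upper limit 120 mmHg, per the text
    "diastolic_mmHg": (60, 80),    # upper limit 80 mmHg, per the text
    "heart_rate_bpm": (60, 100),   # assumed illustrative range
}

def evaluate(reading):
    """Return the vitals that fall outside their prefixed normal range."""
    alerts = []
    for name, value in reading.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append((name, value, (low, high)))
    return alerts

sample = {"systolic_mmHg": 135, "diastolic_mmHg": 78, "heart_rate_bpm": 72}
print(evaluate(sample))  # flags only the out-of-range systolic reading
```

In the device itself this comparison would run on the microcontroller or in the cloud back end; the sketch only shows the evaluation logic.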
This IoT-based miniature system measures 11×10×10 cm³, weighs only 500 g, and consumes 10 mW. Manufactured for 100 GBP (British Pound Sterling), this smart monitoring system not only facilitates communication between patients and health systems, but can also be employed for numerous other uses, including communication applications in the aerospace and transportation sectors.
Keywords: Embedded Technology, Telemonitoring system, Microcontroller, Arduino UNO, Cloud storage, GPS, RHMS, Remote Health Monitoring System, Alert system.
101 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Developing robust and accurate techniques for reconstructing the topography in this class of problems would therefore reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography, but experimental work in hydraulics can be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem identifies the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics must be rearranged so that the model parameters can be evaluated from measured data; however, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme: in the first stage, the forward problem is solved to determine the measurable parameters from known data; in the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
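The second-stage analysis step can be illustrated with a minimal perturbed-observation Ensemble Kalman Filter update. This is a generic sketch of the filter, not the authors' adaptive implementation: the function name, array shapes, and the single scalar observation-error level are assumptions, and in the paper's setting the state would be the bed elevations and the observation operator the shallow-water forward solve.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_err_std, rng):
    """One analysis step of a perturbed-observation ensemble Kalman filter.

    ensemble    : (n_members, n_state) array of state samples (e.g. bed elevations)
    obs         : (n_obs,) vector of free-surface observations
    obs_op      : function mapping one state vector to its predicted observations
    obs_err_std : scalar observation-error standard deviation (assumed uniform)
    """
    n_members, _ = ensemble.shape
    # Predicted observations for each ensemble member (the forward stage)
    predicted = np.array([obs_op(m) for m in ensemble])     # (n_members, n_obs)

    # Ensemble anomalies (deviations from the ensemble mean)
    state_anom = ensemble - ensemble.mean(axis=0)
    pred_anom = predicted - predicted.mean(axis=0)

    # Sample cross-covariance and innovation covariance
    pxy = state_anom.T @ pred_anom / (n_members - 1)
    pyy = pred_anom.T @ pred_anom / (n_members - 1) \
        + obs_err_std**2 * np.eye(obs.size)

    gain = pxy @ np.linalg.inv(pyy)                          # Kalman gain

    # Update each member against a perturbed copy of the observations
    perturbed = obs + rng.normal(0.0, obs_err_std, size=(n_members, obs.size))
    return ensemble + (perturbed - predicted) @ gain.T
```

In the adaptive scheme described above, this update would be repeated over several loops, with the forward shallow-water solver regenerating `predicted` at each iteration.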
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and observation locations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
Keywords: Optimal control, ensemble Kalman Filter, topography reconstruction, data assimilation, shallow water equations.
100 Bone Mineral Density and Trabecular Bone Score in Ukrainian Women with Obesity
Authors: Vladyslav Povoroznyuk, Nataliia Dzerovych, Larysa Martynyuk, Tetiana Kovtun
Abstract:
Obesity and osteoporosis are two diseases whose increasing prevalence and high impact on global morbidity and mortality over the two recent decades have made them major health threats worldwide. Obesity affects bone metabolism through complex mechanisms. Conflicting data on the connection between bone mineral density and fracture prevalence in obese patients are widely presented in the literature, and there is evidence that the correlation between weight and fracture risk is site-specific. This study aims to determine the connection between bone mineral density (BMD) and trabecular bone score (TBS) parameters in Ukrainian women suffering from obesity. We examined 1025 women aged 40-89 years, divided into groups according to body mass index (BMI): Group A included 360 women with obesity (BMI ≥30 kg/m²), and Group B included 665 women without obesity (BMI <30 kg/m²). The BMD of the total body, lumbar spine (L1-L4), femur, and forearm was measured by DXA (Prodigy, GEHC Lunar, Madison, WI, USA). The TBS of L1-L4 was assessed by means of the TBS iNsight® software installed on our DXA machine (Med-Imaps, Pessac, France). In general, obese women had a significantly higher BMD of the lumbar spine, femoral neck, proximal femur, total body, and ultradistal forearm (p<0.001) in comparison with nonobese women, whereas the TBS of L1-L4 was significantly lower in obese women (p<0.001). The BMD of the lumbar spine, femoral neck, and total body differed significantly between the groups in women aged 40-49, 50-59, 60-69, and 70-79 years (p<0.05); at 80-89 years, the BMD of the lumbar spine (p=0.09), femoral neck (p=0.22), and total body (p=0.06) barely differed. The BMD of the ultradistal forearm was significantly higher in obese women of all age groups (p<0.05).
The TBS of L1-L4 tended to be lower in obese women than in nonobese women in all age groups; however, these differences were not statistically significant. By contrast, a significant positive correlation was observed between fat mass and BMD at the various sites, while the correlation between fat mass and the TBS of L1-L4 was significant but negative. Women with vertebral fractures had a significantly lower body weight, body mass index, and total body fat mass than women without vertebral fractures in their anamnesis. The frequency of vertebral fractures was 27% in obese women and 57% in women without obesity.
Keywords: Bone mineral density, trabecular bone score, obesity, women.
99 International Financial Crises and the Political Economy of Financial Reforms in Turkey: 1994-2009
Authors: Birgül Şakar
Abstract:
This study evaluates, in chronological order, the formation of international financial crises and the political factors behind the economic crises in Turkey. The relevant studies conducted in the international arena and in Turkey are assessed in the literature survey. The main purpose of the study is to examine in detail the linkage between the crises and political stability in Turkey and to assess Turkey's position in this regard. The introduction is followed, in the second part of the study, by the literature survey on the models explaining the causes and results of crises. In the third part, the formation of the world financial crises is studied. In the fourth part, the financial crises in Turkey in 1994, 2000, 2001, and 2008 are reviewed and their political causes are analyzed. The last part of the study presents the results and recommendations. Political administrations have laid the grounds for economic crises in Turkey. The emergence of economic crises in Turkey and the developments after the crises are chronologically examined, and an explanation is offered of the cause-and-effect relationship between the political administration and the economic equilibrium in the country. Economic crises can be characterized as follows: high prices of consumables, high interest rates, current account deficits, budget deficits, structural defects in government finance, rising inflation under fixed exchange rate applications, rising government debt, declining savings rates, and increased dependency on foreign capital. When crisis conditions set in at a time when the exchange value of the country's national currency is rising, speculative financial movements and shrinking foreign currency reserves follow, owing to expectations of devaluation and foreign investors' reluctance to finance the national debt, and a financial risk arises.
During the February 2001 crisis and immediately afterwards, devaluation occurred and Turkey's stock market lost value. This study discusses the effects of the crisis on the real economy as the country switched to a floating exchange rate regime in the midst of the crisis. The policies implemented included financial reforms, such as the restructuring of the banking system, followed by the provision of foreign financial support. There have been winners and losers in the unequal income distribution, which has recently become more evident in Turkey's fragile economy.
Keywords: Economics, marketing crisis, financial reforms, political economy
98 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction
Authors: M. D. Haneef, R. B. Randall, Z. Peng
Abstract:
Journal bearings used in IC engines are prone to premature failure and are likely to fail earlier than their rated life because of highly impulsive and unstable operating conditions and frequent starts and stops. Vibration signature extraction and wear debris analysis are prevalent in industry for condition monitoring of rotating machinery; however, both techniques require a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques to reciprocating machinery because the complex nature of the impact forces confounds the extraction of fault signals for vibration-based analysis and wear prediction. In the present study, a simulation model was developed to investigate bearing wear behaviour under different operating conditions, to complement the vibration analysis. The dynamics of the engine were established first, and on this basis the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. The essential outputs of interest, critical for determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determining parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear; a limiting value of 1 μm was used as the minimum oil film thickness needed to prevent contact.
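The wear calculation described above can be sketched with Archard's equation, V = K·F·s/H (wear volume = wear coefficient × normal load × sliding distance / hardness), gated by the 1 μm film-thickness criterion from the text. All numerical values below (wear coefficient, hardness, loads, velocities, film profile) are fabricated illustration values, not the engine data used in the study.

```python
import numpy as np

# Illustrative material and threshold constants (assumed, not the paper's data)
K = 1e-7          # dimensionless Archard wear coefficient
H = 1.5e9         # bearing-material hardness, Pa
h_limit = 1e-6    # 1 um minimum oil film thickness before contact (from the text)

def archard_wear(load, velocity, film_thickness, dt, K=K, H=H):
    """Incremental Archard wear volume per time step.

    Wear accrues only where the oil film is thinner than the 1 um limit,
    i.e. where journal-to-bearing contact is assumed to occur.
    """
    sliding = velocity * dt                  # sliding distance in this step, m
    contact = film_thickness < h_limit       # contact flag per step
    return np.where(contact, K * load * sliding / H, 0.0)

# Example: crank-resolved arrays over one cycle (fabricated demo values)
t = np.linspace(0.0, 0.02, 360)                      # one revolution at 3000 rpm
load = 5e3 * (1 + np.sin(2 * np.pi * t / 0.02))      # bearing load, N
vel = 4.0 * np.ones_like(t)                          # tangential velocity, m/s
film = 2e-6 - 1.5e-6 * np.sin(2 * np.pi * t / 0.02).clip(0.0, 1.0)  # film, m

wear_per_step = archard_wear(load, vel, film, dt=t[1] - t[0])
total_wear = wear_per_step.sum()                     # wear volume per cycle, m^3
```

Because each step is indexed by crank angle, the nonzero entries of `wear_per_step` locate the wear around the bearing circumference, mirroring the location-specific wear rates described in the abstract.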
The increase in wear rate with growing severity of the operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the reference vibration signals. The developed model thus demonstrates its capability to explain wear behaviour while also helping to establish a correlation between wear-based and vibration-based analyses. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
Keywords: Condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction.