Search results for: input mode
546 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting
Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade
Abstract:
The recent rapid development of the internet, wireless and telecommunication technologies, and low-power electronic devices has made a significant amount of electromagnetic energy available in the environment and driven the expansion of smart-application technology. These applications are used in Internet of Things devices and in 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, powering them efficiently and reliably without traditional batteries poses huge challenges. Radio-frequency energy harvesting is especially suitable for wirelessly powering sensors by means of a rectenna, since it can be completely integrated into the distributed host sensor structure, reducing cost, maintenance and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio-frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio-frequency energy into a direct-current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used as an inner coating for buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage-doubler rectifier circuit built on the same paper substrate. The antenna contains a rectangular radiating element and a microstrip transmission line, designed and optimized using CST simulation software to obtain S11 values below -10 dB at 2.45 GHz. To increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW).
To evaluate the EW's performance, it was positioned at a variable distance from an internet router and used to feed a 27 kΩ resistive load. The results showed that when more than one rectenna is associated in parallel, a power level sufficient to feed very-low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides a significant increase in the amount of harvested electromagnetic energy, which rose from 0.2 mW to 0.6 mW.
Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit
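A quick back-of-envelope check of the figures quoted in this abstract (0.12 m² area, eight rectennas, 0.2 mW rising to 0.6 mW). This is only a sketch of the implied arithmetic; the per-rectenna average and variable names are assumptions, not values reported by the study.

```python
# Back-of-envelope check of the Electromagnetic Wall (EW) figures
# reported in the abstract (0.12 m^2, 8 rectennas, 0.2 -> 0.6 mW).
area_m2 = 0.12
n_rectennas = 8
p_plain_mw = 0.2      # harvested power without metamaterial cells
p_meta_mw = 0.6       # harvested power with metamaterial cells

density = p_meta_mw / area_m2          # harvested power per unit area (mW/m^2)
per_cell = p_meta_mw / n_rectennas     # average contribution per rectenna (mW)
gain = p_meta_mw / p_plain_mw          # improvement factor from metamaterials

print(f"{density:.1f} mW/m^2, {per_cell:.3f} mW per rectenna, x{gain:.0f} gain")
```

The 5 mW/m² density is why the authors frame the EW as a wall coating: a useful power budget only accumulates over a large surface.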
Procedia PDF Downloads 167
545 Urogenital Myiasis in Pregnancy - A Rare Presentation
Authors: Madeleine Elder, Aye Htun
Abstract:
Background: Myiasis is the parasitic infestation of body tissues by fly larvae. It predominantly occurs in poor socioeconomic regions of tropical and subtropical countries, where it is associated with poor hygiene and sanitation. Cutaneous and wound myiasis are the most common presentations, whereas urogenital myiasis is rare, with few reported cases. Case: A 26-year-old primiparous woman with a low-risk pregnancy presented to the emergency department at 37+3 weeks' gestation after passing a 2 cm black larva during micturition, with 2 weeks of mild vulvar pruritus and dysuria. She had travelled to India 9 months prior. Examination of the external genitalia showed small white larvae over the vulva and anus and a mildly inflamed introitus. Speculum examination showed infiltration into the vagina and heavy white discharge. A high vaginal swab reported Candida albicans. Urine microscopy reported bacteriuria with Enterobacter cloacae. Urine parasite examination showed myiasis caused by the Clogmia albipunctata species of fly larvae from the family Psychodidae. Renal tract ultrasound and inflammatory markers were normal. The infectious diseases, urology and paediatric teams were consulted. The woman received treatment for her urinary tract infection (likely precipitated by bladder irritation from the local parasite infestation) and vaginal candidiasis. She underwent daily physical removal of parasites with cleaning, speculum examination and removal, and hydration to promote bladder emptying. Due to the risk of neonatal exposure, aspiration pneumonitis and facial infestation, the woman received steroid cover and proceeded to an elective caesarean section at 38+3 weeks' gestation, with delivery of a healthy infant. She then underwent a rigid cystoscopy and washout, which was unremarkable. Placental histopathology revealed focal eosinophilia in keeping with the history of maternal parasites.
Conclusion: Urogenital myiasis is very rare, especially in the developed world, where it is seen in returned travellers. Treatment may include systemic therapy with ivermectin and physical removal of parasites. During pregnancy, physical removal is considered the safest treatment option, and discussion around the timing and mode of delivery should consider the risk of harm to the foetus.
Keywords: urogenital myiasis, parasitic infection, infection in pregnancy, returned traveller
Procedia PDF Downloads 127
544 Hydrogeochemical Investigation of Lead-Zinc Deposits in Oshiri and Ishiagu Areas, South Eastern Nigeria
Authors: Christian Ogubuchi Ede, Moses Oghenenyoreme Eyankware
Abstract:
This study assessed the concentration of heavy metals (HMs) in soil, rock, mine dump piles, and water from the Oshiri and Ishiagu areas of Ebonyi State. Investigations of the mobile fraction equally evaluated the geochemical condition of the different HMs, using a UV spectrophotometer for mineralized and unmineralized rocks, dumps, and soil, while AAS was used to determine the geochemical nature of the water system. Analysis revealed very high Cd pollution, mostly in the Ishiagu (Ihetutu and Amaonye) active mine zones, with subordinate enrichments of Pb, Cu, As, and Zn in Amagu and Umungbala. Oshiri recorded moderate to high Cd and Mn contamination and outright high anthropogenic input. Observation showed that contamination was most severe near the mines and decreased with increasing distance from the mine vicinity. The potential heavy-metal risk of the environments was evaluated using risk factors such as the Enrichment Factor, Geoaccumulation Index, Contamination Factor, and Effect Range Median. Cadmium and Zn showed moderate to extreme contamination according to the Geoaccumulation Index (Igeo), while Pb, Cd, and As indicated moderate to strong pollution according to the Effect Range Median. When compared with the allowable limits and standards, the metal concentrations followed the order Cd>Zn>Pb>As>Cu>Ni (rocks), Cd>As>Pb>Zn>Cu>Ni (soil) and Cd>Zn>As>Pb>Cu (mine dump piles). High concentrations of Zn and As were recorded mostly in mine ponds and salt-line/drain channels along active mine zones; their threat heightens during the rainy period as they settle into river courses, leaving behind full-scale contamination for inhabitants depending on the water for domestic use. Pb and Cu, with moderate pollution, were recorded in surface/stream water sources, as their mobility was relatively low.
Results from the Ishiagu crush rock sites and the Fedeco metallurgical and auto workshop, where groundwater contamination was seen infiltrating some of the well points, gave values that were four times higher than the allowable limits. According to WHO (2015), some of these metal concentrations, if left unmitigated, pose adverse effects to the soil and the human community.
Keywords: water, geo-accumulation, heavy metals, mine, Nigeria
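The abstract leans on two standard pollution indices. As a sketch of how they are conventionally computed (the Müller Igeo definition and a simple background ratio are standard; the sample values below are purely illustrative and not taken from the study):

```python
import math

def igeo(c_sample, c_background):
    """Geoaccumulation Index (Mueller): log2(Cn / (1.5 * Bn)).
    The factor 1.5 compensates for natural background variation."""
    return math.log2(c_sample / (1.5 * c_background))

def contamination_factor(c_sample, c_background):
    """Contamination Factor: ratio of measured to background concentration."""
    return c_sample / c_background

# Illustrative concentrations in mg/kg (hypothetical, not study data):
cd_sample, cd_background = 3.2, 0.3
print(f"Igeo(Cd) = {igeo(cd_sample, cd_background):.2f}, "
      f"CF(Cd) = {contamination_factor(cd_sample, cd_background):.1f}")
```

On the usual Igeo scale, values above 5 indicate extreme contamination, which matches the "moderate to extreme" classification the authors report for Cd and Zn.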
Procedia PDF Downloads 170
543 Study of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans Dispersion in the Environment of a Municipal Solid Waste Incinerator
Authors: Gómez R. Marta, Martín M. Jesús María
Abstract:
The general aim of this paper is to identify the areas of highest concentration of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) around the incinerator through the use of dispersion models. Atmospheric dispersion models are useful tools for estimating and preventing the impact of emissions from a particular source on air quality. These models allow different factors that influence air pollution to be considered - source characteristics, the topography of the receiving environment and weather conditions - in order to predict pollutant concentrations. After emission into the atmosphere, PCDD/Fs are deposited on water or land, near to or far from the emission source depending on the size of the associated particles and the climatology. In this way, they are transferred and mobilized through environmental compartments. The modelling of PCDD/Fs was carried out with the following tools: the Atmospheric Dispersion Modelling System (ADMS) and Surfer. ADMS is a Gaussian plume dispersion model used to model the air-quality impact of industrial facilities, and Surfer is a surface-mapping program used to represent the dispersion of pollutants on a map. For the modelling of emissions, the ADMS software mainly requires the following input parameters: characterization of the emission sources (source type, height, diameter, release temperature, flow rate, etc.) and meteorological and topographical data (coordinate system). The study area was set at 5 km around the incinerator, and the first population centre nearest to the PCDD/F emission source is approximately 2.5 km away. Data were collected during one year (2013) on both the PCDD/F emissions of the incinerator and the meteorology in the study area. The study was carried out over the averaging periods that legislation establishes; that is to say, the output parameters take the current legislation into account.
Once all the data required by the ADMS software, described previously, were entered, the modelling proceeded in order to represent the spatial distribution of PCDD/F concentrations and the areas they affect. In general, the dispersion plume follows the direction of the predominant winds (southwest and northeast). Total levels of PCDD/Fs usually found in air samples range from <2 pg/m³ for remote rural areas, through 2-15 pg/m³ in urban areas, to 15-200 pg/m³ for areas near important sources, such as an incinerator. The dispersion maps show that the maximum concentrations are on the order of 10⁻⁸ ng/m³, well below the values considered typical for areas close to an incinerator, as in this case.
Keywords: atmospheric dispersion, dioxin, furan, incinerator
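ADMS itself uses more advanced boundary-layer parameterizations, but the textbook Gaussian plume equation it descends from can be sketched briefly. All numeric inputs below are assumptions for illustration, not parameters from this study:

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Textbook Gaussian plume with ground reflection.
    q: emission rate (g/s), u: wind speed (m/s), h: effective stack height (m),
    y/z: crosswind and vertical receptor coordinates (m),
    sigma_y/sigma_z: dispersion coefficients (m) at the downwind distance.
    Returns concentration in g/m^3."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative ground-level centreline concentration (hypothetical values):
c = gaussian_plume(q=1e-9, u=3.0, y=0.0, z=0.0, h=30.0,
                   sigma_y=80.0, sigma_z=40.0)
print(f"{c:.3e} g/m^3")
```

The second exponential in `vertical` is the image-source term that models reflection of the plume at the ground, which is why ground-level concentrations do not vanish at z = 0.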
Procedia PDF Downloads 217
542 Blended Cloud Based Learning Approach in Information Technology Skills Training and Paperless Assessment: Case Study of University of Cape Coast
Authors: David Ofosu-Hamilton, John K. E. Edumadze
Abstract:
Universities have come to recognize the role that Information and Communication Technology (ICT) skills play in the daily activities of tertiary students. The ability to use ICT – essentially, computers and their diverse applications – is an important resource that influences an individual’s economic and social participation and human capital development. Our society now increasingly relies on the Internet and the Cloud as a means to communicate and disseminate information. The educated individual should, therefore, be able to use ICT to create and share knowledge that will improve society. It is, therefore, important that universities require incoming students to demonstrate a level of computer proficiency, or train them to do so at minimal cost by deploying advanced educational technologies. The training and standardized assessment of all incoming first-year students of the University of Cape Coast in Information Technology Skills (ITS) has become a necessity, as students more often than not overestimate their digital skills, and digital ignorance is costly to any economy. The one-semester course is targeted at fresh students and aimed at enhancing their productivity and software skills. In this respect, emphasis is placed on skills that will enable students to be proficient in using Microsoft Office and Google Apps for Education for their academic work and future professional work, while using emerging digital multimedia technologies in a safe, ethical, responsible, and legal manner. The course is delivered in blended mode - online and self-paced (student-centered) - using Alison’s free cloud-based tutorials (Moodle) of Microsoft Office videos. Online support is provided via discussion forums on the University’s Moodle platform, and tutor-directed and assisted sessions are held at the ICT Centre and the Google E-learning laboratory.
All students are required to register for the ITS course during either the first or second semester of the first year and must participate in and complete it within a semester. Assessment comprises the Alison online assessment on Microsoft Office, the Alison online assessment on ALISON ABC IT, peer assessment of an e-portfolio created using Google Apps/Office 365, and an end-of-semester online assessment at the ICT Centre, taken whenever the student was ready in the course of the semester. This paper, therefore, focuses on this digital-culture approach of hybrid teaching, learning and paperless examinations, and its possible adoption by other courses or programs at the University of Cape Coast.
Keywords: assessment, blended, cloud, paperless
Procedia PDF Downloads 248
541 An Investigation into Enablers and Barriers of Reverse Technology Transfer
Authors: Nirmal Kundu, Chandan Bhar, Visveswaran Pandurangan
Abstract:
Technology is the most valued possession of a country or an organization. Economic development depends not on the stock of technology but on the capability to exploit it. Technology transfer is the main way developing countries gain access to state-of-the-art technology. Traditional technology transfer is a unidirectional phenomenon in which technology is transferred from developed to developing countries. But now the wind is changing. There is general agreement that a global shift of economic power from west to east is under way. As China and India make the transition from users to producers, and from producers to innovators, this has increasingly important implications for the economy, technology and policy of global trade. As a result, reverse technology transfer has become a phenomenon and a field of study in technology management. The term "reverse technology transfer" is not well defined. Initially, the concept was associated with the phenomenon of "brain drain" from developing to developed countries. In a second phase, reverse technology transfer was associated with the transfer of knowledge and technology from subsidiaries to multinationals. Finally, the time has come to extend the concept to two organizations or countries, related or unrelated by traditional technology transfer, where the transferor has essentially received the technology through the traditional mode of technology transfer. The objective of this paper is to study: 1) the present status of reverse technology transfer, 2) the factors that act as enablers and barriers of reverse technology transfer, and 3) how a reverse technology transfer strategy can be integrated into a country's technology policy to give it an economic boost. The research methodology used in this study is a combination of literature review, case studies and key informant interviews.
The literature review includes both published and unpublished sources. In the case studies, an attempt has been made to examine records of reverse technology transfer that has occurred in developing countries. For the key informant interviews, informal telephone discussions were carried out with key executives of organizations (industry, universities and research institutions) actively engaged in the process of technology transfer - traditional as well as reverse. Reverse technology transfer is possible only by creating technological capabilities. The following four important enablers, coupled with active and aggressive government action, can help build the technology base needed to reach the goal of reverse technology transfer: 1) imitation to innovation, 2) reverse engineering, 3) a collaborative R&D approach, and 4) preventing reverse brain drain. The barriers that stand in the way are a mindset of over-dependence, over-subordination and a parent-child attitude (rather than an adult attitude). By exploiting these enablers and overcoming these barriers, developing countries like India and China can prove that going "reverse" is the best way to move forward and to re-establish themselves as leaders of the future world.
Keywords: barriers of reverse technology transfer, enablers of reverse technology transfer, knowledge transfer, reverse technology transfer, technology transfer
Procedia PDF Downloads 399
540 Human’s Sensitive Reactions during Different Geomagnetic Activity: An Experimental Study in Natural and Simulated Conditions
Authors: Ketevan Janashia, Tamar Tsibadze, Levan Tvildiani, Nikoloz Invia, Elguja Kubaneishvili, Vasili Kukhianidze, George Ramishvili
Abstract:
This study considers the possible effects of geomagnetic activity (GMA) on humans on Earth by performing experiments on specific sensitive reactions both in natural conditions during different GMA and by simulating different GMA in the lab. Measurements of autonomic nervous system (ANS) responses to different GMA, via heart rate variability (HRV) indices and a stress index (SI), and their comparison with the K-index of GMA are presented and discussed. The results of the experiments indicate an intensification of the sympathetic part of the ANS as a stress reaction of the human organism when it is exposed to a high level of GMA, in natural as well as simulated conditions. Aim: We tested the hypothesis that the GMF, when disturbed, can affect the human ANS, causing specific sensitive stress reactions depending on the initial type of ANS regulation. Methods: The study focuses on the effects of different GMA on the ANS by comparing the HRV indices and stress index (SI) of n=78 healthy male volunteers aged 18-24. Experiments were performed in natural conditions on days of low (K=1-3) and high (K=5-7) GMA, as well as in the lab by simulating different GMA using a device for geomagnetic storm (GMS) compensation and simulation. Results: In comparison with days of low GMA (K=1-3), the initial HRV values shifted towards intensification of the sympathetic part (SP) of the ANS during days of GMSs (K=5-7), with statistically significant p-values: HR (heart rate, p=0.001), SDNN (standard deviation of all normal-to-normal intervals, p=0.0001), RMSSD (the square root of the arithmetic mean of the sum of the squares of differences between adjacent NN intervals, p=0.0001). In comparison with the GMS compensation mode (K=0, B=0-5 nT), the ANS balance was observed to shift during exposure to simulated GMSs with intensities in the range of natural GMSs (K=7, B=200 nT).
However, the initial state of the ANS resulted in different dynamics of its variation depending on the GMA level. In the case of an initially balanced regulation type (HR > 80), significant intensification of the SP was observed, with p-values: HR (p=0.0001), SDNN (p=0.047), RMSSD (p=0.28), LF/HF (p=0.03), SI (p=0.02); while in the case of an initial parasympathetic regulation type (HR < 80), an insignificant shift towards intensification of the parasympathetic part (PP) was observed. Conclusions: The results indicate an intensification of the SP as a stress reaction of the human organism when it is exposed to a high level of GMA, in both natural and simulated conditions.
Keywords: autonomic nervous system, device of magneto compensation/simulation, geomagnetic storms, heart rate variability
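The two HRV indices the study relies on, SDNN and RMSSD, have standard definitions that can be computed directly from a series of NN (normal-to-normal beat) intervals. A minimal sketch, using an illustrative interval series rather than any data from the study:

```python
import math

def sdnn(nn_ms):
    """Standard deviation of NN (normal-to-normal) intervals, in ms."""
    mean = sum(nn_ms) / len(nn_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in nn_ms) / len(nn_ms))

def rmssd(nn_ms):
    """Root mean square of successive differences between NN intervals, in ms."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

# Illustrative NN-interval series in ms (hypothetical, not study data):
nn = [812, 790, 805, 840, 795, 810, 825]
hr = 60000 / (sum(nn) / len(nn))  # mean heart rate in beats per minute
print(f"HR={hr:.0f} bpm, SDNN={sdnn(nn):.1f} ms, RMSSD={rmssd(nn):.1f} ms")
```

Lower SDNN/RMSSD alongside higher HR is the pattern conventionally read as sympathetic intensification, which is the shift the study reports on high-GMA days.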
Procedia PDF Downloads 141
539 Ionometallurgy for Recycling Silver in Silicon Solar Panel
Authors: Emmanuel Billy
Abstract:
This work is part of the CABRISS project (an H2020 project), which aims at developing innovative, cost-effective methods for the extraction of materials from the different sources of PV waste: Si-based panels, thin-film panels and water-diluted Si slurries. Aluminium, silicon, indium, and silver in particular will be extracted from these wastes in order to constitute a materials feedstock that can later be used in a closed-loop process. The extraction of metals from silicon solar cells is often an energy-intensive process. It requires either smelting or leaching at elevated temperature, or the use of large quantities of strong acids or bases that require energy to produce. The energy input equates to a significant cost and an associated CO2 footprint, both of which it would be desirable to reduce. There is thus a need to develop more energy-efficient and environmentally compatible processes, and 'ionometallurgy' could offer such an environmentally benign route for metallurgy. This work demonstrates that ionic liquids provide one such method, since they can be used to dissolve and recover silver. The overall process combines leaching, recovery and the possibility of reusing the solution in a closed-loop process. This study aims to evaluate and compare different ionic liquids for leaching and recovering silver. An electrochemical analysis is first implemented to define the best system for Ag dissolution. The effects of temperature, concentration and oxidizing agent are evaluated by this approach. Further, a comparative study of leaching efficiency between the conventional approach (nitric acid, thiourea) and the ionic liquids (Cu and Al) is conducted. Specific attention has been paid to the selection of the ionic liquids. Electrolytes composed of chelating anions (Cl⁻, Br⁻, I⁻) are used to facilitate the lixiviation and to avoid problems with the solubility of metallic species and of classical additional ligands.
This approach reduces the cost of the process and facilitates the reuse of the leaching medium. To define the most suitable ionic liquids, electrochemical experiments have been carried out to evaluate the oxidation potential of the silver included in crystalline solar cells. Chemical dissolution of metals from crystalline solar cells has then been performed for the most promising ionic liquids. After the chemical dissolution, electrodeposition has been performed to recover the silver in metallic form.
Keywords: electrodeposition, ionometallurgy, leaching, recycling, silver
Procedia PDF Downloads 247
538 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study
Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero
Abstract:
Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe consisting of an ordering of tasks to produce the desired product. In this research work, we focused on pharmaceutical production processes requiring the culture of a microorganism population (i.e., bacteria, yeasts or antibiotic-producing organisms). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g., makespan, lateness) should change little if at all. In this research work, we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints. In particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where (a, b) are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e., the completion times of a culture tasks) lose their values (i.e., the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e., the completion times of the backup culture tasks) and to at most b other variables (i.e., delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries
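The (1, b) repair idea can be illustrated on a toy single-machine schedule: when one culture task is contaminated, its backup is swapped in and the number of other tasks whose completion times change is the "b" of the repair. This is only a conceptual sketch; the task data and sequential-machine simplification are assumptions, not the paper's COP model.

```python
# Toy illustration of a (1, b) super-solution repair on one machine:
# each culture task has a backup with its own duration; if a task is
# contaminated, swap in the backup and count how many later tasks shift.
# All durations are hypothetical, not from the Sanofi Aventis instances.

def schedule(durations):
    """Sequential schedule: returns completion times on a single resource."""
    t, comps = 0, []
    for d in durations:
        t += d
        comps.append(t)
    return comps

def repair(durations, backup_durations, failed):
    """Replace the failed task with its backup; return the new completion
    times and b = the number of OTHER tasks whose completion changed."""
    patched = list(durations)
    patched[failed] = backup_durations[failed]
    before, after = schedule(durations), schedule(patched)
    changed = sum(1 for i, (x, y) in enumerate(zip(before, after))
                  if i != failed and x != y)
    return after, changed

durs = [4, 3, 5, 2]       # nominal culture task durations
backups = [4, 6, 5, 2]    # backup culture durations (task 1's backup is longer)
_, b = repair(durs, backups, failed=1)   # task 1 contaminated
print(f"repair changed {b} other completion times")
```

A schedule certified as an (a, b) super-solution guarantees this count stays at most b for any a simultaneous failures, which is the robustness property the COP model enforces.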
Procedia PDF Downloads 618
537 Basotho Cultural Shift: The Role of Dress in the Shift
Authors: Papali Elizabeth Maqalika
Abstract:
Introduction: Dress is used daily and can be used to define culture; through it, individuals form a sense of self and identity. One of the characteristics of culture is that it evolves, and Basotho culture is no exception. It has evolved through rites of passage, significant ceremonies, daily living, and the approach to others. Most of these affect, and have been affected by, local/traditional dress. The study focused on the evolution of the culture and the role played by dress, as dress is one of the major contributors to non-verbal communication. Methodology: Secondary data were used, since most of the original cultural practices are no longer held dear in the value system and so are no longer practised. Interviews were conducted to get insights from senior citizens, and their responses were compared with those of present-day adults. Content analysis was used for the interview data. Results: The nature of governance in Lesotho has clearly contributed to the current state of cultural confusion. Basotho culture has indeed shifted, and the difference in dress code reflects it. Acculturation, changes in environments, and the types of occasions Basotho attend have lately contributed to the shift. Technology brought about differences in modes of transport, sports, household activities, and gender roles. Conclusion and Recommendations: It was concluded that, since culture is imparted through socialisation, the reduced availability of most Basotho women leaves little time for socialisation with children, so other upbringing patterns are resorted to, most of which are not cultural; this has brought a cultural shift. In addition, acculturation has contributed massively to the Basotho value system. The type of dress worn by Basotho presently shifts the culture, and the shifting culture in turn shifts the dress required to suit it.
Given the mindset Basotho have now, it is recommended that cultural days be observed in schools, including multi-racial ones, and that the media assist in transmitting this information. Campaigns on the value of traditional dress and what it represents are recommended. The local dressmakers manufacturing Seshoeshoe and other traditional dress need to be educated about the fabric's history, fibre content, and consequent care in order to be in a position to guide the ultimate consumers of the products. Awareness campaigns should be undertaken to convey that culture shifts and that this need not be negative. Cultural exhibitions should also be held, ideally at places that hold cultural heritage. The ministry of sports and culture, together with that of tourism, should pursue a cultural awareness and enrichment vision focused on education as opposed to revenue collection.
Keywords: Basotho, culture, dress, acculturation, influence, cultural heritage, socialization, non-verbal communication, Seshoeshoe
Procedia PDF Downloads 76
536 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and its distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consists of an MP320 neutron generator (Thermo Fisher Scientific, Inc.), three sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographic coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts the data, calculates soil elemental content, and combines these data with the measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the data for a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and operating modes when conducting field surveys.
Soil elemental distribution maps resulting from field surveys will be presented and discussed. These maps were similar to maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity. The maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable to agricultural field mapping of soil elements.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
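The one throughput figure the abstract gives (5.5 hours of acquisition for an 8.5 ha field) implies a simple scanning rate; a quick check of that arithmetic (a sketch only, with hypothetical variable names):

```python
# Rough check of the field-scanning throughput cited in the abstract:
# 5.5 hours of data acquisition for an 8.5 ha carbon distribution map.
hours = 5.5
area_ha = 8.5

rate = area_ha / hours           # hectares surveyed per hour
time_per_ha_min = 60 / rate      # minutes of acquisition per hectare

print(f"{rate:.2f} ha/h, about {time_per_ha_min:.0f} min per hectare")
```

At roughly 1.5 ha/h, a whole-day survey covers on the order of 10-15 ha, which is the scale at which the in-situ method replaces multi-day sample-and-lab campaigns.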
Procedia PDF Downloads 134
535 Internationalization of Higher Education in Malaysia-Rationale for Global Citizens
Authors: Irma Wani Othman
Abstract:
The internationalization of higher education in Malaysia mainly focuses on implementing a strategic, comprehensive and integrated engagement of a range of stakeholders in order to highlight the visibility of Malaysia as a hub of academic excellence. The concept of 'global citizenship' is used as a two-pronged strategy of aggressive marketing by universities, which includes (i) the involvement of academic expatriates in stimulating international activities of higher education and (ii) an increase in international student enrollment capacity for the enculturation of science and the development of a first-class mentality. In this respect, aspirations for a transnational social movement through global citizenship status seek to establish the identity of a university community without borders (borderless universities) - regardless of skin colour - and thus to rationalize and liberalize universal principles of life and the cultural traditions of a nation. The education system, earlier guided by the spirit of nationalism, is now progressing due to globalization, forming a system of higher education that is relevant and driven by the needs of the time. However, debates arose when the involvement of global citizenship was said to threaten university autonomy in determining the direction of academic affairs and the governance of human resources. Stemming from this debate, this study aims to explore the 'global citizenship' experience of academic expatriates and international students in shaping the university's strategic needs and interests, in line with the transition of contemporary higher education. The objective of this study is to examine the acculturation experience of global citizens within the transnational higher-education system and to suggest policies for the internationalization of higher education (IHE) that refer directly to the experience of global citizens.
This study offers a detailed understanding of how university communities assess their expatriation experience, thus providing useful information for learning and transforming education. The findings also open an advanced perspective on the international mobility of human resources and the implications for the implementation of the policy of internationalization of higher education. The contribution of this study is expected to give new input and thus shift the focus of the contextual literature on the internationalization of the education system: away from the income-generating purpose of a university, and toward a greater understanding of the subjective experience of utilizing international human resources, hence contributing to the prominent transnational character of higher education. Keywords: internationalization, global citizens, Malaysia higher education, academic expatriate, international students
Procedia PDF Downloads 313534 Electronic Six-Minute Walk Test (E-6MWT): Less Manpower, Higher Efficiency, and Better Data Management
Authors: C. M. Choi, H. C. Tsang, W. K. Fong, Y. K. Cheng, T. K. Chui, L. Y. Chan, K. W. Lee, C. K. Yuen, P. W. Lau, Y. L. To, K. C. Chow
Abstract:
The six-minute walk test (6MWT) is a sub-maximal exercise test used to assess the aerobic capacity and exercise tolerance of patients with chronic respiratory disease and heart failure. It has been proven to be a reliable and valid tool and is commonly used in clinical situations. The traditional 6MWT is labour-intensive and time-consuming, especially for patients who require assistance with ambulation and oxygen use. When performing the test with these patients, one staff member assists the patient in walking (with or without aids) while another manually records the patient's oxygen saturation, heart rate and walking distance at every minute and/or carries the oxygen cylinder at the same time. The physiotherapist then has to document the test results in the bed notes in detail. With the electronic 6MWT (E-6MWT), patients wear a wireless oximeter that transfers data to a tablet PC via Bluetooth. Oxygen saturation, heart rate, and distance are recorded and displayed in real time; no manual recording is needed. The tablet generates a comprehensive report which can be attached directly to the patient's bed notes for documentation, and the data can also be saved for later patient follow-up. This study was carried out in North District Hospital. Patients who followed commands and required 6MWT assessment were included and assigned to the study or control group. In the study group, patients adopted the E-6MWT, while those in the control group adopted the traditional 6MWT. Manpower and time consumed were recorded, and physiotherapists completed a questionnaire about the use of the E-6MWT. A total of 12 subjects (study=6; control=6) were recruited during 11-12/2017. The average number of staff required and the average time consumed in the traditional 6MWT were 1.67 and 949.33 seconds respectively, while in the E-6MWT the figures were 1.00 and 630.00 seconds. Compared to the traditional 6MWT, the E-6MWT required 67.00% less manpower and 50.10% less time.
Physiotherapists (n=7) found the E-6MWT convenient to use (mean=5.14; satisfied to very satisfied), requiring less manpower and time to complete the test (mean=4.71; rather satisfied to satisfied), and offering better data management (mean=5.86; satisfied to very satisfied), and they recommended it for clinical use (mean=5.29; satisfied to very satisfied). The results show that the E-6MWT requires less manpower input, with higher efficiency and better data management, and is welcomed by frontline clinical staff. Keywords: electronic, physiotherapy, six-minute walk test, 6MWT
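The comparison above can be reduced to simple relative reductions from the reported means. The sketch below uses the conventional (traditional − electronic) / traditional definition; the abstract's stated percentages appear to use a different basis, so this is a check on that definition only, not a restatement of the study's figures:

```python
def relative_reduction(traditional, electronic):
    """Relative reduction, as a fraction of the traditional value."""
    return (traditional - electronic) / traditional

# Reported means: staff required, and time consumed in seconds
staff_saving = relative_reduction(1.67, 1.00)
time_saving = relative_reduction(949.33, 630.00)

print(f"Manpower reduction: {staff_saving:.1%}")  # ~40.1% by this definition
print(f"Time reduction: {time_saving:.1%}")       # ~33.6% by this definition
```

Under this definition the savings are roughly 40% in manpower and 34% in time; the absolute saving of 0.67 staff per test matches the abstract's 67.00% figure if expressed per single staff member.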
Procedia PDF Downloads 154533 Antenatal Monitoring of Pre-Eclampsia in a Low Resource Setting
Authors: Alina Rahim, Joanne Moffatt, Jessica Taylor, Joseph Hartland, Tamer Abdelrazik
Abstract:
Background: In 2011, 15% of maternal deaths in Uganda were due to hypertensive disorders (pre-eclampsia and eclampsia). The majority of these deaths are avoidable with optimum antenatal care. The aim of this study, conducted as part of a 4th-year medical student External Student Selected Component field trip, was to evaluate how antenatal monitoring of pre-eclampsia was carried out in a low-resource setting and to identify barriers to best practice as recommended by the World Health Organisation (WHO). Method: Women admitted to hospital with pre-eclampsia in rural Uganda (Villa Maria and Kitovu Hospitals) over a year-long period were identified using the maternity register and antenatal record book. It was not possible to obtain notes for all cases identified on the maternity register; therefore, a total of thirty sets of notes were reviewed. The management was recorded and compared to the Ugandan National Guidelines and WHO recommendations. Additional qualitative information on routine practice was established by interviewing staff members from the obstetric and midwifery teams. Results: From the records available, all patients in this sample were managed according to WHO recommendations during labour. The rate of Caesarean section as a mode of delivery was notably high in this group of patients: 56% at Villa Maria and 46% at Kitovu. Antenatally, two WHO recommendations were not routinely met: aspirin prophylaxis and calcium supplementation. This was due to lack of resources and poor attendance at antenatal clinic, leading to poor detection of high-risk patients. Medical management of pre-eclampsia varied between individual patients; overall, 93.3% complied with the Ugandan national guidelines. Two patients were treated with diuretics, which is against WHO guidance. Discussion: Antenatal monitoring of pre-eclampsia is important in reducing severe morbidity, long-term disability and mortality amongst mothers and their babies.
Poor attendance at antenatal clinic is a barrier to healthcare in low-income countries, and raising awareness among women of the importance of these visits should be encouraged. The majority of cases reviewed in this sample were treated according to the Ugandan National Guidelines. It is recommended to commence aspirin prophylaxis for women at high risk of developing pre-eclampsia and to create detailed guidelines for Uganda, which would allow for standardisation of care country-wide. Keywords: antenatal monitoring, low resource setting, pre-eclampsia, Uganda
Procedia PDF Downloads 228532 Experimental Evaluation of Foundation Settlement Mitigations in Liquefiable Soils using Press-in Sheet Piling Technique: 1-g Shake Table Tests
Authors: Md. Kausar Alam, Ramin Motamed
Abstract:
The damaging effects of liquefaction-induced ground movements have been frequently observed in past earthquakes, such as the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand and the 2011 Tohoku earthquake in Japan. To reduce the consequences of soil liquefaction at shallow depths, various ground improvement techniques have been utilized in engineering practice; this research focuses on experimentally evaluating the press-in sheet piling technique, which eliminates the vibration, hammering, and noise pollution associated with dynamic sheet pile installation methods. Unfortunately, there are few experimental studies of the press-in sheet piling technique for liquefaction mitigation using 1-g shake table tests in which all the controlling mechanisms of liquefaction-induced foundation settlement, including sand ejecta, can be realistically reproduced. In this study, a series of moderate-scale 1-g shake table experiments was conducted at the University of Nevada, Reno, to evaluate the performance of this technique in liquefiable soil layers. First, a 1/5-size model was developed based on a recent UC San Diego shake table experiment. The scaled model has a relative density of 50% for the top crust, 40% for the intermediate liquefiable layer, and 85% for the bottom dense layer. Second, a shallow foundation was seated atop the unsaturated sandy soil crust. Third, in a series of tests, a sheet pile with variable embedment depth was inserted into the liquefiable soil surrounding the shallow foundation using the press-in technique. The scaled models were subjected to harmonic input motions with amplitude and dominant frequency properly scaled from the large-scale shake table test. This study assesses the performance of the press-in sheet piling technique in terms of reductions in foundation movements (settlement and tilt) and in generated excess pore water pressures.
In addition, this paper discusses the cost-effectiveness and carbon footprint of the studied mitigation measures. Keywords: excess pore water pressure, foundation settlement, press-in sheet pile, soil liquefaction
Procedia PDF Downloads 97531 Cooperation of Unmanned Vehicles for Accomplishing Missions
Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin
Abstract:
The use of unmanned systems for different purposes has become very popular over the past decade, and expectations from these systems have increased in parallel. However, meeting the demands of a task is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using the same type of vehicles together as a swarm especially helps to satisfy the time constraints of a mission effectively; in other words, it allows the workload to be shared by a number of homogeneous platforms. There are also many kinds of problems that require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results, and in this case cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks position information, must perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without getting lost in the unknown labyrinth. Thus, an autonomous air drone is used to assist the ground vehicle and solve the problem cooperatively. The drone also has limited sensors, such as a downward-looking camera and an IMU, and it likewise cannot compute its global position. In this context, the aim is to solve the problem effectively without any additional support or input from outside, benefiting only from the capabilities of the two autonomous vehicles.
To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in coordination. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, showing how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion. Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning
Procedia PDF Downloads 128530 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information about the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented, and standard rest and stress radionuclide doses of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF), and the correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated with dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimates from PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001). The corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT. Keywords: coronary flow reserve, dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin
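The coronary flow reserve defined in the abstract is simply the ratio of stress to rest MBF; a minimal sketch of the calculation, using hypothetical MBF values rather than the study's data:

```python
def coronary_flow_reserve(stress_mbf, rest_mbf):
    """CFR = stress MBF / rest MBF; the ml/min/g units cancel out."""
    if rest_mbf <= 0:
        raise ValueError("rest MBF must be positive")
    return stress_mbf / rest_mbf

# Hypothetical per-territory MBF values in ml/min/g
cfr = coronary_flow_reserve(stress_mbf=2.49, rest_mbf=0.95)
print(round(cfr, 2))
```

A CFR well above 2 is conventionally read as preserved flow reserve, which is why a depressed stress MBF (as in the abnormal group above) drives CFR down.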
Procedia PDF Downloads 151529 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential
Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen
Abstract:
Brain information transmission in the neuronal network occurs in the form of electrical signals. Neurons transmit information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along dendritic trees. In this study, the neurons of 4 different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder with the usual core conductor assumptions, where membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of the equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5, for four different dendritic branches: the input branch (BI), the sister branch (BS), and two cousin branches (BC-1 and BC-2). Thermodynamic analysis of data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twice as much exergy as the cat models, and the exergy loss and entropy generation of the squid were nearly tenfold those of the guinea pig vagal motoneuron model. The analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss and entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class.
Concurrently, the Na⁺ ion load of a single action potential, the metabolic energy utilization, and their thermodynamic aspects were evaluated for the squid giant axon and the mammalian motoneuron models. The energy demand of neurons is supplied in the form of adenosine triphosphate (ATP), and the exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation differed in each model, depending on the variations in ion transport along the channels. Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance
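The exergy destroyed by an irreversible process such as ATP hydrolysis can be related to the entropy generated through the Gouy-Stodola theorem; this is the standard textbook relation, not a formula reproduced from the study itself:

```latex
\dot{X}_{\mathrm{destroyed}} = T_0 \, \dot{S}_{\mathrm{gen}}
```

where \(T_0\) is the temperature of the environment (here, body or ambient temperature) and \(\dot{S}_{\mathrm{gen}}\) is the entropy generation rate, so the exergy-loss and entropy-generation results reported above are proportional quantities rather than independent measurements.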
Procedia PDF Downloads 393528 Structural and Binding Studies of Peptidyl-tRNA Hydrolase from Pseudomonas aeruginosa Provide a Platform for the Structure Based Inhibitor Design against Peptidyl-tRNA Hydrolase
Authors: Sujata Sharma, Avinash Singh, Lovely Gautam, Pradeep Sharma, Mau Sinha, Asha Bhushan, Punit Kaur, Tej P. Singh
Abstract:
Peptidyl-tRNA hydrolase (Pth) is an essential bacterial enzyme that catalyzes the release of free tRNA and peptide moieties from peptidyl-tRNAs during stalling of protein synthesis. In order to design inhibitors of Pth from Pseudomonas aeruginosa (PaPth), we have determined the structures of PaPth in its native state and in its bound states with two compounds, an amino acylate-tRNA analogue (AAtA) and 5-azacytidine (AZAC). The peptidyl-tRNA hydrolase gene from Pseudomonas aeruginosa was amplified by Phusion High-Fidelity DNA Polymerase using forward and reverse primers. The E. coli BL21(λDE3) strain was used for expression of the recombinant peptidyl-tRNA hydrolase, and the protein was purified using a Ni-NTA Superflow column. Crystallization experiments were carried out using the hanging-drop vapour diffusion method. The crystals diffracted to 1.50 Å resolution, and the data were processed using HKL-2000. The polypeptide chain of PaPth consists of 194 amino acid residues, from Met1 to Ala194. The centrally located β-structure is surrounded by α-helices on all sides except the side with the entrance to the substrate-binding site. The structures of the complexes of PaPth with AAtA and AZAC showed the ligands bound in the substrate-binding cleft, interacting extensively with protein atoms. The residues that formed intermolecular hydrogen bonds with the atoms of AAtA included Asn12, His22, Asn70, Gly113, Asn116, Ser148, and Glu161 of the symmetry-related molecule. The amino acids involved in hydrogen-bonded interactions in the case of AZAC included His22, Gly113, Asn116, and Ser148. As indicated by the fitting of the two ligands and the number of interactions they make with protein atoms, AAtA appears to be more compatible with the structure of the substrate-binding cleft. However, there is further scope to improve the stacking of the O-tyrosyl moiety, which is not yet ideally stacked.
These observations have provided information about the mode of binding of the ligands and the nature and number of their interactions with the protein, which may be useful for the design of tight-binding inhibitors of Pth enzymes. Keywords: peptidyl-tRNA hydrolase, Pseudomonas aeruginosa, Pth enzymes, O-tyrosyl
Procedia PDF Downloads 430527 Adaptative Metabolism of Lactic Acid Bacteria during Brewers' Spent Grain Fermentation
Authors: M. Acin-Albiac, P. Filannino, R. Coda, Carlo G. Rizzello, M. Gobbetti, R. Di Cagno
Abstract:
Smart management of large amounts of agro-food by-products has become a matter of major environmental and economic importance worldwide. Brewers' spent grain (BSG), the most abundant by-product generated in the beer-brewing process, is an example of a valuable raw material and a source of health-promoting compounds. To date, the valorization of BSG as a food ingredient has been limited due to its poor technological and sensory properties. Tailored bioprocessing through lactic acid bacteria (LAB) fermentation is a versatile and sustainable means for the exploitation of food-industry by-products. Indigestible carbohydrates (e.g., hemicelluloses and celluloses), a high phenolic content, and above all lignin make BSG a hostile environment for microbial survival; hence, the selection of tailored starters is required for successful fermentation. Our study investigated the metabolic strategies of Leuconostoc pseudomesenteroides and Lactobacillus plantarum strains to exploit BSG as a food ingredient. Two distinctive BSG samples from different breweries (Italian, IT-BSG, and Finnish, FL-BSG) were characterized microbially and chemically. Growth kinetics, organic acid profiles, and the evolution of phenolic profiles during fermentation in two BSG model media were determined. The results were further complemented with gene expression analysis targeting genes involved in the degradation of cellulose and hemicellulose building blocks and in the metabolism of anti-nutritional factors. Overall, the results were LAB-genus dependent, showing distinctive metabolic capabilities. Leuc. pseudomesenteroides DSM 20193 may degrade BSG xylans, while its sucrose metabolism could be further exploited for the production of extracellular polymeric substances (EPS) to enhance the pro-technological properties of BSG. Although the L. plantarum strains may follow the same metabolic strategies during BSG fermentation, the mode of action used to pursue these strategies was strain-dependent. L. plantarum PU1 showed a great preference for β-galactans compared to strain WCFS1, while the preference for arabinose occurred at different metabolic phases. Phenolic compound profiling highlighted a novel metabolic route for lignin metabolism. These findings will improve the understanding of how lactic acid bacteria transform BSG into economically valuable food ingredients. Keywords: brewery by-product valorization, metabolism of plant phenolics, metabolism of lactic acid bacteria, gene expression
Procedia PDF Downloads 129526 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business
Authors: Kritchakhris Na-Wattanaprasert
Abstract:
The objective of this research is to design and develop a prototype of a key performance indicator (KPI) system suitable for warehouse management, based on a case study and user requirements. The prototype was developed for the warehouse of a furniture business, following these steps: identifying the scope of the research and studying related papers; gathering the necessary data and user requirements; developing key performance indicators based on the balanced scorecard; designing the program and database for the key performance indicators; coding the program and setting up the database relationships; and finally testing and debugging each module. The study uses the Balanced Scorecard (BSC) for selecting and grouping the key performance indicators. Microsoft SQL Server 2010 is used to create the system database, and Microsoft Visual C# 2010 is chosen as the graphical user interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal process perspective, and learning and growth perspective. Each menu consists of key performance indicator forms, and each form contains a data import section, a data input section, a data search and edit section, and a report section. The system generates five main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report, and benchmarking graph report, with the user selecting the report conditions and time period. As developed and tested, the system proved to be one way of judging the extent to which warehouse objectives have been achieved; moreover, it helps warehouse functions proceed more efficiently. With appropriate adjustment, the system can also be useful for other industries.
To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future; and the warehouse should periodically review the key performance indicators themselves and set more suitable indicators, in order to increase competitiveness and take advantage of new opportunities. Keywords: key performance indicator, warehouse management, warehouse operation, logistics management
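The abstract's core design (KPIs grouped by balanced-scorecard perspective, each compared against a target for the summary report) can be sketched as follows; the KPI names, perspectives, and values are illustrative assumptions, not taken from the study:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    perspective: str  # financial, customer, internal process, learning & growth
    target: float
    actual: float

    @property
    def achievement(self) -> float:
        """Achievement expressed as a fraction of the target value."""
        return self.actual / self.target

def summary_report(kpis):
    """Group KPI achievements by balanced-scorecard perspective."""
    report = {}
    for kpi in kpis:
        report.setdefault(kpi.perspective, []).append(
            (kpi.name, round(kpi.achievement, 2))
        )
    return report

# Hypothetical warehouse KPIs
kpis = [
    KPI("Inventory accuracy", "internal process", target=0.98, actual=0.95),
    KPI("Order fill rate", "customer", target=0.95, actual=0.97),
]
print(summary_report(kpis))
```

Periodically revising the `target` values, as the abstract recommends, then only requires updating the stored targets rather than changing the report logic.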
Procedia PDF Downloads 431525 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India
Authors: Disha Bhanot, Vinish Kathuria
Abstract:
This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticultural crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer's perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (expectations typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of the produce, leading to substantial income loss; and with increasing input costs of farming, the high variability in harvest prices severely affects farmers' profit margins and thereby their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, and the role of middlemen in selling, besides other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale is then modelled as a function of farm, household and institutional characteristics.
A Heckman two-stage model will be applied to find the probability of a farmer falling into distress sale, as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. The findings of the study will recommend suitable interventions and promote strategies that would help farmers better manage price uncertainties, avoid distress sale and increase profit margins, with direct implications for poverty. Keywords: distress sale, horticulture, income loss, India, price uncertainty
Procedia PDF Downloads 243524 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language
Authors: Tengku Sepora Tengku Mahadi
Abstract:
Where the speed of book writing lags behind the high need for such material in tertiary studies, translation offers a way to restore the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and its complexity. A text that is intricately written, with unique rhetorical devices, a subject-matter foundation and cultural references, will undoubtedly challenge the translator; longer time and greater effort would be the consequence. To understand these text-related challenges, the present paper set out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book is chosen because it has often been used as a textbook or reference in many law courses in the United Kingdom and has seen over thirteen editions; therefore, it can be said to be a worthy subject for studies in law. Another reason is the existence of a ready translation in Malay; reference to this translation enables confirmation, to some extent, of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators prepare themselves better for the task: they can anticipate the research and time that may be needed to produce an effective translation. Another premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language suggested by Michael Halliday as the theoretical framework. The concepts of the context of culture and the context of situation, and the measures of field, tenor and mode, form the instruments of analysis. Additional examples from similar materials can also be used to validate the findings.
Some interesting findings include the presence of several other text-types or sub-text-types in the book and its dependence on literary discourse and devices to better capture meanings or to add colour to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods and languages. Also found are parts discussing the origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis of the book's functions and of the linguistic and textual devices used to achieve them can be applied as a guide to determine the effectiveness of the translation produced. Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture
Procedia PDF Downloads 149523 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning
Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga
Abstract:
Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., the Clean Air Act of London and the soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, which produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution: NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by combustion processes at high temperatures (e.g., car engines or thermal power stations); the same holds for industrial plants. What has to be investigated, and is the topic of this paper, is whether or not there really is a correlation between noise pollution and air pollution (considering NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, covering both weekdays and weekend days, so that changes in the situation over the week can be seen.
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions for the area under analysis, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, the noise and pollution mapping, and the analysis.
Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter
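The conversion to a common 0-100% scale and the subsequent correlation check can be sketched as follows. This is a minimal illustration only: the reference limits (70 dB for noise, 200 µg/m³ for NO₂) and the sample readings are hypothetical placeholders, not values from the study.

```python
# Sketch of the percentage-scale comparison described above.
# Reference limits and readings are hypothetical placeholders.
NOISE_LIMIT_DB = 70.0      # assumed reference noise level
NO2_LIMIT_UGM3 = 200.0     # assumed reference NO2 concentration

def to_percent(values, limit):
    """Convert raw sensor readings to a 0-100% scale, capped at 100."""
    return [min(100.0, 100.0 * v / limit) for v in values]

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

noise_db = [55.0, 62.0, 68.0, 71.0, 58.0]    # example hourly readings
no2_ugm3 = [80.0, 120.0, 160.0, 180.0, 95.0]

noise_pct = to_percent(noise_db, NOISE_LIMIT_DB)
no2_pct = to_percent(no2_ugm3, NO2_LIMIT_UGM3)
print(f"correlation r = {pearson(noise_pct, no2_pct):.2f}")
```

Once both series live on the same percentage scale, each coupled station can be plotted and mapped with GIS tools without the units of dB and µg/m³ getting in the way.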
Procedia PDF Downloads 212
522 The Mapping of Pastoral Area as an Ecological Basis for Beef Cattle in Pinrang Regency, South Sulawesi, Indonesia
Authors: Jasmal A. Syamsu, Muhammad Yusuf, Hikmah M. Ali, Mawardi A. Asja, Zulkharnaim
Abstract:
This study was conducted to identify and map pasture as an ecological base for beef cattle. A survey was carried out from April to June 2016 in Suppa, Mattirobulu, in the district of Pinrang, South Sulawesi province. The mapping of the grazing area was conducted in several stages: inputting and tracking of data points in Google Earth Pro (version 7.1.4.1529); affirmation and confirmation of the tracking lines visualized by satellite, with various records at each point; input of the point and tracking data into ArcMap (ArcGIS version 10.1); processing of DEM/SRTM data (S04E119) with respect to the location of the grazing areas; creation of a contour map (5 m interval); and production of slope and land cover maps. Analysis of land cover, particularly the state of the vegetation, was done through the NDVI (Normalized Difference Vegetation Index) procedure, making use of Landsat-8 imagery. The results showed that the topography of the grazing areas consists of hills with some sloping and flat surfaces, with elevations varying from 74 to 145 m above sea level (asl), while the requirement for growing superior grass and legume species is an altitude of up to 143-159 m asl. Slope varied between 0 and >40% and was dominated by slopes of 0-15%, consistent with the recommended maximum pasture slope of 15%. The NDVI values obtained from the image analysis of the pasture ranged between 0.1 and 0.27. The vegetation cover of the pasture land thus falls into the low vegetation density category; 70% of the land was available for cattle grazing, while the remaining approximately 30% was grove and forest, including water plants, where the cattle shelter during the heat and find drinking water. Seven Gramineae species and five legume species were dominant in the region.
Proportionally, the Gramineae class dominated with up to 75.6% of cover, legume crops accounted for up to 22.1%, and the remaining 2.3% consisted of other trees growing in the region. The dominant weed species in the region were Chromolaena odorata and Lantana camara; in addition, there were six ground-cover species not usable as forage.
Keywords: pastoral, ecology, mapping, beef cattle
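The NDVI procedure used for the land cover analysis can be sketched numerically. NDVI = (NIR − Red) / (NIR + Red); for Landsat-8 OLI the red and near-infrared channels are bands 4 and 5. The small reflectance arrays below are illustrative values, not data from the study, and the 0.1-0.27 density threshold is taken from the abstract's reported range.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), with Landsat-8 OLI band 5 (NIR)
# and band 4 (red). The tiny reflectance arrays are illustrative only.
red = np.array([[0.10, 0.12], [0.09, 0.20]])   # band 4 surface reflectance
nir = np.array([[0.14, 0.15], [0.12, 0.22]])   # band 5 surface reflectance

ndvi = (nir - red) / (nir + red)

# Flag pixels falling in the 0.1-0.27 range the abstract reports,
# which it classifies as low vegetation density.
low_density = (ndvi >= 0.1) & (ndvi <= 0.27)
print(ndvi.round(3))
print("fraction in low-density class:", low_density.mean())
```

NDVI near 0 indicates bare soil or sparse cover, while dense healthy vegetation typically pushes values above 0.5, which is why the 0.1-0.27 range reported for this pasture lands in the low-density category.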
Procedia PDF Downloads 353
521 Stimulating Effects of Media in Improving Quality of Distance Education: A Literature-Based Study
Authors: Tahzeeb Mahreen
Abstract:
Distance education refers to instruction in which students are remote from the institution and only occasionally attend formal classes and teaching sessions. Media such as radio, television, computers and the Internet are the resources and means of communication used in the learning materials of many open and distance learning institutions. Media play a great part in maximizing learning opportunities, thus making distance education a means of increasing the literacy rate of the country. This study analyzes how media have affected distance education through their different mediums. The objectives of the study were (i) to determine the direct impact of media on distance education; (ii) to know how media affect distance education pedagogy; and (iii) to find out how media work to increase students' achievement. A literature-based methodology was used: books, peer-reviewed articles, press reports and internet-based materials were studied. Using descriptive qualitative analysis, the researcher found that distance education programs are progressively utilizing mixes of media to deliver training, with a positive impact on learning along with a few challenges. In addition, the researcher's assessment varied depending on the distance learning program, but electronic media were generally found to be comparatively more supportive in enhancing the overall performance of learners. It was concluded that intellectual style, personality traits, and self-expectations are the three areas of a student's educational life most enhanced by distance education programs. It was argued that an understanding of how individual learners approach learning may make it possible for the distance educator to recognize patterns of learning styles and to arrange or modify course presentations through media accordingly.
Moreover, it is noted that teaching in distance education must address the evolving role of the instructor, the need to reduce resistance as conventional teachers adopt distance delivery systems and, lastly, staff attitudes toward the use of technology. Furthermore, the results showed that media have played their part in making distance learning educators more dynamic, capable and engaged in their individual work. The study also indicated a highly positive relationship between the media available at study centers and the media used in distance education. The challenge pointed out by the researcher was the clash of distance and time with communication, as the life situations of learners vary widely. Recommendations included that distance learning instructors should help students understand the effective use of media for their study lessons, and that online learning communities should be developed to stay in instant contact with students.
Keywords: distance education, education, media, teaching and learning
Procedia PDF Downloads 141
520 Delhi Metro: A Race towards Zero Emission
Authors: Pramit Garg, Vikas Kumar
Abstract:
In December 2015, all the members of the United Nations Framework Convention on Climate Change (UNFCCC) unanimously adopted the historic Paris Agreement. Under the convention, 197 countries have followed the guidelines of the agreement and have agreed to reduce the use of fossil fuels and to cut carbon emissions so as to reach net carbon neutrality by 2050 and to limit the rise in global temperature to 2°C by the year 2100. Globally, transport accounts for 23% of the energy-related CO2 that feeds global warming. Decarbonization of the transport sector is an essential step towards achieving India's nationally determined contributions and net zero emissions by 2050. Metro rail systems are playing a vital role in the decarbonization of the transport sector, as they create metro cities for the "21st-century world" that can ensure "mobility, connectivity, productivity, safety and sustainability" for the populace. Metro rail was introduced in Delhi in 2002 to decarbonize the Delhi-National Capital Region and to provide a sustainable mode of public transportation. Metro rail projects significantly contribute to pollution reduction and are thus a prerequisite for sustainable development. The Delhi Metro is the first metro system in the world to earn carbon credits from Clean Development Mechanism (CDM) projects registered under the UNFCCC. A good metro project with reasonable network coverage attracts a modal shift from various private modes, putting fewer vehicles on the road and thus restraining pollution at the source. The absence of greenhouse gas emissions from the vehicles of modal-shift passengers, together with lower emissions from decongested roads, contributes to the reduction of greenhouse gas emissions and hence to an overall reduction in atmospheric pollution. The reduction in emissions over the horizon years 2002 to 2019 has been estimated using emission standards and deterioration factor(s) for different categories of vehicles.
Presently, our results indicate that the Delhi Metro system has reduced motorized road trips by approximately 17.3%, resulting in a significant emission reduction. Overall, Delhi Metro, with an immediate catchment area covering 17% of the National Capital Territory of Delhi (NCTD), today helps avoid 387 tonnes of emissions per day, or 141.2 ktonnes of emissions yearly. The findings indicate that the metro rail system is driving cities towards a more livable environment.
Keywords: Delhi metro, GHG emission, sustainable public transport, urban transport
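The estimation approach described above (emission standards plus deterioration factors per vehicle category) can be sketched as: avoided emissions = shifted trips × trip length × per-km emission factor × deterioration factor, summed over categories. All figures below are hypothetical placeholders for illustration, not numbers from the study.

```python
# Sketch of an avoided-emissions estimate per vehicle category.
# avoided = shifted trips * trip km * emission factor * deterioration.
# All figures are hypothetical placeholders, not from the study.
CATEGORIES = {
    # category: (shifted trips/day, avg trip km, g CO2/km, deterioration)
    "car":         (200_000, 12.0, 180.0, 1.10),
    "two_wheeler": (350_000,  8.0,  45.0, 1.20),
    "bus":         ( 50_000, 10.0, 750.0, 1.15),
}

def avoided_tonnes_per_day(categories):
    """Sum avoided emissions over all categories, converted g -> tonnes."""
    grams = sum(trips * km * ef * det
                for trips, km, ef, det in categories.values())
    return grams / 1e6  # 1 tonne = 1e6 grams

total = avoided_tonnes_per_day(CATEGORIES)
print(f"avoided: {total:.0f} t/day, {total * 365 / 1000:.1f} kt/year")
```

The deterioration factor scales the nominal emission standard upward for older vehicles, which is why the abstract mentions it alongside the emission standards for each category.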
Procedia PDF Downloads 125
519 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations
Authors: Priyanka Bharti
Abstract:
Decades ago, designs were based on common sense and tradition, but with advances in visualization technology and research we are now able to comprehend the cognitive processes involved in decoding visual information. However, many areas of visual research still need intense study to deliver efficient explanations of these events. Visuals are a mode of information representation through images, symbols and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension, and analysis of a situation. They enhance problem-solving capabilities by enabling the processing of more data without overloading the decision maker. Research suggests that visuals offer an improved learning environment by a factor of 400 compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, which enables the user to process the information faster (visuals are reported to be processed 60,000 times faster in the brain than text). Appropriate information visualization and presentation are known to aid and intensify the decision-making process. However, most of the literature discusses the role of visual aids in comprehension and decision making during normal conditions alone. Unlike emergencies, in a normal situation (e.g., our day-to-day life) users are neither exposed to stringent time constraints nor face the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, possibly fatal, real-life situation which may inflict serious harm on both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario, in the absence or near-absence of any preparation, and still take swift and appropriate decisions to save lives or possessions.
But the resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in the evaluation of alternative options, and result in task shedding. Limited time, uncertainty, high stakes and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, naturalistic decision making has been studied in far more depth for experts than for ordinary users. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension for taking appropriate decisions during an emergency situation.
Keywords: cognition, visual, decision making, graphics, recognition
Procedia PDF Downloads 268
518 Morphological Process of Villi Detachment Assessed by Computer-Assisted 3D Reconstruction of Intestinal Crypt from Serial Ultrathin Sections of Rat Duodenum Mucosa
Authors: Lise P. Labéjof, Ivna Mororó, Raquel G. Bastos, Maria Isabel G. Severo, Arno H. de Oliveira
Abstract:
This work presents an alternative mode of intestinal mucosa renewal that may help to better understand the total loss of villi after irradiation. A morphological method of 3D reconstruction was tested using micrographs of serial sections of rat duodenum. Hundreds of sections of each duodenum specimen were placed on glass slides and examined under a light microscope. Those containing the detachment, approximately a dozen, were chosen for observation under a transmission electron microscope (TEM). Each of these sections was glued onto a block of Epon resin and recut into a hundred 60 nm-thick sections. Ribbons of these ultrathin sections were distributed on a series of copper grids in the same order of appearance as during microtomy. They were then stained with solutions of uranyl and lead salts and observed under a TEM. The sections were photographed, and the electron micrographs showing signs of cell detachment were transferred into two software packages: ImageJ, to align the cellular structures, and Reconstruct, to perform the 3D reconstruction. Epithelial cells were detected that exhibited all the signs of programmed cell death and were localized at the villus-crypt junction. Their nuclei were irregular in shape, with condensed chromatin in clumps. Their cytoplasm was darker than that of neighboring cells and contained many swollen mitochondria. In some places in the sections, intercellular spaces were enlarged by the presence of shrunken cells displaying irregularly shaped plasma membranes, as if the cell interdigitations were pulling away from each other. The three-dimensional reconstruction of the crypts made it possible to observe the gradual loss of intercellular contacts between crypt cells in the longitudinal plane of the duodenal mucosa. In the transverse direction, there was a gradual increase in the intercellular space, as if these cells were moving away from one another.
This observation allows us to assume that the gradual separation of the cells at the villus-crypt junction is the beginning of mucosal detachment. Thus, the shrinking of cells due to apoptosis is the way in which they detach from the mucosa, and progressively the villi do as well. These results are in agreement with our initial hypothesis and demonstrate that the villi become detached from the mucosa at the villus-crypt junction through the programmed cell death process. This type of loss of entire villi helps explain the rapid denudation of the intestinal mucosa in case of irradiation.
Keywords: 3DR, transmission electron microscopy, ionizing radiations, rat small intestine, apoptosis
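The alignment step performed before stacking the serial sections can be sketched as a translation-only registration. The FFT phase-correlation approach below is a generic stand-in for whatever alignment routine was actually used in ImageJ, applied here to a synthetic image rather than real micrographs.

```python
import numpy as np

# Translation-only registration of consecutive serial-section images via
# FFT phase correlation: a generic stand-in for the alignment step done
# before stacking sections for 3D reconstruction.
def register_shift(fixed, moving):
    """Return the (dy, dx) shift that best aligns `moving` onto `fixed`."""
    f = np.fft.fft2(fixed)
    m = np.fft.fft2(moving)
    cross = f * np.conj(m)
    cross /= np.abs(cross) + 1e-12          # normalize -> phase correlation
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts past the half-image point back to negative values.
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return int(dy), int(dx)

# Example: a synthetic "section" and a copy shifted by (3, 5).
rng = np.random.default_rng(0)
section = rng.random((64, 64))
shifted = np.roll(section, (3, 5), axis=(0, 1))
print(register_shift(shifted, section))  # recovers the applied shift
```

Applying the recovered shift to each section in turn brings the whole series into a common frame, after which a tool like Reconstruct can trace structures through the aligned stack.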
Procedia PDF Downloads 378
517 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the task is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We approach the problem with two different methodologies. In the first, we use sampling-based uncertainty propagation with first-order error analysis. In the second, we place emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is designed so that both aleatory and epistemic uncertainties must be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that may be aleatory but for which sufficient data are not available to model it adequately as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each parameter of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, sampling in the sampling-based methodology is not exhaustive. The sampling-based methodology therefore has a high probability of underestimating the output bounds. Consequently, an optimization-based strategy for converting uncertainty described by interval data into a probabilistic framework is necessary; this is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
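Propagating a distributional p-box of type (iii) is commonly done with a double-loop (nested) Monte Carlo scheme: an outer loop samples the epistemic interval for the distribution's parameters, and an inner loop samples the aleatory variable for each outer draw. The sketch below uses a made-up model (y = x², mean in [0, 1], fixed sigma), not the actual challenge problem, purely to illustrate why finite sampling tends to underestimate the output bounds.

```python
import random

# Double-loop Monte Carlo over a distributional p-box: the aleatory
# variable is normal with a known sigma, but its mean is only known to
# lie in an epistemic interval. Model and numbers are illustrative,
# not the NASA Langley challenge problem.
random.seed(42)

MU_INTERVAL = (0.0, 1.0)   # epistemic: unknown mean
SIGMA = 0.5                # aleatory: known standard deviation
N_OUTER, N_INNER = 50, 2000

def model(x):
    return x ** 2

lo, hi = float("inf"), float("-inf")
for _ in range(N_OUTER):                       # epistemic (outer) loop
    mu = random.uniform(*MU_INTERVAL)
    samples = [model(random.gauss(mu, SIGMA))  # aleatory (inner) loop
               for _ in range(N_INNER)]
    mean_y = sum(samples) / N_INNER
    lo, hi = min(lo, mean_y), max(hi, mean_y)

# [lo, hi] bounds the mean response over the epistemic interval. With
# finite outer sampling the extremes of the interval are rarely hit, so
# these bounds tend to be too narrow -- the limitation the abstract
# attributes to the sampling-based approach.
print(f"estimated bounds on E[y]: [{lo:.3f}, {hi:.3f}]")
```

For this toy model the exact bounds on E[y] = mu² + sigma² are [0.25, 1.25]; the sampled bounds fall strictly inside them, which is exactly the underestimation that motivates an optimization-based strategy such as PBO.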
Procedia PDF Downloads 240