Search results for: insulating glass units
75 Development of a Systematic Design for Evaluating Force-on-Force Security Exercises at Nuclear Power Plants
Authors: Seungsik Yu, Minho Kang
Abstract:
As the threat of terrorism against nuclear facilities has grown globally since the attacks of September 11, states are striving to strengthen both physical protection systems and emergency response arrangements. Since 2015, Korea has conducted physical protection security exercises at nuclear facilities. These exercises should be carried out with full cooperation between the operator and the response forces. Performance testing of the physical protection system should include appropriate exercises, for example force-on-force exercises, to determine whether the response forces can provide an effective and timely response to prevent sabotage. Significant deficiencies, and the actions taken to address them, should be reported as stipulated by the competent authority. The IAEA (International Atomic Energy Agency) is also preparing force-on-force exercise programme documents to support the exercises of member states. Currently, the ROK (Republic of Korea) conducts exercises using a force-on-force exercise evaluation system developed domestically for nuclear power plants, and exercise procedures that incorporate this evaluation system need to be established. The purpose of this study is to establish the working procedures of the three major organizations involved in force-on-force exercises at nuclear power plants in the ROK, which conduct exercises using the force-on-force exercise evaluation system. The three major organizations are the licensee, KINAC (Korea Institute of Nuclear Nonproliferation and Control), and the NSSC (Nuclear Safety and Security Commission). Their major activities are as follows. First, the licensee establishes and conducts an exercise plan, and when recommendations are derived from the results of the exercise, it prepares and carries out a force-on-force result report that includes a plan for implementing those recommendations.
Other detailed tasks include consultation with surrounding units acting as the adversary, interviews with exercise participants, support for document evaluation, and self-training to improve familiarity with MILES (Multiple Integrated Laser Engagement System). Second, KINAC prepares a review report on the force-on-force exercise plan established by the licensee, evaluates the exercise using the exercise evaluation system, and prepares a training evaluation report. Its other detailed tasks include MILES training, adversary consultation, management of the exercise evaluation system, and analysis of the exercise evaluation results. Finally, the NSSC decides whether to approve the force-on-force exercise and issues correction requests to the nuclear facility based on the exercise results. The most important part of the ROK's force-on-force exercise system is the analysis performed by KINAC through the exercise evaluation system after the exercise. The analytical method proceeds by first collecting data from the exercise evaluation system and then analyzing the collected data. The exercise application process of the evaluation system, introduced in the ROK in 2016, will be concretely set up, and a system will be established to provide objective and consistent conclusions across exercise sessions. Based on the conclusions drawn, the ultimate goal is to complement the licensee's physical protection system so that it enables an effective and timely response against sabotage or the unauthorized removal of nuclear materials.
Keywords: force-on-force exercise, nuclear power plant, physical protection, sabotage, unauthorized removal
Procedia PDF Downloads 141
74 Compromising Quality of Life in Low-Income Settlements: The Case of Ashrayan Prakalpa, Khulna
Authors: Salma Akter, Md. Kamal Uddin
Abstract:
This study demonstrates how top-down shelter policy and the dwelling environment it produces lead to ‘everyday compromise’ by grassroots residents, measured against subjective (satisfaction) and objective (physical design elements and physical environmental elements) indicators across three levels of settlement: macro (community), meso (neighborhood, or the shelter/built environment), and micro (family). Ashrayan Prakalpa is a resettlement/housing project of the Government of Bangladesh that provides shelter and human resource development activities, such as education, microcredit, and training programmes, to landless, homeless, and rootless people. Despite the integrated nature of the shelter policies (comprising poverty alleviation, employment opportunities, secure tenure, and livelihood training), the ‘quality of life’ achieved at the different levels of the settlements remains questionable. As dwellers of the shelter units (formally termed ‘barracks’ rather than shelters or housing) remain on the receiving end of the government’s resettlement policies, they often engage in spatial-physical and socio-economic negotiation and adopt curious forms of spatial practice that frequently contradict the policy planning. Policy-based shelter thus forces dwellers to compromise persistently with their built environments, both overtly and covertly. Compromising with prescribed designed space and facilities across living places articulates their negotiation with the quality of allocated space, built form, and infrastructure, which in turn manifests as a lower quality of life. The top-down shelter project studied here, Dakshin Chandani Mahal Ashrayan Prakalpa at Dighalia Upazila, is located in the eastern fringe area of Khulna, Bangladesh, and is still in progress, resettling internally displaced and homeless people.
In terms of methodology, this research is primarily exploratory and adopts a case study method; an analytical framework for evaluating quality of life is developed through a deductive approach. Secondary data were obtained from housing policy analysis and a review of the relevant literature, while key informant interviews, focus group discussions, necessary drawings and photographs, and participant observation at the dwelling, neighborhood, and community levels served as the primary data collection methods. Findings reveal that various shortages, inadequacies, and instances of policymaker negligence force residents to compromise with the allocated designed space, physical infrastructure, and economic opportunities at the dwelling, neighborhood, and, above all, community levels. The outcome of this study can thus contribute to a global-level understanding of how ‘quality of life’ is compromised under top-down shelter policy. Locally, in the context of Bangladesh, it can help policymakers and the concerned authorities formulate shelter policies and take initiatives to improve the well-being of the marginalized.
Keywords: Ashrayan Prakalpa, compromise, displaced people, quality of life
Procedia PDF Downloads 151
73 Statistical Models and Time Series Forecasting on Crime Data in Nepal
Authors: Dila Ram Bhandari
Abstract:
Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute problems of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. Massive numbers of crimes are committed every day, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a bone of contention that can create societal disturbance. Old-style crime-solving practices cannot live up to the requirements of the present crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. Unlike Central Asia or the Asia-Pacific region, South Asia lacks a regional coordination mechanism to facilitate criminal intelligence sharing and operational coordination against organized crime, including illicit drug trafficking and money laundering. In recent years there have been numerous conversations about using data mining technology to combat crime and terrorism; for example, the Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this work were to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a seven-year archive of crime statistics, aggregated daily to produce a univariate dataset; a further daily aggregation by incidence type produced a multivariate dataset. Each solution's forecast period lasted seven days.
The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models, and a comparative analysis of all models on a comparable dataset provides a detailed picture of each model's performance and generalizability. The studies demonstrated that, in comparison with the other models, Gated Recurrent Units (GRU) produced more accurate predictions. The crime records for 2005-2019 were collected from Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime; hence, time series analysis using GRUs could be a prospective additional feature in Data Detective.
Keywords: time series analysis, forecasting, ARIMA, machine learning
Procedia PDF Downloads 164
72 Welfare and Sustainability in Beef Cattle Production on Tropical Pasture
Authors: Andre Pastori D'Aurea, Lauriston Bertelli Feranades, Luis Eduardo Ferreira, Leandro Dias Pinto, Fabiana Ayumi Shiozaki
Abstract:
The aim of this study was to improve beef cattle production on tropical pasture without harming the environment. On tropical pastures, cattle's live weight gain is lower than in feedlots, and forage production is seasonal, changing from season to season. Concerned with sustainable livestock production, the Premix Company has therefore developed strategies to improve beef cattle production on tropical pasture and so ensure both welfare and sustainable output. This production system rests on two principles: 1) increasing individual gains through better supplementation, and 2) increasing productivity per unit area through better forage quality, such as corn silage or other conserved forages currently used only in winter, and through natural additives in the diet. The system was applied from June 2017 to May 2018 at the Research Center of the Premix Company, Patrocínio Paulista, São Paulo State, Brazil. The area comprised 9 hectares of Brachiaria brizantha pasture, on which 36 Nellore steers were evaluated for one year; the initial weight was 253 kg. The parameters used were average daily gain and gain per area. These indicated the corrections to be made and helped design future fertilization; in this case, the pasture was fertilized with 30 kg of nitrogen per animal, divided into two applications. The diet was pasture plus a protein-energy supplement (0.4% of live weight). The supplement included the natural additive Fator P® (Premix Company). Fator P® is composed of amino acids (lysine, methionine, and tyrosine at 16,400, 2,980, and 3,000 mg.kg-1, respectively), minerals, probiotics (Saccharomyces cerevisiae, 7 x 10E8 CFU.kg-1), and essential fatty acids (linoleic and oleic acids at 108.9 and 99 g.kg-1, respectively). Due to seasonal changes, in winter the diet was supplemented by increasing the forage on offer with maize silage.
Animals were offered corn silage at 1% of live weight and the protein-energy supplement with the additive Fator P® at 0.4% of live weight. At the end of the period, productivity was calculated by summing the individual gains over the area used. The average daily gain was 693 grams per animal, and 1,005 kg per hectare per year was produced, about eight times the Brazilian national average for beef production. For the project to succeed it is necessary to increase the gain per area, and hence the stocking capacity per area. Pasture management is central to this, because dietary decisions were taken from the quantity and quality of the forage on offer. We therefore recommend using animals in the growth phase, because the response to supplementation is greater in that phase and more animals can be allocated per area. The system's carbon footprint reduces emissions by 61.2 percent compared with the Brazilian average. This beef cattle production system can thus be both efficient and environmentally friendly; moreover, the cattle benefit from their natural environment without competing with, or impacting, human food production.
Keywords: cattle production, environment, pasture, sustainability
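The gain-per-area figure can be sanity-checked from the reported numbers (average daily gain, herd size, area); the 365-day period below is our assumption, which is why the result lands near, rather than exactly on, the reported 1,005 kg/ha/year:

```python
# Back-of-envelope check of the reported productivity (365-day season assumed).
adg_kg = 0.693          # average daily gain per steer, kg/day (reported)
n_steers = 36           # herd size (reported)
area_ha = 9.0           # pasture area, hectares (reported)
days = 365              # evaluation period, assumed here

total_gain_kg = adg_kg * n_steers * days    # herd live-weight gain over the year
gain_per_ha = total_gain_kg / area_ha       # kg of live weight per hectare per year
print(round(gain_per_ha))                   # → 1012, close to the reported 1,005
```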
Procedia PDF Downloads 148
71 Sustainable Urbanism: Model for Social Equity through Sustainable Development
Authors: Ruchira Das
Abstract:
The major metropolises of India are the result of colonial patterns of production, consumption, and sustenance. These cities grew, survived, and were sustained by the whims of colonial power and administrative agendas; they were symbols of power, authority, and administration. Around them, some colonial towns remained small towns in the close vicinity of the major metropolises and functioned as self-sufficient units until tremendous pressure in the metropolises produced peripheral development. After independence, a huge expansion of the judiciary and administrative system resulted in city-oriented employment. A large number of people came to reside within the cities, or within commutable distance of them, and this accelerated the cities' expansion. Since then, budgetary and planning expenditure has brought a new pace to economic activity: investment in the industrial and agricultural sectors generated employment opportunities, which further drove urbanization. After two decades of budgetary and planning activity in India, a new era of metropolitan expansion began. The four major metropolises expanded rapidly towards their suburbs, and the concept of the large metropolitan area developed, with cities becoming the nuclei of suburbs and rural areas. In most cases, such expansion was not favorable to the relationship between a city and its hinterland, owing to the absence of a vision of compact sustainable development. The search for solutions needs to weigh the choices between rural-based and urban-based development initiatives. Policymakers need to focus on the areas that will give the greatest impact, so that the benefits of development initiatives spread significantly to all. There is an assumption that development integrates economic, social, and environmental considerations with equal weighting; the traditional, narrower, and almost exclusively economic criterion of development is thus re-described and expanded.
The social and environmental aspects are as important as the economic aspect in achieving sustainable development. The provision of public and semi-public facilities for citizens is central to development, and it is the administration's responsibility to provide for the basic requirements of its inhabitants. Development should be both industrial and agricultural, to maintain a balance between a city and its hinterland. Policy should therefore shift the emphasis away from economic growth towards sustainable human development. Policymakers should aim at creating environments in which people’s capabilities can be enhanced by effective, dynamic, and adaptable policy. Poverty cannot be eradicated simply by increasing income; improving people's condition requires an expansion of basic human capabilities. In this scenario, the suburbs and rural areas are treated as an environmental burden on the metropolises. A new way of living has to be encouraged in the suburban and rural areas. We tend to segregate agriculture from the city and city life, which leads to over-consumption; this urbanism model instead attempts to let the two co-exist, creating an interesting overlapping of production and consumption networks towards sustainable rurbanism.
Keywords: socio-economic progress, sustainability, social equity, urbanism
Procedia PDF Downloads 306
70 [Keynote Talk]: Surveillance of Food Safety Compliance of Hong Kong Street Food
Authors: Mabel Y. C. Yau, Roy C. F. Lai, Hugo Y. H. Or
Abstract:
This study is a pilot surveillance of hygiene compliance and microbial food safety among both licensed and mobile vendors selling Chinese ready-to-eat snack foods in Hong Kong. The findings also reflect the situation of mobile food vending from trucks: Hong Kong was about to launch its Food Truck Pilot Scheme by the end of 2016 or early 2017, and selling food from a vehicle is technically no different from hawking or vending food on the street. Each type of business bears similar food safety issues and has the same impact on public health, so the present findings exemplify situations that also apply to food trucks. Nine types of Cantonese-style snacks, 32 samples in total, were selected for microbial screening, and a total of 16 vending sites, including supermarkets, street markets, and snack stores, were visited. The study finally focused on a traditional snack, the steamed rice cake with red beans called Put Chai Ko (PCK). PCK is a classical Cantonese pastry sold from push carts on the street; it used to be sold at room temperature and served on bamboo sticks, while some shops sell it freshly steamed. Microbial examinations comprised aerobic counts, yeast and mould, coliforms, Salmonella, and Staphylococcus aureus. Salmonella was not detected in any sample; since PCK contains no beef, poultry, egg, or dairy ingredients, the risk of Salmonella in PCK was relatively low, although other sources of contamination are possible. Coagulase-positive Staphylococcus aureus was found in 6 of the 14 samples sold at room temperature; among these 6 samples, 3 were PCK. One sample had an unacceptable total count of more than 10⁵ colony-forming units, and the rest were only satisfactory.
Observational evaluations were made with checklists covering personal hygiene, premises hygiene, food safety control, food storage, cleaning and sanitization, and waste disposal. The maximum score, for total compliance, was 25; the highest score among vendors was 20. Three stores were below average, and two of these were selling PCK. Most of the non-compliances concerned food processing facilities, sanitization conditions, and waste disposal. In conclusion, although no food poisoning outbreaks occurred during the investigation, food hazard risks existed in these stores, especially among street vendors. Attention is needed to traditional food-selling practices, as food handlers may not have sufficient knowledge to handle food products properly. Food quality varied among supply chains and franchised eateries and shops, and it was commonly observed that proper packaging and storage conditions were not enforced at retail. The same situation may hold across the food business, indicating a need for food safety training in the industry and exposing loopholes in quality control.
Keywords: Cantonese snacks, food safety, microbial, hygiene, street food
Procedia PDF Downloads 302
69 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports
Authors: Jonardan Koner, Avinash Purandare
Abstract:
In today’s globalized world, international business is a key driver of a country's growth, and connecting ports, road networks, and rail networks are strategic areas for sustaining it. India's international business is booming in both exports and imports. Ports play a central part in the growth of international trade, and ensuring competitive ports is of critical importance. India's long coastline is a major asset, as it has allowed the development of a large number of major and minor ports that contribute to the development of maritime trade; the national economic development of India requires a well-functioning seaport system. To assess the comparative strength of Indian ports against similar South-East Asian ports, the study pursues three objectives: (i) to identify the key parameters of an international mega container port; (ii) to compare the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to the users of the ports; and (iii) to measure and compare the growth of the five ports' throughput over time. The study is based on both primary and secondary data. Linear time trend analysis shows the trends in the quantum of exports, imports, and total goods/services handled by the individual ports over the years. Comparative trend analysis covers the cargo traffic handled by the five ports in terms of tonnage (weight) and number of containers (TEUs), and compares containerized with non-containerized cargo traffic across the five selected ports.
The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference on the factor ratings for the five selected ports, consolidated comparative line charts and bar charts of the factor ratings, and the distribution of ratings in frequency terms. A linear regression model is used to forecast the container capacities required at JNPT and Chennai by the year 2030, and multiple regression analysis is carried out to measure the impact of 34 selected explanatory variables on the ‘overall performance of the port’ for each of the five ports. The research outcome is of high significance to the stakeholders of Indian container-handling ports: the Indian ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, the competing ports in the neighbouring region. The study has analysed feedback ratings for 35 selected factors covering physical infrastructure and the services rendered to port users. This feedback provides valuable data for improving the facilities offered to port users, and such improvements would help the ports' users carry out their work more efficiently.
Keywords: throughput, twenty-foot equivalent units (TEUs), cargo traffic, shipping lines, freight forwarders
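The abstract does not report the fitted coefficients; as a hedged sketch of the linear-trend forecasting step, a least-squares line can be fitted to annual throughput and extrapolated to 2030 (the throughput figures below are invented placeholders, not JNPT or Chennai data):

```python
import numpy as np

# Hypothetical annual container throughput in million TEUs (placeholder data).
years = np.array([2014, 2015, 2016, 2017, 2018, 2019], dtype=float)
teus = np.array([4.45, 4.49, 4.50, 4.83, 5.05, 5.10])

# Ordinary least squares fit: teus ≈ slope * year + intercept.
slope, intercept = np.polyfit(years, teus, deg=1)

# Extrapolate the trend line to the target year.
forecast_2030 = slope * 2030 + intercept
print(f"trend: {slope:.3f} M TEU/year; 2030 forecast: {forecast_2030:.2f} M TEU")
```

A capacity forecast like this only assumes the historical trend continues; the study's multiple regression step then explains port performance with the 34 explanatory variables rather than with time alone.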
Procedia PDF Downloads 131
68 Measuring Urban Sprawl in the Western Cape Province, South Africa: An Urban Sprawl Index for Comparative Purposes
Authors: Anele Horn, Amanda Van Eeden
Abstract:
The emphasis on the challenges posed by continued urbanisation, especially in developing countries, has meant that urban sprawl is often researched and analysed in metropolitan areas, but rarely in small and medium towns. Consequently, no instrument exists for comparing the proportional extent of urban sprawl in metropolitan areas against that of small and medium towns. This research proposes an Urban Sprawl Index as a tool for comparatively analysing the extent of urban sprawl between cities and towns of different sizes. The index can also be used over the longer term by authorities developing spatial policy to track the success or failure of specific tools intended to curb urban sprawl. In South Africa, as elsewhere in the world, the last two decades witnessed a proliferation of legislation and spatial policies to limit urban sprawl and contain the physical expansion of urban areas, but measuring the success or failure of these instruments has remained largely unattainable, chiefly because an appropriate measure of proportionate comparison was absent. As a result of the spatial political history of Apartheid, urban areas acquired a form characterised by single-core cities with far-reaching peripheral development, whether as affluent suburbs or as the product of post-Apartheid programmes such as the Reconstruction and Development Programme (1995), which, in an attempt to ease the immediate housing shortage, favoured single-dwelling residential units for low-income communities on single plots of affordable land at the urban periphery. This invariably contributed to urban sprawl, and although the programme has since been abandoned, the trend towards low-density residential development continues.
The research area is the Western Cape Province of South Africa, which exhibits all of the spatial challenges described above. In academia and the popular media, the City of Cape Town (the only metropolitan authority in the province) has received the lion’s share of critique on urban development and spatial planning; the smaller towns and cities of the Western Cape have received much less public attention and were spared the naming and shaming of being unsustainable urban areas in terms of land consumption and physical expansion. The Urban Sprawl Index for the Western Cape (USIWC) put forward by this research enables local authorities in the province to measure the extent of urban sprawl proportionately and comparatively against other cities in the province, thereby providing a means of measuring the success of the spatial instruments employed to limit urban expansion and inefficient land consumption. In developing the USIWC, the research used satellite data for the reference years 2001 and 2011 and population growth data extracted from the national census for the same base years.
Keywords: urban sprawl, index, Western Cape, South Africa
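The abstract does not spell out the USIWC formula. A common family of sprawl measures, used for instance in SDG indicator 11.3.1, compares the land consumption rate with the population growth rate over the same period; the sketch below illustrates that family, and both the formula choice and the town figures are our assumptions, not the USIWC itself:

```python
import math

def sprawl_ratio(area_t0, area_t1, pop_t0, pop_t1, years):
    """Land consumption rate divided by population growth rate.

    Values > 1 mean built-up land grows faster than population,
    i.e. density is falling -- the usual signature of sprawl.
    """
    lcr = math.log(area_t1 / area_t0) / years   # annualised land consumption rate
    pgr = math.log(pop_t1 / pop_t0) / years     # annualised population growth rate
    return lcr / pgr

# Illustrative 2001 -> 2011 figures for a hypothetical mid-sized town.
ratio = sprawl_ratio(area_t0=42.0, area_t1=58.0,    # built-up km^2
                     pop_t0=95_000, pop_t1=118_000, years=10)
print(round(ratio, 2))                              # → 1.49: land outpacing people
```

Because both rates are normalised by the same period, the ratio is comparable across a metropolis and a small town, which is the comparative property the USIWC is designed to provide.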
Procedia PDF Downloads 329
67 Sustainable Design Criteria for Beach Resorts to Enhance Physical Activity That Helps Improve Health and Well-being for Adults in Saudi Arabia
Authors: Noorh Albadi, Salha Khayyat
Abstract:
People's moods and well-being are affected by their environment: the built environment impacts one's level of activity and health, and sustainable design strategies have been developed to improve users' physical health through the physical environment. This study aimed to determine whether adult resorts in Saudi Arabia meet standards that ensure physical wellness, and to identify the requirements needed. It will be significant to the Ministry of Tourism, sports authorities, developers, and designers. Physical activity affects human health both physically and mentally. In Saudi Arabia, only 20.04% of males and females older than 15 practiced sports in 2019. There is, moreover, a broad lack of physical activity: 90% of the Kingdom's population spends more than two hours sitting without moving, which puts them at risk of contracting a non-communicable disease. This lack of activity and movement drove the obesity rate among Saudis to 59% in 2020 and consequently could cause chronic diseases or death. The literature generally endorses that leading an active lifestyle improves physical health and benefits mental health. The United Nations has accordingly set 17 Sustainable Development Goals (SDGs) to ensure healthy lives and promote well-being at all ages; one of SDG 3's targets is reducing mortality, which can be achieved by raising physical activity. Many rating systems and strategies support sustainable design, such as the WELL Building Standard, Leadership in Energy and Environmental Design (LEED), active design strategies, and the RIBA Plan of Work. A survey was used to gather qualitative and quantitative information.
The survey was designed on the basis of the active design and WELL Building theories and targeted beach resort visitors, professional and beginner athletes, and non-athletes. It asked about the beach resorts they had visited in the Kingdom and whether these met the criteria of sports resorts and of healthy, active design theories; it also gathered information about the physical activity preferences of Saudi society, in terms of the types of activities young people prefer, where they prefer to engage in them, and under what thermal and lighting conditions. The final section asked about the design of residential units in beach sports resorts. Data were collected from 127 participants. Findings revealed that participants prefer outdoor activities in moderate weather, in sunlight or in the evening with moderate, sufficient lighting, and that no beach sports resorts in the country are constructed to support sustainable design criteria for physical activity. Participants agreed that Saudi society needs several measures that lessen tension at beach resorts and enhance movement and activity. The study recommends designing resorts in Saudi Arabia that meet the sustainable design criteria for physical activity, in order to increase physical activity, achieve its psychological and physical benefits, and avoid the psychological and physical diseases related to inactivity.
Keywords: sustainable design, SDGs, active design strategies, WELL Building, beach resort design
Procedia PDF Downloads 120
66 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, machine learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data is widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision centre (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions that will cause them to require medical attention repeatedly. An OOHC acts as an ad-hoc delivery point for triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. Although a local solution is therefore optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases which, upon initial contact, are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features.
Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To this end, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases. Keywords: artificial neural networks, data mining, machine learning, medical informatics
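The term-importance idea described above — weighting the classifier's confidence on cases containing a lexeme by that lexeme's inverse document frequency — can be sketched as follows. The abstract does not specify the exact weighting, so this is a minimal illustration under assumed definitions; the function name and inputs are hypothetical.

```python
import math
from collections import defaultdict

def lexeme_indicator_scores(cases, confidences):
    """Score each lexeme by the mean classifier confidence of the
    (correctly classified) cases containing it, weighted by the
    lexeme's inverse document frequency.

    cases       -- list of token lists, one per case
    confidences -- classifier confidence per case, same order
    """
    n = len(cases)
    doc_freq = defaultdict(int)
    conf_sum = defaultdict(float)
    for tokens, conf in zip(cases, confidences):
        for lex in set(tokens):        # count each case once per lexeme
            doc_freq[lex] += 1
            conf_sum[lex] += conf
    # A lexeme in every case gets IDF log(1) = 0: ubiquitous terms
    # carry no discriminative signal under this scheme.
    return {lex: (conf_sum[lex] / doc_freq[lex]) * math.log(n / doc_freq[lex])
            for lex in doc_freq}
```

Under this scheme, a rare term found only in confidently classified cases outranks a common term spread across the corpus.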
Procedia PDF Downloads 131
65 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level
Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown
Abstract:
‘Big data’ is a relatively new concept that describes data so large and complex that it exceeds the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smart phone health applications. Health data is viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may have the impact of reducing the capacity of health data to be incorporated into the real time demands of the Big Data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds into initiatives designed to develop and capitalize on big data and methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. 
The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations. Keywords: data integration, data linkage, health planning, health services research
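As an illustration of privacy-preserving record linkage in general — not the Centre's specific implementation, which the abstract does not detail — a widely cited approach encodes a name's character bigrams into a Bloom filter so that two parties can compare encoded records with a Dice coefficient without exchanging the names themselves. The parameters below are illustrative assumptions:

```python
import hashlib

def bloom_encode(name, size=256, num_hashes=2):
    """Encode a name's character bigrams as a set of Bloom filter bit
    positions; similar names share bigrams and hence share bits."""
    padded = f"_{name.lower()}_"
    bigrams = {padded[i:i + 2] for i in range(len(padded) - 1)}
    bits = set()
    for gram in bigrams:
        for k in range(num_hashes):
            digest = hashlib.sha256(f"{k}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % size)
    return bits

def dice_similarity(a, b):
    """Dice coefficient between two bit sets; values near 1.0 suggest
    the underlying names likely match."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))
```

Typographic variants such as “smith” and “smyth” still score highly, which is what makes linkage without cleartext names feasible.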
Procedia PDF Downloads 216
64 Microfluidic Plasmonic Device for the Sensitive Dual LSPR-Thermal Detection of the Cardiac Troponin Biomarker in Laminar Flow
Authors: Andreea Campu, Ilinica Muresan, Simona Cainap, Simion Astilean, Monica Focsan
Abstract:
Acute myocardial infarction (AMI) is the most severe cardiovascular disease and has threatened human lives for decades; thus, continuous interest is directed towards the detection of cardiac biomarkers such as cardiac troponin I (cTnI) in order to predict risk and, implicitly, fulfill the early diagnosis requirements in AMI settings. Microfluidics is a major technology involved in the development of efficient sensing devices with real-time, fast responses and on-site applicability. Microfluidic devices have gathered a lot of attention recently due to their advantageous features, such as high sensitivity and specificity, miniaturization and portability, ease of use, low cost, facile fabrication, and reduced sample manipulation. The integration of gold nanoparticles into the structure of microfluidic sensors has led to the development of highly effective detection systems, considering the unique properties of metallic nanostructures, specifically the Localized Surface Plasmon Resonance (LSPR), which makes them highly sensitive to their microenvironment. In this scientific context, we propose the implementation of a novel detection device, which successfully combines the efficiency of gold bipyramids (AuBPs) as signal transducers and thermal generators with the sample-handling advantages of microfluidic channels into a miniaturized, portable, low-cost, specific, and sensitive test for the dual LSPR-thermographic detection of cTnI. Specifically, AuBPs with a longitudinal LSPR response at 830 nm were chemically synthesized using the seed-mediated growth approach and characterized in terms of optical and morphological properties. Further, the colloidal AuBPs were deposited onto pre-treated silanized glass substrates; thus, a uniform nanoparticle coverage of the substrate was obtained and confirmed by extinction measurements showing a 43 nm blue-shift of the LSPR response as a consequence of the refractive index change.
The as-obtained plasmonic substrate was then integrated into a microfluidic “Y”-shaped polydimethylsiloxane (PDMS) channel, fabricated using a laser cutter system. Both plasmonic and microfluidic elements were plasma-treated in order to achieve a permanent bond. The as-developed microfluidic plasmonic chip was further coupled to an automated syringe pump system. The proposed biosensing protocol involves successive injections inside the microfluidic channel as follows: p-aminothiophenol and glutaraldehyde, to achieve a covalent bond between the metallic surface and the cTnI antibody; anti-cTnI, as a recognition element; and the target cTnI biomarker. The successful functionalization and capture of cTnI were monitored by LSPR detection; thus, after each step, a red-shift of the optical response was recorded. Furthermore, as an innovative detection technique, thermal determinations were made after each injection by exposing the microfluidic plasmonic chip to 785 nm laser excitation, considering that the AuBPs exhibit high light-to-heat conversion performance. By analysis of the thermographic images, thermal curves were obtained, showing a decrease in thermal efficiency after the anti-cTnI-cTnI reaction took place. Thus, we developed a microfluidic plasmonic chip able to operate as both an LSPR and a thermal sensor for the detection of the cardiac troponin I biomarker, contributing to the progress of diagnostic devices. Keywords: gold nanobipyramids, microfluidic device, localized surface plasmon resonance detection, thermographic detection
Procedia PDF Downloads 129
63 Effects of Live Webcast-Assisted Teaching on Physical Assessment Technique Learning of Young Nursing Majors
Authors: Huey-Yeu Yan, Ching-Ying Lee, Hung-Ru Lin
Abstract:
Background: Physical assessment is a vital clinical nursing competence. The gap between conventional teaching methods and the way e-generation students prefer to learn can be bridged with the support of Internet technology, i.e., interacting with online media to manage learning tasks. In the wake of the new learning patterns of e-generation students, nursing instructors are challenged to actively adjust and make teaching contents and methods more versatile. Objective: The objective of this research is to explore the effects of live webcast-assisted teaching and learning on a specific topic, physical assessment technique, in a designated group of young nursing majors. It is hoped that this way of nursing instruction may provide more versatile learning resources to facilitate self-directed learning. Design: This research adopts a cross-sectional descriptive survey. The instructor demonstrated physical assessment techniques and operation procedures via live webcast broadcast online to all students, which increased the out-of-class interaction between teacher and students concerning teaching materials. Methods: Convenience sampling was used to recruit a total of 52 nursing majors at a university. The nursing majors took two-hour classes of physical assessment per week for 18 weeks (36 hrs. in total). Four units of the instruction were covered with live webcasting, after which an online anonymous survey of learning outcomes was conducted by questionnaire. The research instrument was the online questionnaire, covering three major domains: online media use, learning outcome evaluation, and evaluation results. The data analysis was conducted via IBM SPSS Statistics Version 2.0. Descriptive statistics were undertaken to describe the basic data and learning outcomes. Statistical methods such as descriptive statistics, t-test, ANOVA, and Pearson’s correlation were employed in verification. Results: Results indicated the following five major findings.
(1) Learning motivation: about four-fifths of the participants agreed the online instruction resources were very helpful in improving learning motivation and raising learning interest. (2) Learning needs: about four-fifths of participants agreed it was helpful to plan self-directed practice after the instruction and to meet their needs for repetitive learning and/or practice in their leisure time. (3) Learning effectiveness: about two-thirds agreed it was helpful to reduce pre-exam anxiety and improve their test scores. (4) Course objectives: about three-fourths agreed that it was helpful in achieving the goal of ‘executing the complete physical assessment procedures with proper skills’. (5) Finally, learning reflection: nearly all participants agreed this experience of online instructing, learning, and practicing was beneficial to them; they recommend the instructor share it with other nursing majors, and they will recommend it to fellow students too. Conclusions: Live webcasting is a low-cost, convenient, efficient, and interactive resource that facilitates nursing majors’ motivation for learning, need for self-directed learning and practice, and learning outcomes. When live webcasting is integrated into nursing teaching, it provides an opportunity for self-directed learning to promote learning effectiveness and thus fulfill the teaching objective. Keywords: innovative teaching, learning effectiveness, live webcasting, physical assessment technique
Procedia PDF Downloads 132
62 Well-being of Parents of Children with Autism Spectrum Disorder or Developmental Coordination Disorder: Cross-Cultural and Cross-Disorder Comparative Studies
Authors: Léa Chawki, Émilie Cappe
Abstract:
Context: Nowadays, supporting parents of children with autism spectrum disorder (ASD) and helping them adjust to their child’s condition represents a core clinical and scientific necessity and is encouraged by the French National Strategy for Autism (2018). In France, ASD remains a challenging condition, causing distress, segregation, and social stigma to the family members concerned. The literature highlights that neurodevelopmental disorders in children, such as ASD, influence parental well-being. This impact could differ according to the parents’ culture and the child’s particular disorder, such as developmental coordination disorder (DCD), for instance. Objectives: The present study aims to explore parental stress, anxiety and depressive symptoms, and quality of life in parents of children with ASD or DCD, as well as the individual, psychosocial, and cultural factors of parental well-being. Methods: Participants will be recruited through diagnostic centers, specialized child and adolescent units, and organizations representing families with ASD and DCD. Our sample will include five groups of 150 parents: four groups of parents of children with ASD – one living in France, one in the US, one in Canada, and one in Lebanon – and one group of French parents of children with DCD. Self-evaluation measures will be completed directly by parents in order to measure parental stress, anxiety and depressive symptoms, quality of life, coping and emotional regulation strategies, internalized stigma, perceived social support, the severity of the child’s problem behaviors, and motor coordination deficits in children with ASD and DCD. A sociodemographic questionnaire will help collect additional useful data regarding participants and their children.
Individual, semi-structured research interviews will be conducted to complement the quantitative data by further exploring participants’ distinct experiences related to parenting a child with a neurodevelopmental disorder. An interview grid, specially designed for the needs of this study, will strengthen the comparison between the experiences of parents of children with ASD and those of parents of children with DCD. It will also help investigate cultural differences regarding parent support policies in the context of raising a child with ASD. Moreover, the interviews will help clarify the link between certain research variables (behavioral differences between ASD and DCD, family leisure activities, family and children’s extracurricular life, etc.) and parental well-being. Research perspectives: The results of this study will provide a more holistic understanding of the roles of individual, psychosocial, and cultural variables related to parental well-being. Thus, this study will help direct the implementation of support services offered to families of children with neurodevelopmental disorders (ASD and DCD). Also, the implications of this study are essential in order to guide families through changes related to public policies assisting neurodevelopmental disorders and other disabilities. The between-group comparison (ASD and DCD) is also expected to help clarify the origins of the different challenges encountered by these families. Hence, it will be interesting to investigate whether complications perceived by parents are more likely to arise from child symptom severity or from a lack of support from health and educational systems. Keywords: autism spectrum disorder, cross-cultural, cross-disorder, developmental coordination disorder, well-being
Procedia PDF Downloads 100
61 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand
Authors: Wen Liu, Errol Haarhoff, Lee Beattie
Abstract:
In 2010, New Zealand’s central government reorganised local government arrangements in Auckland, New Zealand by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years. This is expressed in the first ever spatial plan in the region – the Auckland Plan (2012). The Auckland Plan supports implementing a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention that Auckland become a globally competitive city and achieve ‘the most liveable city in the world’. Turning that vision into reality is operationalized through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out: intensification. The ‘gap’ between strategic planning and what has been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound influence, yet has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up the strategies to deliver intensification across diversified arenas.
Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; building a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery is beholden to the Unitary Plan. However, questions have been asked as to whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher-density development. It explores the process of plan development and the plan making and implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and considers whether this will facilitate decision-making processes to realize the anticipated intensive urban development. Keywords: urban intensification, sustainable development, plan making, governance and implementation
Procedia PDF Downloads 556
60 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. 
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest area under the curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection. Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
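The intensity pipeline described above — clipping to the (−1000, 400) HU window, then normalization and zero-centering — can be sketched on a flat list of voxel values as follows. This is a minimal illustration of the stated steps, not the authors' code; the function name and the exact scaling convention (min-max to [0, 1] before centering) are assumptions.

```python
def preprocess_hu(voxels, hu_min=-1000, hu_max=400):
    """Clip CT intensities to the stated HU window, scale to [0, 1],
    then subtract the mean (zero-centering)."""
    clipped = [min(max(v, hu_min), hu_max) for v in voxels]       # clip to window
    scaled = [(v - hu_min) / (hu_max - hu_min) for v in clipped]  # normalize to [0, 1]
    mean = sum(scaled) / len(scaled)
    return [v - mean for v in scaled]                             # zero-center
```

In practice this would run on a 128 × 128 × 60 array rather than a list, but the per-voxel arithmetic is the same.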
Procedia PDF Downloads 88
59 The End Justifies the Means: Using Programmed Mastery Drill to Teach Spoken English to Spanish Youngsters, without Relying on Homework
Authors: Robert Pocklington
Abstract:
Most current language courses expect students to be ‘vocational’, sacrificing their free time in order to learn. However, pupils with a full-time job, or bringing up children, hardly have a spare moment. Others just need the language as a tool or a qualification, as if it were book-keeping or a driving license. Then there are children in unstructured families whose stressful life makes private study almost impossible. And the countless parents whose evenings and weekends have become a nightmare, trying to get the children to do their homework. There are many arguments against homework being a necessity (rather than an optional extra for more ambitious or dedicated students), making a clear case for teaching methods which facilitate full learning of the key content within the classroom. A methodology which could be described as Programmed Mastery Learning has been used at Fluency Language Academy (Spain) since 1992, to teach English to over 4000 pupils yearly, with a staff of around 100 teachers, barely requiring homework. The course is structured according to the tenets of Programmed Learning: small manageable teaching steps, immediate feedback, and constant successful activity. For the Mastery component (not stopping until everyone has learned), the memorisation and practice are entrusted to flashcard-based drilling in the classroom, leading all students to progress together and develop a permanently growing knowledge base. Vocabulary and expressions are memorised using flashcards as stimuli, obliging the brain to constantly recover words from the long-term memory and converting them into reflex knowledge, before they are deployed in sentence building. The use of grammar rules is practised with ‘cue’ flashcards: the brain refers consciously to the grammar rule each time it produces a phrase until it comes easily. This automation of lexicon and correct grammar use greatly facilitates all other language and conversational activities. 
The full B2 course consists of 48 units, each of which takes a class an average of 17.5 hours to complete, allowing the vast majority of students to reach B2 level in 840 class hours, which is corroborated by an 85% pass rate in the Cambridge University B2 exam (First Certificate). In the past, studying for qualifications was just one of many different options open to young people. Nowadays, youngsters need to stay at school and obtain qualifications in order to get any kind of job. There are many students in our classes who have little intrinsic interest in what they are studying; they just need the certificate. In these circumstances, and with increasing government pressure to minimise failure, teachers can no longer think ‘If they don’t study, and fail, it’s their problem’. It is now becoming the teacher’s problem. Teachers are ever more in need of methods which make their pupils successful learners; this means assuring learning in the classroom. Furthermore, homework is arguably the main divider between successful middle-class schoolchildren and failing working-class children who drop out: if everything important is learned at school, the latter will have a much better chance, favouring inclusiveness in the language classroom. Keywords: flashcard drilling, fluency method, mastery learning, programmed learning, teaching English as a foreign language
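The mastery component — small steps, immediate feedback, and not stopping until everything is learned — can be sketched as a drill loop in which a missed flashcard simply returns to the end of the queue until it is answered correctly. This is an illustrative model of the principle, not the academy's actual procedure; the names are hypothetical.

```python
from collections import deque

def mastery_drill(cards, answer_fn):
    """Drill flashcards until every one is answered correctly.

    cards     -- iterable of flashcard prompts
    answer_fn -- callable returning True if the card was answered correctly
    Returns the total number of attempts taken to master the set.
    """
    queue = deque(cards)
    attempts = 0
    while queue:
        card = queue.popleft()
        attempts += 1
        if answer_fn(card):
            continue           # mastered: drop from rotation
        queue.append(card)     # missed: immediate feedback, recycle it
    return attempts
```

The attempt count over the card count gives a rough measure of how much repetition a given learner needed before mastery.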
Procedia PDF Downloads 110
58 Crustal Scale Seismic Surveys in Search of Gawler Craton Iron Oxide Cu-Au (IOCG) under Very Deep Cover
Authors: E. O. Okan, A. Kepic, P. Williams
Abstract:
Iron oxide copper gold (IOCG) deposits constitute important sources of copper and gold in Australia, especially since the discovery of the supergiant Olympic Dam deposit in 1975. They are considered to be metasomatic expressions of large crustal-scale alteration events occasioned by intrusive actions and are in most cases associated with felsic igneous rocks, commonly potassic igneous magmatism, with the deposits ranging from ~2.2–1.5 Ga in age. For the past two decades, geological, geochemical, and potential-field methods have been used to identify the structures hosting these deposits, followed up by drilling. Though these methods have largely been successful for shallow targets, at greater depths their low resolution limits them to mapping only very large to gigantic deposits with sufficient contrast. As the search for ore bodies under regolith cover continues due to depletion of near-surface deposits, there is a compelling need for new exploration technology to explore deep-seated ore bodies within 1-4 km, the current mining depth range. The seismic reflection method represents this new technology, as it offers a distinct advantage over all other geophysical techniques because of its great depth of penetration and superior spatial resolution maintained with depth. Further, in many different geological scenarios, it offers a greater ‘3D mapability’ of units within the stratigraphic boundary. Despite these superior attributes, crustal-scale seismic surveys have not been proposed because there has been no compelling economic argument to proceed with such work. For the seismic reflection method to be used at these scales (hundreds to thousands of square kilometres covered), the technical risks or the survey costs have to be reduced.
In addition, as most IOCG deposits have a large footprint due to their association with intrusions and large fault zones, we hypothesized that these deposits can be found mainly by looking for the seismic signatures of intrusions along prospective structures. In this study, we present two such cases: the Olympic Dam and Vulcan iron-oxide copper-gold (IOCG) deposits, both located in the Gawler Craton, South Australia. Results from our 2D modelling experiments revealed that seismic reflection surveying using 20 m geophone and 40 m shot spacing is a feasible exploration tool for locating IOCG deposits even when they are hosted in very complex structures. The migrated sections were not only able to identify and trace various layers and the complex structures but also showed reflections around the edges of intrusive packages. The presence of such intrusions was clearly detected over the 100 m to 1000 m depth range without loss of resolution. The modelled seismic images match the available real seismic data and have the hypothesized characteristics; thus, the seismic method seems to be a valid exploration tool for finding IOCG deposits. We therefore propose that 2D seismic surveying is viable for IOCG exploration, as it can detect mineralised intrusive structures along known favourable corridors. This would help in reducing the exploration risk associated with locating undiscovered resources as well as in conducting a life-of-mine study, enabling better development decisions at the very beginning. Keywords: crustal scale, exploration, IOCG deposit, modelling, seismic surveys
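As a back-of-envelope companion to such 2D modelling, the normal-incidence two-way travel time through a simple 1-D stack of layers indicates at what record times reflections from targets in the 100-1000 m depth range would appear. The layer thicknesses and velocities below are illustrative only, not the Gawler Craton velocity model.

```python
def two_way_times(layers):
    """Cumulative normal-incidence two-way travel time (s) to the base
    of each layer in a 1-D stack of (thickness_m, velocity_m_per_s)."""
    t = 0.0
    times = []
    for thickness, velocity in layers:
        t += 2.0 * thickness / velocity  # down and back up through the layer
        times.append(t)
    return times
```

For example, a 1000 m cover at 2000 m/s over a 1000 m unit at 4000 m/s places the two reflectors at 1.0 s and 1.5 s on the record, well within a standard recording window.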
Procedia PDF Downloads 325
57 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.) and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into mechanisms for reducing carbon emissions. In sub-Saharan Africa in particular, the constraint lies in the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies “what is where?”, a prerequisite to the quantification of carbon stock in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) that are to be used in carbon estimation. Satellite data are getting more and more accessible, and the archives are growing exponentially. To retrieve useful information to support decision-making out of this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels, used for efficient access and analysis.
A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing the different agroforestry systems, together with qualitative interviews. A multi-temporal supervised image classification will then be done with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics; (ii) characteristics of the different systems (main species, management, area, etc.); and (iii) an assessment report of the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
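The unsupervised stratification step described above can be sketched as follows. This is an illustrative sketch only, with synthetic data in place of a data-cube extract; the function name, cluster settings, and array shapes are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def stratify_ndvi(ndvi_stack, n_strata=15, seed=0):
    """Cluster per-pixel NDVI time series into vegetation strata.

    ndvi_stack: array of shape (n_pixels, n_timesteps), one NDVI time
    series per pixel (e.g. composites covering 2010-2018).
    Returns an integer stratum label per pixel.
    """
    km = KMeans(n_clusters=n_strata, n_init=10, random_state=seed)
    return km.fit_predict(ndvi_stack)

# Synthetic stand-in for a data-cube extract: 1000 pixels x 108 months
rng = np.random.default_rng(0)
ndvi = rng.uniform(0.1, 0.9, size=(1000, 108))
labels = stratify_ndvi(ndvi)
print(len(np.unique(labels)))  # at most 15 strata
```

In a real workflow, the stratum labels would then be rasterized back onto the grid so that sampling units can be drawn per stratum for the field campaign.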
Procedia PDF Downloads 145
56 The Importance of Fruit Trees for Prescribed Burning in a South American Savanna
Authors: Rodrigo M. Falleiro, Joaquim P. L. Parime, Luciano C. Santos, Rodrigo D. Silva
Abstract:
The Cerrado biome is the most biodiverse savanna on the planet. Located in central Brazil, its preservation is seriously threatened by the advance of intensive agriculture and livestock farming. Conservation Units and Indigenous Lands are increasingly isolated and subject to mega wildfires. Among the characteristics of this savanna, we highlight the high rate of primary biomass production and the reduced occurrence of large grazing animals. In this biome, the predominant fauna is more dependent on the fruits produced by dicotyledonous species than in other tropical savannas. Fire is a key element in the balance between monocotyledons and dicotyledons, or between the arboreal and herbaceous strata. Therefore, applying fire regimes that maintain the balance between these strata without harming fruit production is essential to conservation strategies for the Cerrado's biodiversity. Recently, Integrated Fire Management has begun to be implemented in Brazilian protected areas. As a result, management with prescribed burns has increasingly replaced strategies based on fire exclusion, which in practice had resulted in large wildfires with highly negative impacts on fruit production and fauna. In the Indigenous Lands, these burns were carried out respecting traditional knowledge. The indigenous people showed great concern about the effects of fire on fruit plants and important animals. They recommended that the burns be carried out between April and May, as this would result in a greater production of edible fruits ("fruiting burning"). In other tropical savannas in the southern hemisphere, the preferred period tends to be later, in the middle of the dry season, when the grasses are dormant (June to August). However, in the Cerrado, this late period coincides with the flowering and sprouting of several important fruit species.
To identify the best burning season, the present work evaluated the effects of fire on flowering and fruit production of Byrsonima sp., Mouriri pusa, Caryocar brasiliense, Anacardium occidentale, Pouteria ramiflora, Hancornia speciosa, Byrsonima verbascifolia, Anacardium humile and Talisia subalbens. The evaluations were carried out in the field, covering 31 Indigenous Lands spanning 104,241.18 km², where 3,386 prescribed burns were carried out between 2015 and 2018. The burning periods were divided into early (carried out during the rainy season), modal or "fruiting" (carried out during the transition between seasons) and late (carried out in the middle of the dry season, when the grasses are dormant). The results corroborate the traditional knowledge, demonstrating that modal burns result in higher rates of reproduction and fruit production. Late burns showed intermediate results, followed by early burns. We conclude that management strategies based mainly on forage production, which are usually applied in savannas populated by grazing ungulates, may not be the best management strategy for South American savannas. The effects of fire on fruit plants, which have a particular phenological synchronization with the fauna cycle, also need to be considered during the prescription of burns.
Keywords: cerrado biome, fire regimes, native fruits, prescribed burns
Procedia PDF Downloads 217
55 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stress on the public in coping with a changing climate. A climate index breaks daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30 °C), frost days and their timing, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, snow, and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the nonparametric Sen's slope estimator.
A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6,833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5 °C in the South and 6-7 °C in the North, summers show the weakest warming over the same period, ranging from about 0.5-1.5 °C. New agricultural opportunities exist in central regions, where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20 °C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and cold spells have both doubled to quadrupled during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
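The trend tests named above are straightforward to implement. The following is a minimal sketch of the Mann-Kendall statistic and Sen's slope on a synthetic annual temperature series; the tie correction is omitted for brevity, and the example series is invented, not Alberta data.

```python
import numpy as np

def mann_kendall_sen(x):
    """Mann-Kendall trend statistic and Sen's slope for a yearly series.

    Returns (S, z, slope). S sums the signs of all pairwise differences;
    z is the normal-approximation test statistic (no tie correction);
    the slope is the median of all pairwise slopes.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0
    slopes = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += np.sign(x[j] - x[i])
            slopes.append((x[j] - x[i]) / (j - i))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z, float(np.median(slopes))

# Hypothetical warming series: a steady 0.05 degrees per year, 1951-2017
years = np.arange(1951, 2018)
temps = 0.05 * (years - 1951) + 10.0
s, z, slope = mann_kendall_sen(temps)
print(round(slope, 3))  # 0.05
```

For a monotonically increasing series like this one, every pairwise sign is +1, so S reaches its maximum and z is strongly positive, flagging a significant trend.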
Procedia PDF Downloads 92
54 Neurodiversity in Post Graduate Medical Education: A Rapid Solution to Faculty Development
Authors: Sana Fatima, Paul Sadler, Jon Cooper, David Mendel, Ayesha Jameel
Abstract:
Background: Neurodiversity refers to intrinsic differences between human minds and encompasses dyspraxia, dyslexia, attention deficit hyperactivity disorder, dyscalculia, autism spectrum disorder, and Tourette syndrome. There is increasing recognition of neurodiversity in relation to disability/diversity in medical education and of the associated impact on training, career progression, and personal and professional wellbeing. In addition, documented and anecdotal evidence suggests that medical educators and training providers in all four UK nations are increasingly concerned about understanding neurodiversity and about identifying and providing support for neurodivergent trainees. Summary of Work: A national Neurodiversity Task and Finish group was established to survey Health Education England local office Professional Support teams for insights into infrastructure, training for educators, triggers for assessment, resources, and intervention protocols. This group drew on educational leadership, professional and personal neurodiverse expertise, occupational medicine, employer human resources, and trainees. An online exploratory survey was conducted to gather insights from supervisors and trainers across England using the Professional Support Units' platform. Summary of Results: The survey highlighted marked heterogeneity in the identification, assessment, and approaches to support and management of neurodivergent trainees, and revealed a 'deficit' approach to neurodiversity. It also demonstrated a paucity of educational and protocol resources for educators and supervisors in supporting neurodivergent trainees. Discussion and Conclusions: In phase one, we focused on faculty development. An educational repository for all those supervising trainees was formalised using a thematic approach. This was guided by our survey findings specific to neurodiversity and took a triple 'A' approach: awareness, assessment, and action.
This is further supported by video material incorporating stories from training, as well as mobile workshops for trainers for more immersive learning. A subtle theme from both the survey and the Task and Finish group suggested a move away from deficit-focused methods toward a positive, holistic, interdisciplinary approach within a biopsychosocial framework. Contributions: 1. Faculty knowledge and a basic understanding of neurodiversity are key to supporting trainees with known or underlying neurodivergent conditions. This is further complicated by challenges around non-disclosure, varied presentations, stigma, and intersectionality. 2. There is national (and international) inconsistency in how trainees are managed once a neurodivergent condition is suspected or diagnosed. 3. A carefully constituted and focussed Task and Finish group can rapidly identify national inconsistencies in neurodiversity support and implement rapid educational interventions. 4. Nuanced findings from surveys and discussion can reframe the approach to neurodiversity: from a medical model to a more comprehensive, asset-based, biopsychosocial model of support, fostering a cultural shift that accepts 'diversity' in all its manifestations, visible and hidden.
Keywords: neurodiversity, professional support, human considerations, workplace wellbeing
Procedia PDF Downloads 91
53 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving into the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices.
For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud directly via WiFi. The latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) what specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline, which digests the data and produces insights. We show the tools we use for data ingestion, streaming data processing, and machine learning model training, as well as the tool that coordinates/schedules the different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an aerospace example with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
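The data labeling and group-aware evaluation points above can be sketched as follows. This is an illustrative sketch on synthetic run-to-failure data, not the paper's pipeline: the 30-cycle failure horizon, the engine count, and the degradation signal are all invented assumptions. The key detail it demonstrates is splitting by engine rather than by row, since records from one engine are inter-dependent.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupShuffleSplit

def label_failure_window(cycles, max_cycle, horizon=30):
    """Label a cycle 1 if it falls within `horizon` cycles of failure."""
    return (max_cycle - cycles < horizon).astype(int)

# Synthetic run-to-failure data: 50 engines, a sensor drifting to failure
rng = np.random.default_rng(1)
rows, y, groups = [], [], []
for engine in range(50):
    life = int(rng.integers(120, 200))
    cycles = np.arange(1, life + 1)
    sensor = 0.01 * cycles + rng.normal(0, 0.2, life)  # degradation signal
    rows.append(np.column_stack([cycles, sensor]))
    y.append(label_failure_window(cycles, life))
    groups.append(np.full(life, engine))
X = np.vstack(rows)
y = np.concatenate(y)
groups = np.concatenate(groups)

# Split by engine, not by row, so no engine leaks across train/test
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train, test = next(splitter.split(X, y, groups))
clf = GradientBoostingClassifier().fit(X[train], y[train])
print(round(clf.score(X[test], y[test]), 2))
```

A row-wise random split here would leak near-identical neighbouring cycles of the same engine into both sets and overstate accuracy, which is exactly the inter-dependent-records pitfall the abstract flags.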
Procedia PDF Downloads 386
52 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units
Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz
Abstract:
Sepsis is a syndrome that occurs with physiological and biochemical abnormalities induced by severe infection and carries high mortality and morbidity; therefore, the severity of the patient's condition must be interpreted quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from patients into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques on data from a population that shares a common characteristic could lead to customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5,650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics, and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (stochastic gradient boosting) variable importance methodologies were used to select the set of variables that make up the score. Each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities: if the patient is in the group with the higher mortality, a one is assigned to that variable, otherwise a zero. These binary variables were used in a logistic regression (LR) model, and its coefficients were rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by the corresponding binary variables and summed.
The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated using the 1,695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS) and Simplified Acute Physiology Score II (SAPS II) on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality. It is also observed that the number of events (deaths) indeed increases moving from the decile with the lowest probabilities to the decile with the highest. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting
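The score-construction recipe (dichotomize, fit a logistic regression, round coefficients to integer points, then recalibrate the summed score) can be sketched as below. This uses synthetic binary variables with invented effect sizes, not MIMIC-III data, and is a sketch of the general technique rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
# Synthetic stand-ins for four dichotomized 24-hour ICU variables
# (1 = patient falls in the higher-mortality group for that variable)
X_bin = rng.integers(0, 2, size=(n, 4))

# Hypothetical true effects used only to simulate outcomes
logit = -2.0 + X_bin @ np.array([0.9, 0.6, 1.1, 0.4])
died = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit an LR on the binary variables and round coefficients to integers
lr = LogisticRegression().fit(X_bin, died)
points = np.rint(lr.coef_[0]).astype(int)  # integer point values
score = X_bin @ points                     # summed score per patient

# Recalibrate: one-year mortality probability from the score alone
calib = LogisticRegression().fit(score.reshape(-1, 1), died)
probs = calib.predict_proba(score.reshape(-1, 1))[:, 1]
print(points)
```

Rounding trades a little discrimination for bedside usability; the second regression restores a calibrated probability for each integer score value, mirroring the decile comparison described in the abstract.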
Procedia PDF Downloads 222
51 Stakeholder Perception in the Role of Short-term Accommodations on the Place Brand and Real Estate Development of Urban Areas: A Case Study of Malate, Manila
Authors: Virgilio Angelo Gelera Gener
Abstract:
This study investigates the role of short-term accommodations in the place brand and real estate development of urban areas. It aims to capture the perceptions of the general public, real estate developers, and city- and barangay-level local government units (LGUs) on how these lodgings affect the place brand and land value of a community. It likewise attempts to identify the personal and institutional variables with the greatest influence on said perceptions, in order to provide a better understanding of these establishments and their relevance within urban localities. Based on several sources, Malate, Manila was identified as the ideal study area for the thesis. This prompted the employment of mixed methods research as the study's fundamental data gathering and analytical tool. A survey of 350 locals was conducted, asking questions that would answer the aforementioned queries. Thereafter, a Pearson chi-square test and multinomial logistic regression (MLR) were utilized to determine the variables affecting their perceptions. There were also Focus Group Discussions (FGDs) with the three (3) most populated Malate barangays, as well as Key Informant Interviews (KIIs) with selected city officials and fifteen (15) real estate company representatives. Survey results showed that although a 1992 Department of Tourism (DOT) circular regards short-term accommodations as lodgings mainly for travelers, most people actually use them for private/intimate moments. Because of this, the survey further revealed that short-term accommodations carry a negative place brand among the respondents, though respondents also believe these establishments remain one of society's most important economic players. Statistics from the Pearson chi-square test, on the other hand, indicate that fourteen (14) out of seventeen (17) variables exhibit great influence on respondents' perceptions.
MLR findings, meanwhile, show that being born in Malate and being part of a family household were the most significant variables, regardless of socio-economic level and monthly household income. For the city officials, it was revealed that said lodgings are actually the second-highest earners in the City's lodging industry. It was further stated that the zoning ordinance treats short-term accommodations just like any other lodging enterprise, so it is perfectly legal for these establishments to situate themselves near residential areas and/or institutional structures. A sit-down with barangays, on the other hand, recognized the economic benefits of short-term accommodations but likewise admitted that they contribute a negative place brand to the community. Lastly, real estate developers are amenable to having their projects built near short-term accommodations, for they hold no bad views against them; they explained that their project sites have always been selected based on suitability, liability, and marketability factors only. Overall, these findings merit a recalibration of the zoning ordinance and the DOT circular, as well as the imposition of regulations on sexually suggestive roadside advertisements. Once relevant measures are refined for proper implementation, they can also pave the way for spatial interventions (like visual buffer corridors) to better address the needs of locals, private groups, and government.
Keywords: estate planning, place brand, real estate development, short-term accommodations
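The Pearson chi-square step used on the survey data can be sketched as follows. The cross-tabulation below is entirely hypothetical (invented counts and categories, merely shaped like the study's variables); it only illustrates how association between a respondent attribute and a perception category is tested.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: perception of short-term accommodations
# (negative / neutral / positive) by whether the respondent was born in
# the study area. Counts are invented for illustration.
table = np.array([
    [90, 40, 30],   # born in the study area
    [70, 60, 60],   # born elsewhere
])
chi2, p, dof, expected = chi2_contingency(table)
print(dof)  # (2 - 1) * (3 - 1) = 2
```

A small p-value here would flag the birthplace variable as influencing perception, the kind of screening that precedes the multinomial logistic regression reported in the abstract.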
Procedia PDF Downloads 165
50 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types
Authors: Qianxi Lv, Junying Liang
Abstract:
Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ among cognitive tasks, while consecutive interpreting (CI) does not require sharing processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech, and original speech texts, totalling 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may impose a heavier cognitive demand, or tax cognitive resources differently, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, so as to minimize the respective cognitive load. We interpret the results within the framework that cognitive demand is exerted on both the maintenance and coordination components of Working Memory.
On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity
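Several of the lexical indicators named above reduce to simple frequency counts. The sketch below computes three of them (type-token ratio, hapax ratio, and a top-10 "list head" coverage) on a toy token list; the function name, the head size of 10, and the sample tokens are assumptions for illustration, not the study's exact operationalizations.

```python
from collections import Counter

def lexical_profile(tokens):
    """Simple lexical-simplification indicators for a transcript."""
    counts = Counter(tokens)
    n = len(tokens)
    return {
        # distinct word types per token: lower = more repetitive output
        "type_token_ratio": len(counts) / n,
        # share of tokens occurring exactly once (hapax legomena)
        "hapax_ratio": sum(1 for c in counts.values() if c == 1) / n,
        # share of tokens covered by the 10 most frequent types
        "list_head_coverage": sum(c for _, c in counts.most_common(10)) / n,
    }

# Toy function-word-heavy sample, 15 tokens
sample = "the the of and to in a that it is the of and we so".split()
profile = lexical_profile(sample)
print(sorted(profile))
```

On real data, a lower type-token ratio and higher list head coverage in CI output would be read as lexical simplification, in line with the abstract's finding of densely distributed high-frequency words.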
Procedia PDF Downloads 177
49 Cyber-Victimization among Higher Education Students as Related to Academic and Personal Factors
Authors: T. Heiman, D. Olenik-Shemesh
Abstract:
Over the past decade, with the rapid growth of electronic communication, the internet and, in particular, social networking have become an inseparable part of people's daily lives. Along with its benefits, a new type of online aggression has emerged, defined as cyber-bullying: a form of interpersonal aggressive behavior that takes place through electronic means. Cyber-bullying is characterized by repeated maladaptive use of authority and power over time, using computers and cell phones to send insulting messages and hurtful pictures. Preliminary findings suggest that the prevalence of involvement in cyber-bullying among higher education students varies between 10 and 35%. To date, universities are facing an uphill effort in trying to restrain online misbehavior. As no studies have examined the relationships between cyber-bullying involvement and personal aspects, or its impacts on academic achievement and work functioning, the present study examined the nature of cyber-bullying involvement among 1,052 undergraduate students (mean age = 27.25, SD = 4.81; 66.2% female), their coping with it, as well as the effects of social support, perceived self-efficacy, well-being, and body perception in relation to cyber-victimization. We assume that students in higher education are a vulnerable population at high risk of being cyber-victims. We hypothesize that social support might serve as a protective factor and will moderate the relationships between the socio-emotional variables and the occurrence of cyber-victimization. The findings of this study present the relationships between cyber-victimization and the social-emotional aspects, which constitute risk and protective factors. After receiving approval from the Ethics Committee of the University, a Google Drive questionnaire was sent to a random sample of students studying in the various University study centers.
Students' participation was voluntary, and they completed the five questionnaires anonymously: cyber-bullying, perceived self-efficacy, subjective well-being, social support, and body perception. Results revealed that 11.6% of the students reported being cyber-victims during the last year. Examining the emotional and behavioral reactions to cyber-victimization revealed that female emotional and behavioral reactions were significantly greater than male reactions (p < .001). Moreover, females reported significantly higher social support than males; males reported significantly lower social capability than females; and men's body perception was significantly more positive than women's. No gender differences were observed on the subjective well-being scale. Significant positive correlations were found between cyber-victimization and fewer friends, lower grades, and work ineffectiveness (r = 0.37 to 0.40, p < 0.001). The results of the hierarchical regression indicated that cyber-victimization can be predicted by lower social support, lower body perception, and gender (female), which together explained 5.6% of the variance (R2 = 0.056, F(5,1047) = 12.47, p < 0.001). The findings deepen our understanding of students' involvement in cyber-bullying and present the relationships of the social-emotional and academic aspects to cyber-victimized students. In view of our findings, higher education policy could help facilitate coping with cyber-bullying incidents, and student support units could develop intervention programs aimed at reducing cyber-bullying and its impacts.
Keywords: academic and personal factors, cyber-victimization, social support, higher education
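The hierarchical regression logic reported above (enter predictor blocks in steps and compare explained variance) can be sketched as below. All data here are simulated with invented effect sizes, and the block ordering is an assumption for illustration; only the mechanics of comparing R² across nested models carry over.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(3)
n = 500
social_support = rng.normal(size=n)
body_perception = rng.normal(size=n)
gender = rng.integers(0, 2, n)
# Hypothetical outcome: victimization driven by support and body image
victimization = (-0.3 * social_support - 0.2 * body_perception
                 + rng.normal(size=n))

# Step 1: demographics only; step 2: add the socio-emotional block
r2_step1 = r_squared(gender.reshape(-1, 1), victimization)
r2_step2 = r_squared(
    np.column_stack([gender, social_support, body_perception]),
    victimization)
print(r2_step2 > r2_step1)  # adding the block raises explained variance
```

The increment r2_step2 - r2_step1 is the variance uniquely attributable to the socio-emotional block, analogous to the 5.6% figure reported in the abstract.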
Procedia PDF Downloads 289
48 A Vision-Based Early Warning System to Prevent Elephant-Train Collisions
Authors: Shanaka Gunasekara, Maleen Jayasuriya, Nalin Harischandra, Lilantha Samaranayake, Gamini Dissanayake
Abstract:
One serious facet of the worsening human-elephant conflict (HEC) in nations such as Sri Lanka involves elephant-train collisions. Endangered Asian elephants are maimed or killed in such accidents, which also often result in orphaned or disabled elephants, contributing to the phenomenon of lone elephants. These lone elephants are found to be more likely to attack villages and display aggressive behaviour, which further exacerbates the overall HEC. Furthermore, railway services incur significant financial losses and disruptions to services annually due to such accidents. Most elephant-train collisions occur because of a lack of adequate reaction time, owing to the significant stopping distances of trains: full braking force must be avoided to minimise the risk of derailment. Thus, poor driver visibility at sharp turns, nighttime operation, and poor weather conditions are often contributing factors. Initial investigations also indicate that most collisions occur in localised “hotspots” where elephant pathways/corridors intersect with railway tracks bordering grazing land and watering holes. Taking these factors into consideration, this work proposes leveraging recent developments in Convolutional Neural Network (CNN) technology to detect elephants using an RGB/infrared-capable camera around known hotspots along the railway track. The CNN was trained using a curated dataset of elephants collected on field visits to elephant sanctuaries and wildlife parks in Sri Lanka. With this vision-based detection system at its core, a prototype unit of an early warning system was designed and tested. This weatherised and waterproofed unit consists of a Reolink security camera, which provides a wide field of view and range, an Nvidia Jetson Xavier computing unit, a rechargeable battery, and a solar panel for self-sufficient functioning.
The prototype unit was designed to be a low-cost, low-power and small-footprint device that can be mounted on infrastructure such as poles or trees. If an elephant is detected, an early warning message is communicated to the train driver using the GSM network. A mobile app for this purpose was also designed to ensure that the warning is clearly communicated. A centralized control station manages and communicates all information through the train station network to ensure coordination among important stakeholders. Initial results indicate that detection accuracy is sufficient under varying lighting situations, provided comprehensive training datasets that represent a wide range of challenging conditions are available. The overall hardware prototype was shown to be robust and reliable. We envision that a network of such units could help reduce the problem of elephant-train collisions and has the potential to act as an important surveillance mechanism in dealing with the broader issue of human-elephant conflict.
Keywords: computer vision, deep learning, human-elephant conflict, wildlife early warning technology
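The detect-then-alert cycle described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' code: the CNN inference and GSM modem calls are stubbed out, and the names `run_detector`, `should_alert`, and the 0.6 confidence threshold are hypothetical placeholders, not values reported in the abstract.

```python
# Sketch of a hotspot unit's detect-then-alert loop (hypothetical).
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

CONFIDENCE_THRESHOLD = 0.6  # assumed value; tuned per deployment

def run_detector(frame):
    """Stub for CNN inference on one RGB/IR frame.
    A real unit would run the trained model on the Jetson Xavier here;
    in this sketch a 'frame' is already a list of Detections."""
    return frame

def should_alert(detections):
    """Alert only when an elephant is detected above the threshold."""
    return any(d.label == "elephant" and d.confidence >= CONFIDENCE_THRESHOLD
               for d in detections)

def process_frames(frames):
    """Return indices of frames that would trigger a GSM warning."""
    alerts = []
    for i, frame in enumerate(frames):
        if should_alert(run_detector(frame)):
            alerts.append(i)  # a real unit would send the GSM message here
    return alerts
```

Thresholding per-class confidence before alerting is one simple way to trade off missed detections against false alarms to the driver; the real system would tune this against its field dataset.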
Procedia PDF Downloads 226
47 Sickle Cell Disease: Review of Managements in Pregnancy and the Outcome in Ampang Hospital, Selangor
Authors: Z. Nurzaireena, K. Azalea, T. Azirawaty, S. Jameela, G. Muralitharan
Abstract:
The aim of this study is to review the management practices for sickle cell disease patients during pregnancy, as well as the maternal and neonatal outcomes, at Ampang Hospital, Selangor. The study consisted of a review of pregnant patients with sickle cell disease under follow-up at the Hematology Clinic, Ampang Hospital over the last seven years, to assess their management and maternal-fetal outcomes. The review shows that Ampang Hospital, considered the public hematology centre for sickle cell disease, successfully managed three such pregnancies over the last seven years. The patients’ presentations, management and maternal-fetal outcomes were compared and reviewed for academic improvement. All three patients were seen very early in their pregnancies and were given a regimen of folic acid, antibiotics and thrombo-prophylactic drugs. Close monitoring of maternal and fetal well-being was carried out by the hematologists and obstetricians. Among the patients, there were multiple admissions during pregnancy, for painful sickle cell bone crises, haemolysis following infection, or anemia requiring phenotype-matched blood and exchange transfusions. Broad-spectrum antibiotic coverage during infection, hydration, pain management and venous thromboembolism prophylaxis were mandatory. The pregnancies reached near term in the third trimester, but all required emergency caesarean section for obstetric indications. All pregnancies resulted in live births with good fetal outcomes. Postpartum, all mothers were nursed closely in the high-dependency units to watch for further complications, and all were discharged well. Postpartum follow-up and contraception counseling for future pregnancies were given comprehensively. Sickle cell disease is uncommonly seen in the East, especially in the South East Asian region, yet more cases are being seen in the current decade due to improved medical expertise and advanced medical laboratory technologies. 
Pregnancy itself is a risk factor for sickle cell patients, as increased thromboembolic events and risk of infection can lead to multiple crises, haemolysis, anemia and vaso-occlusive complications, including eclampsia, cerebrovascular accidents and acute bone pain. Patients mostly require multiple blood product transfusions, so phenotype-matched blood is needed to reduce the risk of alloimmunization. Emphasizing the risks and complications in preconception counseling and establishing an ultimate pregnancy plan would probably reduce the risk of morbidity and mortality to the mother and unborn child. Early management of the risks of infection and thromboembolic events, together with adequate hydration, is mandatory. A holistic approach involving multidisciplinary team care between the hematologist, obstetricians, anesthetist and neonatologist, with close nursing care for both mother and baby, would ensure the best outcome. In conclusion, sickle cell disease by itself is a high-risk medical condition, and pregnancy further amplifies the risk. Thus, close monitoring with combined multidisciplinary care, together with counseling and educating the patients, is crucial in achieving a safe outcome.
Keywords: anaemia, haemoglobinopathies, pregnancy, sickle cell disease
Procedia PDF Downloads 258
46 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used ‘black-box’ linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a ‘white-box’ rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, passes over a traction reel and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, i.e. finding the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die2 and (ii) calibration, i.e. finding the input values which result in a given friction on die2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel) and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between the expected and real friction on die2). Modeling the error behavior with explicative rules helps improve the overall process model. 
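The two kernel regressors named above can be fitted in a few lines with scikit-learn. This is a minimal sketch on a toy synthetic dataset, assuming scikit-learn's `KernelRidge` and `SVR` as stand-ins; the hyperparameter values and the toy target function are illustrative assumptions, not the winding-process data or settings.

```python
# Illustrative sketch (not the project's code): fit Kernel Ridge Regression
# and RBF-kernel Support Vector Regression on a toy 3-input dataset.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))                 # three process inputs
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=200)  # toy output

models = {
    "KRR": KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0),   # assumed settings
    "SVR": SVR(kernel="rbf", C=10.0, epsilon=0.01),           # assumed settings
}
errors = {}
for name, model in models.items():
    model.fit(X[:150], y[:150])                 # train split
    pred = model.predict(X[150:])               # held-out split
    errors[name] = float(np.mean(np.abs(pred - y[150:])))  # mean absolute error
```

Comparing held-out mean absolute error like this is one simple way to choose between candidate regressors before the calibration step.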
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the model output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for both the trained models and the calibration process. The learning step is the slowest part of the process (at most five minutes for this data), but it can be done offline just once. The calibration step is much faster and, in under one minute, obtained a precision error of less than 1×10⁻³ for both outputs. To summarize, two processes have been modeled and calibrated in the present work. A fast processing time and high precision have been achieved, and these can be further improved by using heuristics to guide the Gaussian calibration. The error behavior has been modeled to help improve the overall process understanding. This is relevant for the quick, optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements: the Openmind project is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
Keywords: data model, machine learning, industrial winding, calibration
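The Gaussian random-search calibration described above can be sketched with the standard library alone. This is a stand-in for the authors' implementation: `toy_model`, the input statistics, and the sample budget are hypothetical, and the real procedure would call the trained tension or friction model instead.

```python
# Minimal sketch of Gaussian random-search calibration (hypothetical values).
import random

def calibrate(model, input_stats, target, n_samples=5000, seed=0):
    """Sample each input from N(mean, std), run the model, and keep
    the sample whose output is closest to the target.
    input_stats: list of (mean, std) pairs, one per input."""
    rng = random.Random(seed)
    best_inputs, best_err = None, float("inf")
    for _ in range(n_samples):
        xs = [rng.gauss(mu, sd) for mu, sd in input_stats]
        err = abs(model(xs) - target)
        if err < best_err:
            best_inputs, best_err = xs, err
    return best_inputs, best_err

def toy_model(xs):
    """Toy stand-in for the trained process model (two inputs)."""
    return 2.0 * xs[0] + 0.5 * xs[1]

inputs, err = calibrate(toy_model, [(1.0, 0.3), (2.0, 0.5)], target=3.5)
```

Sampling around each input's observed mean and standard deviation keeps the search inside the operating region actually seen in the data, which is presumably why the abstract's heuristic-guided variant can converge quickly.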
Procedia PDF Downloads 241