Search results for: real property
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6670

700 Reassembling a Fragmented Border Landscape at Crossroads: Indigenous Rights, Rural Sustainability, Regional Integration and Post-Colonial Justice in Hong Kong

Authors: Chiu-Yin Leung

Abstract:

This research investigates the complex assemblage of indigenous identities, socio-political organization and national apparatus in the border landscape of post-colonial Hong Kong. The former British colony maintained a transient mode of governance in its New Territories, and particularly in the northernmost borderland, between 1951 and 2012. Under a discriminatory system of land provisions for indigenous villagers, the area retained a distinctive village-based culture, historic monuments and agrarian practices until the return of sovereignty to the People’s Republic of China. Under the latest development imperatives of national strategic planning, the frontier area of Hong Kong has been identified as a strategic site for regional economic integration in South China, with cross-border projects for innovation and technology zones, mega-transport infrastructure and inter-jurisdictional arrangements. Contemporary literature theorizes borders as the material and discursive production of territoriality, which manifests in the state apparatus and the daily lives of citizens and condenses in contested articulations of power, security and citizenship. Drawing on the concept of assemblage, this paper attempts to trace how the border regime and infrastructure of Hong Kong are deeply ingrained not only in the everyday lived spaces of local communities but also in changing urban and regional strategies across different moments in time. Through intensive ethnographic fieldwork in the borderland villages since 2008 and extensive analysis of colonial archives, new development plans and spatial planning frameworks, the author traces the genealogy of the border landscape in the Ta Kwu Ling frontier area and its implications as a milieu for new state space, covering heterogeneous fields including indigenous rights, heritage preservation, rural sustainability and the regional economy. Empirical evidence suggests an apparent bias towards indigenous power and colonial representation in classifying landscape values and conserving historical monuments. Squatters and farm tenants are often deprived of property rights, statutory participation and livelihood options in the planning process. The postcolonial bureaucracies have great difficulty mobilizing resources to keep pace with the swift, politics-first approach of their mainland counterparts. Meanwhile, cultural heritage, lineage networks and memory landscapes are not protected through any holistic view or collaborative effort across the border. The enactment of land resumption and compensation schemes is further disturbed by lineage-based customary law, technocratic bureaucracy, intra-community conflicts and multi-scalar political mobilization. As many traces of colonial misfortune and tyranny have been whitewashed without proper management, the author argues that postcolonial justice is yet to be reconciled in this fragmented border landscape. Mainstream representations of the border assemblage have tended to oversimplify local struggles into a collective mist and to set up a wider production of schizophrenic experiences in the discussion of further economic integration between Hong Kong and other mainland cities in the Pearl River Delta region. The research is expected to shed new light on the theorizing of border regions and postcolonialism beyond Eurocentric perspectives.
In reassembling the borderland experiences with other arrays of state governance, village organization and indigenous identities, the author also suggests an alternative epistemology for reconciling socio-spatial differences and opening up imaginaries for positive interventions.

Keywords: heritage conservation, indigenous communities, post-colonial borderland, regional development, rural sustainability

Procedia PDF Downloads 207
699 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails

Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali

Abstract:

When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and come at the expense of productivity, but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise it would not be possible to forecast components in real time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). Running such a measuring program alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. Over a period of 2 months, all measurement data (> 200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations were demonstrated for almost 100 of the roughly 125 characteristics measured on average across the 4 different products. A further 10 features correlate via indirect relationships, so that the number of features required for a prediction could be reduced to less than 20. A correlation coefficient of > 0.8 was required for all correlations.
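
A minimal sketch of the kind of correlation-based reduction described above is shown below; the greedy keep/drop rule, the 0.8 threshold and the file and column names are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: correlation-based feature reduction, assuming the CMM results
# are available as a table with one column per geometric characteristic.
import pandas as pd

def reduce_features(measurements: pd.DataFrame, threshold: float = 0.8) -> list:
    """Return a reduced list of characteristics that still need to be measured.

    A characteristic is dropped when its absolute Pearson correlation with an
    already-kept characteristic exceeds the threshold, since it can then be
    predicted from the kept one instead of being measured.
    """
    corr = measurements.corr().abs()          # pairwise |Pearson r|
    kept = []
    for feature in corr.columns:
        if all(corr.loc[feature, k] <= threshold for k in kept):
            kept.append(feature)
    return kept

# Example (hypothetical file name):
# measurements = pd.read_csv("seat_rail_cmm_results.csv")
# kept = reduce_features(measurements, threshold=0.8)
```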

Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis

Procedia PDF Downloads 48
698 Improving the Technology of Assembly by Use of Computer Calculations

Authors: Mariya V. Yanyukina, Michael A. Bolotov

Abstract:

Assembly accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembly accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of the assembly stages are carried out to verify the correspondence of the real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task. Solving such problems requires a special approach. The purpose of this article is to address the problem of improving the assembly technology of aviation units by means of computer calculations. One real example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction of the dimensional chain consists in the formation of two chains. The first chain is formed by the dimensions that determine the location of the grooves for the installation of the blades and the dimensions of the blade roots. The second dimensional chain is formed by the dimensions of the airfoil shroud platform. The interaction of the dimensional chains of the turbine wheel is the interdependence of the first and second chains through the force circuits formed by the multitude of middle parts of the turbine blades. The calculation of the dimensional chain of the turbine wheel is timely because of the need to improve the assembly technology of this unit. The task at hand contains geometric and mathematical components; therefore, its solution can be implemented following this algorithm: 1) research and analysis of production errors in the geometric parameters; 2) development of a parametric model in the CAD system; 3) creation of a set of CAD models of parts taking into account actual or generalized distributions of the errors of the geometric parameters; 4) calculation of the model in the CAE system, loading various combinations of the part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in the software package Matlab. Within the framework of the study, measurement data on the turbine wheel components (blades and disks) are available, and it is expected that the assembly process of the unit will be optimized by solving the dimensional chains.
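
The statistical pre-simulation of an assembly described above is illustrated by the following sketch of a Monte Carlo tolerance stack-up of a simple dimensional chain; the link names, nominal values, scatter and tolerance limits are invented for illustration and are not the measured turbine wheel data (the study itself was implemented in Matlab).

```python
# Hedged sketch: Monte Carlo simulation of a simple dimensional chain, assuming
# the measured errors of each link can be approximated by normal distributions.
import numpy as np

rng = np.random.default_rng(seed=1)
n_assemblies = 100_000

# Links of the chain: nominal value (mm), standard deviation (mm), and the sign
# with which each link enters the closing dimension. Values are illustrative.
links = [
    ("groove_pitch",            50.00, 0.010, +1),
    ("blade_root_width",        49.90, 0.012, -1),
    ("shroud_platform_offset",   0.03, 0.005, -1),
]

closing = np.zeros(n_assemblies)
for name, nominal, sigma, sign in links:
    closing += sign * rng.normal(nominal, sigma, n_assemblies)

lower, upper = 0.02, 0.12      # acceptable range of the closing dimension (mm)
ok = (closing >= lower) & (closing <= upper)
print(f"mean gap = {closing.mean():.4f} mm, share within tolerance = {ok.mean():.3%}")
```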

Keywords: accuracy, assembly, interacting dimension chains, turbine

Procedia PDF Downloads 373
697 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building

Authors: A. Schuchter, M. Promegger

Abstract:

The outbreak of the COVID-19 pandemic has shown us that online education is so much more than just a cool feature for teachers – it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens and to compute and calculate mathematical examples while the students watch the process. On the other hand, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher’s use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects with the help of drone-building to spark interest in technology and create a platform for knowledge transfer. The project combines aspects from mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure and rotation) and coding (computational thinking, block-based programming, JavaScript and Python) and makes use of collaborative, shared 3D modeling with clara.io, where students create mathematical know-how. The instructor follows a problem-based learning approach and encourages students to find solutions in their own time and in their own way, which will help them develop new skills intuitively and boost logically structured thinking. The collaborative aspect of working in groups will help the students develop communication skills as well as structural and computational thinking. Students are not just listeners, as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called an “open book”) with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought. Therefore, students will learn to formulate goals, solve problems, and create a ready-to-use product with the help of “reverse engineering”, cross-referencing and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose, while going through all stages of product development.

Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning

Procedia PDF Downloads 120
696 From Achilles to Chris Kyle-Militarized Masculinity and Hollywood in the Post-9/11 Era

Authors: Mary M. Park

Abstract:

Hollywood has had a long and enduring history of showcasing the United States military to civilian audiences, and the portrayals of soldiers in films have had a definite impact on the civilian perception of the US military. The growing gap between the civilian population and the military in the US has led certain stereotypes of military personnel to proliferate, especially in the area of militarized masculinity, which has often been harmful to the psychological and spiritual wellbeing of military personnel. Examining Hollywood's portrayal of soldiers can serve to enhance our understanding of how civilians may be influenced in their perception of military personnel. Moreover, it can provide clues as to how male military personnel may also be influenced by Hollywood films as they form their own military identity. The post-9/11 era has seen numerous high-budget films lionizing a particular type of soldier, the 'warrior-hero', who adheres to a traditional form of hegemonic masculinity and exhibits traits such as physical strength, bravery, stoicism, and an eagerness to fight. This paper examines how the portrayal of the 'warrior-hero' perpetuates a negative stereotype that soldiers are a blend of superheroes and emotionless robots and, therefore, inherently different from civilians. This paper examines the portrayal of militarized masculinity in three of the most successful war films of the post-9/11 era: Black Hawk Down (2001), The Hurt Locker (2008), and American Sniper (2014). The characters and experiences of the soldiers depicted in these films are contrasted with the lived experiences of soldiers during the Iraq and Afghanistan wars. Further, there is an analysis of popular films depicting ancient warriors, such as Troy (2004) and 300 (2007), which were released during the early years of the War on Terror. This paper draws on the concept of hegemonic militarized masculinity developed by leading scholars and on feminist international relations theories of militarized masculinity. This paper uses veteran testimonies collected from a range of public sources, as well as previous studies on the link between traditional masculinity and war-related mental illness. This paper concludes that the seemingly exclusive portrayal of soldiers as 'warrior-heroes' in films of the post-9/11 era is misleading and damaging to civil-military relations and that the reality of the majority of soldiers' experiences is neglected in Hollywood films. As civilians often believe they are being shown true depictions of the US military in Hollywood films, especially in films that portray real events, it is important to identify the differences between the idealized fictional 'warrior-heroes' and the reality of the soldiers on the ground in the War on Terror.

Keywords: civil-military relations, gender studies, militarized masculinity, social psychology

Procedia PDF Downloads 123
695 A Study of Life Expectancy in an Urban Set up of North-Eastern India under Dynamic Consideration Incorporating Cause Specific Mortality

Authors: Mompi Sharma, Labananda Choudhury, Anjana M. Saikia

Abstract:

Background: The period life table is entirely based on the assumption that the mortality patterns of the population existing in the given period will persist throughout their lives. However, it has been observed that the mortality rate continues to decline. As such, if the rates of change of the probabilities of death are considered in a life table, then we get a dynamic life table. Although mortality has been declining in all parts of India, one may be interested to know whether these declines have been greater in an urban area of an underdeveloped region like North-Eastern India. So, an attempt has been made to study the mortality pattern and the life expectancy under a dynamic scenario in Guwahati, the biggest city of North-Eastern India. Further, if the probabilities of death change, then there is a possibility that their different constituent probabilities will also change. Since cardiovascular disease (CVD) is the leading cause of death in Guwahati, an attempt has also been made to formulate the dynamic cause-specific death ratio and probabilities of death due to CVD. Objectives: To construct a dynamic life table for Guwahati for the year 2011 based on the rates of change of the probabilities of death over the previous 10 and 25 years (i.e., since 2001 and 1986) and to compute the corresponding dynamic cause-specific death ratio and probabilities of death due to CVD. Methodology and Data: The study uses the method proposed by Denton and Spencer (2011) to construct the dynamic life table for Guwahati. Data from the Office of Birth and Death, Guwahati Municipal Corporation, for the years 1986, 2001 and 2011 are used. The population data are taken from the 2001 and 2011 censuses of India, while the population data for 1986 have been estimated. Also, the cause-of-death ratio and the probabilities of death due to CVD are computed for the aforementioned years and then extended to the dynamic setup for the year 2011 by considering the rates of change of those probabilities over the previous 10 and 25 years. Findings: The dynamic life expectancy at birth (LEB) for Guwahati is found to be higher than the corresponding value in the period table by 3.28 (5.65) years for males and 8.30 (6.37) years for females over the 10 (25) year window. The life expectancies under dynamic consideration in all the other age groups are also higher than the usual life expectancies, which may be explained by the gradual decline in the probabilities of death during 1986-2011. Further, a continuous decline has also been observed in the death ratio due to CVD, along with the cause-specific probabilities of death, for both sexes. As a consequence, the dynamic cause-of-death probability due to CVD is found to be lower than under the usual procedure. Conclusion: Since the incorporation of changing mortality rates in the period life table for Guwahati resulted in higher life expectancies and lower probabilities of death due to CVD, this would arguably bring out the real situation of mortality prevailing in the city.
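
The following sketch illustrates the basic idea of a period versus a dynamic life expectancy calculation; it is a simplified illustration rather than the Denton and Spencer (2011) method itself, and the mortality schedules in it are invented.

```python
# Hedged sketch: life expectancy at birth from age-specific death probabilities,
# with a crude "dynamic" variant that lets each q_x keep declining at its
# observed annual rate. Illustrative only; the q_x values are made up.
import numpy as np

def life_expectancy(qx: np.ndarray) -> float:
    """e0 for single-year q_x, assuming deaths occur at mid-interval."""
    lx = np.concatenate([[1.0], np.cumprod(1.0 - qx)])   # survivors at each age
    Lx = 0.5 * (lx[:-1] + lx[1:])                         # person-years lived
    return Lx.sum()

ages = np.arange(0, 100)
qx_2011 = 0.0005 * np.exp(0.075 * ages)                   # toy 2011 mortality schedule
qx_2001 = qx_2011 * 1.15                                   # toy earlier (higher) schedule

# Annual rate of decline of q_x over the 10-year window, applied forward so a
# person aged x in 2011 faces the projected q_x of calendar year 2011 + x.
rate = (qx_2011 / qx_2001) ** (1 / 10)
qx_dynamic = np.clip(qx_2011 * rate ** ages, 0.0, 1.0)

print(f"period e0  = {life_expectancy(qx_2011):.2f} years")
print(f"dynamic e0 = {life_expectancy(qx_dynamic):.2f} years")
```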

Keywords: cause specific death ratio, cause specific probabilities of death, dynamic, life expectancy

Procedia PDF Downloads 232
694 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined minimum threshold u. The Generalised Pareto distribution is widely used to approximate the tail of the empirical distribution mathematically, although, in the case of exceedances of significant wave data (H_s), the two-parameter Weibull and the Exponential distribution, the latter being a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution to model exceedances over a certain threshold u, and references that treat the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. A comparison of the Generalised Pareto, the two-parameter Weibull and the Exponential distribution is presented for different values of the threshold u. Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and depending on the value of the threshold u, different results are obtained. Other aspects of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
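
A minimal sketch of the POT comparison described above is given below; the synthetic wave-height series and the 95th-percentile threshold are illustrative assumptions, not the Irish buoy data or thresholds used in the study.

```python
# Hedged sketch: Peak-Over-Threshold fit of exceedances of significant wave
# height with the Generalised Pareto distribution, compared against Exponential
# and Weibull fits on the same exceedances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hs = rng.weibull(1.6, size=20_000) * 2.5     # stand-in for an H_s series (m)

u = np.quantile(hs, 0.95)                     # threshold u (here the 95th percentile)
exceedances = hs[hs > u] - u

# Fit candidate tail models to the exceedances (location fixed at 0).
gpd_params = stats.genpareto.fit(exceedances, floc=0)
exp_params = stats.expon.fit(exceedances, floc=0)
wei_params = stats.weibull_min.fit(exceedances, floc=0)

for name, dist, params in [("GPD", stats.genpareto, gpd_params),
                           ("Exponential", stats.expon, exp_params),
                           ("Weibull", stats.weibull_min, wei_params)]:
    ll = np.sum(dist.logpdf(exceedances, *params))
    print(f"{name:12s} log-likelihood = {ll:.1f}")
```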

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 272
693 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components

Authors: Najeh Lakhoua

Abstract:

Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, system analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language and controls complex sets. In fact, system analysis is structured sequentially in steps: the observation of the system by various observers in various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests carried out in order to obtain consensus. Thus the systems approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to the components of an Unmanned Aerial Vehicle (UAV) in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in this paper to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function 'To analyse the UAV components'. Then, this function is broken down into sub-functions, and this process is repeated until the last decomposition level has been reached (levels A1, A2, A3 and A4). Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certitude which model is the good one or, at least, the best one. In fact, this kind of model allows users sufficient freedom in its construction, and so the subjective factor introduces a supplementary dimension for its validation. That is why the validation step as a whole necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components, based on the SADT method (Structured Analysis and Design Technique). This functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.

Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture

Procedia PDF Downloads 204
692 Regular Laboratory Based Neonatal Simulation Program Increases Senior Clinicians’ Knowledge, Skills and Confidence Caring for Sick Neonates

Authors: Madeline Tagg, Choihoong Mui, Elizabeth Lek, Jide Menakaya

Abstract:

Introduction: Simulation technology is used by neonatal teams to learn and refresh skills and to gain the knowledge and confidence to care for sick neonates. In-situ simulation is considered superior to laboratory-based programmes as it closely mirrors real-life situations. This study reports our experience of running regular laboratory-based simulation sessions for senior clinicians and nurses and its impact on their knowledge, skills and confidence. Methods: A before-and-after questionnaire survey was carried out on senior clinicians and nurses who attended a scheduled laboratory-based simulation session. Participants were asked to document their expectations before a 3-hour monthly laboratory programme started and were invited to feed back their reflections at the end of the session. The session included discussion of relevant clinical guidelines, immersion in a scenario and a video-led debrief. The results of the survey were analysed in three skills-based categories - improved, no change or a worsened experience. Results: 45 questionnaires were completed and analysed. Of these, 25 (55%) were completed by consultants, seven and six by nurses and trainee doctors respectively, and seven respondents were unknown. 40 (88%) rated the session overall and the guideline review as good/excellent, 39 respondents (86%) rated the scenario session good/excellent and 40/45 fed back a good/excellent debrief session. 33 (73%) respondents completed the before-and-after questionnaire. 21/33 (63%) reflected an improved knowledge, skill or confidence in caring for sick new-born babies, eight respondents reported no change and four fed back a worse experience after the session. Discussion: Most respondents found the laboratory-based structured simulation session beneficial for their professional development. They valued equally the whole content of the programme, such as the guideline review and equipment training, as well as the simulation and debrief sessions. Two out of three participants stated that their knowledge of caring for sick new-born babies had been transformed positively by the session. Sessions where simulation equipment failed or relevant staff were absent contributed to a poor educational experience. Summary: A regular, structured, laboratory-based simulation programme with rich content is a credible educational resource for improving the knowledge, skills and confidence of senior clinicians caring for sick new-born babies.

Keywords: knowledge, laboratory based, neonates, simulation

Procedia PDF Downloads 121
691 Impact of Alternative Fuel Feeding on Fuel Cell Performance and Durability

Authors: S. Rodosik, J. P. Poirot-Crouvezier, Y. Bultel

Abstract:

With the expansion of the hydrogen economy, Proton Exchange Membrane Fuel Cell (PEMFC) systems are often presented as promising energy converters suitable for transport applications. However, reaching the durability of 5000 h recommended by the U.S. Department of Energy and decreasing system cost are still major hurdles to their development. In order to increase the system efficiency and simplify the system without affecting the fuel cell lifetime, an architecture called alternative fuel feeding has been developed. It consists of a fuel cell stack divided into two parts that are alternately fed, implemented on a 5-kW system for real-scale testing. The operation strategy can be considered close to Dead End Anode (DEA) operation, with specific modifications to avoid water and nitrogen accumulation in the cells. The two half-stacks are connected in series to enable each stack to be alternately fed. The accumulated water and nitrogen can be shifted from one half-stack to the other according to the alternative feeding frequency. Thanks to the homogenization of water vapor along the stack, water management was improved. The operating conditions obtained at system scale are close to recirculation without the need for a pump or an ejector. In a first part, a performance comparison with the DEA strategy was performed. At high temperature and low pressure (80°C, 1.2 bar), the performance of alternative fuel feeding was higher, and the system efficiency increased. In a second part, in order to highlight the benefits of the architecture for the fuel cell lifetime, two durability tests, lasting up to 1000 h, were conducted. A test on the 5-kW system was compared to a reference test performed on a test bench with a shorter stack, conducted with well-controlled operating parameters and a flow-through hydrogen strategy. The durability test is based upon the Fuel Cell Dynamic Load Cycle (FC-DLC) protocol but adapted to the system limitations: without OCV steps and with a maximum current density of 0.4 A/cm². In-situ local measurements with a segmented S++® plate, performed throughout the tests, showed a more homogeneous distribution of the current density with alternative fuel feeding than with the flow-through strategy. The tests performed in this work enabled an understanding of this architecture's advantages and drawbacks. The alternative fuel feeding architecture appears to be a promising solution to ensure the humidification function at the anode side with a simplified fuel cell system.

Keywords: automotive conditions, durability, fuel cell system, proton exchange membrane fuel cell, stack architecture

Procedia PDF Downloads 142
690 Teaching Material, Books, Publications versus the Practice: Myths and Truths about Installation and Use of Downhole Safety Valve

Authors: Robson da Cunha Santos, Caio Cezar R. Bonifacio, Diego Mureb Quesada, Gerson Gomes Cunha

Abstract:

The paper relates to the safety of oil wells and environmental preservation, which require great attention and commitment from oil companies and the people who work with this equipment. This must be ensured from the drilling of the well until it is abandoned, in order to safeguard the environment and prevent possible damage. The main objective of the project was to compare books, articles and publications with information gathered during technical visits to operational bases of Petrobras. After the visits, information on methods of use and current management practices, which was not available before, became available to a general audience. As a result, a considerable flow of incorrect and out-of-date information is observed, comprising not only bibliographic archives but also academic resources and materials. While gathering more in-depth information on the manufacturing, assembly and use of DHSVs, several issues previously regarded as correct and customary were discovered to be uncertain or outdated. Important findings concerned subjects such as the installation depth of the valve, which used to be set at 30 meters below the seabed (mud line); instead, the installation depth should vary according to the ideal depth for escaping the zone with the greatest tendency towards hydrate formation, given the temperature and pressure. Regarding valves with a nitrogen chamber, according to the literature their use is linked to water depths ≥ 700 meters, but in Brazilian exploratory fields they are used from water depths of 600 meters. The valves used in Brazilian fields can be inserted into the production column and can be self-equalizing, but the use of valves screwed into the production column with equalizing is predominant. Although these valves are more expensive to acquire, they are more reliable and efficient, have a longer service life and do not restrict the fluid flow. It follows that, based on research and on theoretical information confronted with the usual practices in the field, the present project is important and relevant. The project will serve as a source of updated and consistent information connecting the academic environment with real exploratory situations, and will provide precise and easy-to-understand information for future research and academic improvement.

Keywords: downhole safety valve, security devices, installation, oil wells

Procedia PDF Downloads 269
689 Photocatalytic Disintegration of Naphthalene and Naphthalene Similar Compounds in Indoors Air

Authors: Tobias Schnabel

Abstract:

Naphthalene and naphthalene-like compounds are a common problem in the indoor air of buildings from the 1960s and 1970s in Germany. Tar-containing roofing felt was often laid under the concrete floor to prevent moisture from coming through the floor. This tar-containing roofing felt has high concentrations of PAHs (polycyclic aromatic hydrocarbons) and naphthalene. Naphthalene evaporates easily and contaminates the indoor air. Especially after renovation and energy-efficiency modernization of the buildings, the naphthalene concentration rises because no forced air exchange can take place. Because of this problem, it is often necessary to replace the floors after renovation of the buildings. The MFPA Weimar (material research and testing facility), in a cooperative project with LEJ GmbH and Reichmann Gebäudetechnik GmbH, developed a technical solution for the disintegration of naphthalene and naphthalene-like compounds in indoor air by photocatalytic reforming. Photocatalytic systems produce active oxygen species (hydroxyl radicals) by irradiating semiconductors at a wavelength corresponding to their band gap. The light energy separates the charges in the semiconductor and produces free electrons in the conduction band together with electron holes. The holes can react with hydroxide ions to form hydroxyl radicals. The hydroxyl radicals produced are strong oxidizing agents and can oxidize organic matter to carbon dioxide and water. During the research, new titanium dioxide catalyst surface coatings were developed. This coating technology allows the production of very porous titanium dioxide layers on temperature-stable carrier materials. The porosity allows the naphthalene to be easily adsorbed by the surface coating, which accelerates the heterogeneous photocatalytic reaction. The photocatalytic reaction is induced by high-power, high-efficiency UV-A (ultraviolet) LEDs with a wavelength of 365 nm. Various tests in emission chambers and on the reformer itself show that a reduction of naphthalene at relevant concentrations between 2 and 250 µg/m³ is possible. The disintegration rate was at least 80%. To reduce the concentration of naphthalene from 30 µg/m³ to a level below 5 µg/m³ in a typical 50 m² classroom, an energy of 6 kWh is needed. The benefit of the photocatalytic indoor air treatment is that every organic compound in the air can be disintegrated and reduced. The use of new photocatalytic materials in combination with highly efficient UV LEDs makes a safe and energy-efficient reduction of organic compounds in indoor air possible. At the moment, the air cleaning systems are taking the step from the prototype stage into use in real buildings.

Keywords: naphthalene, titanium dioxide, indoor air, photocatalysis

Procedia PDF Downloads 143
688 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories

Authors: Mojtaba Taheri, Saied Reza Ameli

Abstract:

In this research, we have calculated the effect of membership in international climate change treaties for 20 developed countries, selected on the basis of the Human Development Index (HDI), and compared this effect with the process of pollutant reduction predicted by Environmental Kuznets Curve (EKC) theory. For this purpose, the data on real GDP per capita at constant 2010 prices are taken from the World Development Indicators (WDI) database. The Ecological Footprint (ECOFP) is the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions; it is measured in global hectares (gha), and the data are retrieved from the Global Ecological Footprint (2021) database. We proceed step by step, performing several series of targeted statistical regressions and examining the effects of different control variables. The Energy Consumption Structure (ECS) is counted as the share of fossil fuel consumption in total energy consumption and is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP) refers to the total production of primary energy by all energy-producing enterprises in a country at a specific time; it is a comprehensive indicator of the country's energy production capacity, and its 2021 data, like the Energy Consumption Structure, are obtained from the EIA. Financial development (FND) is defined as the ratio of private credit to GDP, and to some extent is also based on stock market value as a ratio to GDP; it is taken from the WDI (2021 version). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, using WDI data (2021 version). Urbanization (URB) is defined as the share of the urban population in the total population, also taken from the WDI (2021 version). The descriptive statistics of all the investigated variables are presented in the results section. Among the theories of sustainable development considered, the Environmental Kuznets Curve (EKC) is the most significant in the period of study. In this research, we use more than fourteen targeted statistical regressions to isolate the net effects of each of the approaches and examine the results.
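
The kind of regression exercise described above can be sketched as follows; the data file, column names, fixed-effects structure and treaty dummy are illustrative assumptions, not the exact specification estimated in the paper.

```python
# Hedged sketch: an EKC-style panel regression of the ecological footprint on
# income and its square plus the control variables named above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: country, year, ECOFP, GDPpc, ECS, EP, FND, TRD, URB, treaty
df = pd.read_csv("ekc_panel.csv")

df["ln_ecofp"] = np.log(df["ECOFP"])
df["ln_gdp"] = np.log(df["GDPpc"])
df["ln_gdp_sq"] = df["ln_gdp"] ** 2

# The EKC holds if the coefficient on ln_gdp is positive and on ln_gdp_sq negative;
# 'treaty' is a membership dummy for the international climate change treaty.
model = smf.ols(
    "ln_ecofp ~ ln_gdp + ln_gdp_sq + treaty + ECS + EP + FND + TRD + URB"
    " + C(country) + C(year)",
    data=df,
).fit(cov_type="HC1")
print(model.summary())

# Income turning point implied by the quadratic (in levels of GDP per capita):
b1, b2 = model.params["ln_gdp"], model.params["ln_gdp_sq"]
print("turning point:", np.exp(-b1 / (2 * b2)))
```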

Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty

Procedia PDF Downloads 71
687 Green Ports: Innovation Adopters or Innovation Developers

Authors: Marco Ferretti, Marcello Risitano, Maria Cristina Pietronudo, Lina Ozturk

Abstract:

A green port is the result of a sustainable long-term strategy adopted by an entire port infrastructure, and therefore by the set of actors involved in port activities. The strategy aims to develop sustainable port infrastructure focused on reducing negative environmental impacts without jeopardising economic growth. Green technology represents the core tool for implementing sustainable solutions; however, it is not a magic bullet. Ports have always been integrated into the local territory, affecting the environment in which they operate; therefore, the sustainable strategy should fit the entire local system. Adopting a sustainable strategy thus means knowing how to involve and engage a wide stakeholder network (industries, production, markets, citizens, and public authorities). The existing research on the topic has not integrated this perspective well with that of sustainability: research on green ports has mixed sustainability aspects with those of the maritime industry, neglecting the dynamics that lead to the development of the green port phenomenon. We propose an analysis of green ports through the lens of ecosystem studies in the field of management. The ecosystem approach provides a way to model the relations that enable green solutions and green practices in a port ecosystem. However, due to the local dimension of a port and the port trend towards innovation, i.e., sustainable innovation, we draw on a specific concept of ecosystem, that of local innovation systems. More precisely, we explore whether a green port is a local innovation system engaged in developing sustainable innovation with a large impact on the territory, or merely an innovation adopter. To address this issue, we adopt a comparative case study, selecting two innovative ports in Europe: Rotterdam and Genoa. The case study is a research method focused on understanding the dynamics in a specific situation and can be used to provide a description of real circumstances. Preliminary results show two different approaches to supporting sustainable innovation: one represented by Rotterdam, a pioneer in competitiveness and sustainability, and the second represented by Genoa, an example of a technology adopter. The paper intends to provide a better understanding of how sustainable innovations are developed and of the manner in which a network of port and local stakeholders supports this process. Furthermore, it proposes a taxonomy of green ports as developers and adopters of sustainable innovation, also suggesting best practices for modeling relationships that enable the port ecosystem to apply a sustainable strategy.

Keywords: green port, innovation, sustainability, local innovation systems

Procedia PDF Downloads 120
686 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation

Authors: Mohammad Abu-Shaira, Weishi Shi

Abstract:

Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension of online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM (Long Short-Term Memory Stream Cruise Control Method), a drift adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, assessing their performance across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drifts within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after significant degradation of model performance caused by drifts. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data. Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models to address concept drift.
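
As a rough illustration of the detect-quantify-recalibrate loop that such a drift adaptation service wraps around an online regressor, consider the sketch below; it is a generic illustration, not the authors' LSTM-SCCM implementation, and the window size, z-score threshold and replay count are arbitrary assumptions.

```python
# Hedged sketch: monitoring the prediction error of any online regressor,
# quantifying drift as a z-score of the latest error against a sliding window,
# and triggering a short-term recalibration when the score exceeds a threshold.
from collections import deque
import numpy as np

class DriftAdapter:
    def __init__(self, model, window=200, z_threshold=3.0):
        self.model = model                     # online regressor with predict / partial_fit
        self.errors = deque(maxlen=window)     # recent absolute errors
        self.z_threshold = z_threshold

    def step(self, x, y):
        y_hat = self.model.predict(x.reshape(1, -1))[0]
        err = abs(y - y_hat)
        if len(self.errors) > 30:
            mu, sigma = np.mean(self.errors), np.std(self.errors) + 1e-9
            magnitude = (err - mu) / sigma     # crude drift magnitude
            if magnitude > self.z_threshold:
                # short-term adjustment: replay the newest example a few times
                for _ in range(5):
                    self.model.partial_fit(x.reshape(1, -1), [y])
        self.errors.append(err)
        self.model.partial_fit(x.reshape(1, -1), [y])   # regular incremental update
        return y_hat

# Example: DriftAdapter(SGDRegressor(), window=200) driven by a feature/target stream.
```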

Keywords: automated drift detection and adaptation, concept drift, hyperparameters optimization, online and adaptive learning, regression

Procedia PDF Downloads 11
685 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
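
One building block of such a pipeline, extracting a dominant planar element (for example a wall or floor) from the point cloud, can be sketched with a simple RANSAC plane fit as below; the parameters and the fitting strategy are illustrative assumptions, not the authors' published algorithm.

```python
# Hedged sketch: RANSAC plane segmentation of a 3D point cloud as a first step
# towards recognizing walls, floors and other planar BIM elements.
import numpy as np

def ransac_plane(points: np.ndarray, iterations=500, tol=0.02, rng=None):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < tol              # points within `tol` metres of the plane
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Example: normal, d, wall_mask = ransac_plane(scan_xyz)   # scan_xyz: (N, 3) array
```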

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 63
684 Attitudes Towards the Supernatural in Benjamin Britten’s The Turn of the Screw

Authors: Yaou Zhang

Abstract:

Background: Relatively little scholarly attention has been paid to productions of Benjamin Britten's chamber opera The Turn of the Screw, one of Britten's most remarkable operas. The libretto is based on Henry James's novella of the same name, written in 1898, and one of the primary questions the story poses is "how real the ghosts are," which leaves readers with a profound ambiguity. Aims: This research focuses on the experience of seeing the opera on stage over several decades. Studying opera productions over time not only provides insight into how stage performances can alter audience members' perceptions of the opera in the present but also reveals a landscape of shifting aesthetics and receptions. Methods: To examine the hypotheses on interpretation and reception, qualitative analysis is used to examine the figures of the ghosts in different UK productions from 1954 to 2021, by accessing recordings, newspapers, and reviews of the productions sourced from online and physical archives. In addition, field research was conducted by arranging interviews with creative teams and by visiting Opera North in Leeds and the Britten-Pears Foundation. The collected data reveal the "hidden identity" in creative teams' interpretations, social preferences, and rediscoveries that have previously remained unseen. Results: This research presents an angle on Britten's Screw from a third position; it shows how attention has moved from the question "do the ghosts really exist" to the traumatised children. Discussion: Critics and audiences have debated for decades whether the governess hallucinates the ghosts in the opera. In recent years, however, directors of new productions have given themselves the opportunity to go deeper into Britten's musical structure and to give the opera more space for interpretation, rather than debating whether "ghosts actually exist" or dwelling on "the psychological problems of the governess." One can consider that the questionable actions of the children arise because they are suffering from trauma, whether that trauma comes from the ghosts, the hallucinating governess, or some prior experiences: the various interpretations lead to one result, that the children are the recipients of trauma. Arguably, the role of the supernatural is neither simply an element of a ghost story nor simply part of the ambiguity between the supernatural and the hallucinations of the governess; rather, the ghosts and the hallucinating governess can exist at the same time - the combination of the supernatural's and the governess's behaviours on stage generates a sharper and more serious angle that draws our attention to the traumatised children.

Keywords: benjamin britten, chamber opera, production, reception, staging, the turn of the screw

Procedia PDF Downloads 108
683 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted a lot of attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for through-the-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic signature in the space-time GPR image usually needs to be transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the information on the spatial location and the reflectivity of an underground object. Therefore, the main challenge of GPR imaging is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited in the presence of strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it provides superior artifact suppression and produces images of high quality and resolution. In order to quantitatively describe the imaging results with respect to artifact suppression, a focusing parameter was evaluated.
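
A minimal delay-and-sum back-projection of a monostatic B-scan onto a pixel grid, with a simple coherence-style weight per pixel, is sketched below; the geometry, wave velocity and weighting expression are illustrative assumptions and do not reproduce the exact weight factor proposed in the paper.

```python
# Hedged sketch: delay-and-sum back-projection of a monostatic GPR B-scan.
import numpy as np

def back_project(bscan, antenna_x, dt, v=1.0e8, dx=0.01, dz=0.01, nx=200, nz=150):
    """bscan: (n_traces, n_samples) array; antenna_x: trace positions along the aperture (m)."""
    n_traces, n_samples = bscan.shape
    image = np.zeros((nz, nx))
    xs = np.arange(nx) * dx
    zs = np.arange(nz) * dz
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # two-way travel time from every antenna position to pixel (x, z)
            r = np.hypot(antenna_x - x, z)
            samples = np.round(2 * r / v / dt).astype(int)
            valid = samples < n_samples
            contrib = bscan[np.arange(n_traces)[valid], samples[valid]]
            # coherence-style weight: correlated contributions add up, noise does not
            weight = abs(contrib.sum()) / (np.abs(contrib).sum() + 1e-12)
            image[iz, ix] = weight * contrib.sum()
    return image
```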

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 452
682 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as enough processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We then consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
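
The straggler-tolerant coding idea referred to above can be illustrated with a small polynomial-code sketch in the style of Yu et al.; the block counts and evaluation points are illustrative assumptions, and the privacy (PIR) layer of the PSGPD scheme is deliberately omitted.

```python
# Hedged sketch: straggler-tolerant distributed multiplication with a polynomial
# code over the row blocks of X and column blocks of Y.
import numpy as np

m, n = 2, 2                                  # X split into m row blocks, Y into n column blocks
X = np.random.rand(4, 6)
Y = np.random.rand(6, 4)
Xb = np.split(X, m, axis=0)                  # row blocks of X, each 2 x 6
Yb = np.split(Y, n, axis=1)                  # column blocks of Y, each 6 x 2

def encode(alpha):
    """Encoded block pair sent to the worker associated with evaluation point alpha."""
    Xe = sum(Xb[i] * alpha ** i for i in range(m))
    Ye = sum(Yb[j] * alpha ** (j * m) for j in range(n))
    return Xe, Ye

# 6 workers each compute one small product; any m*n = 4 of them suffice.
results = {}
for alpha in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
    Xe, Ye = encode(alpha)
    results[alpha] = Xe @ Ye

# Pretend two workers straggle: interpolate the degree-(m*n - 1) matrix polynomial
# from the first m*n results and read off every block product Xb[i] @ Yb[j].
alphas = list(results)[:m * n]
V = np.vander(alphas, m * n, increasing=True)
rhs = np.stack([results[a] for a in alphas]).reshape(m * n, -1)
coeffs = np.linalg.solve(V, rhs).reshape(m * n, Xb[0].shape[0], Yb[0].shape[1])
W = np.block([[coeffs[i + j * m] for j in range(n)] for i in range(m)])
assert np.allclose(W, X @ Y)
```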

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 122
681 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

The selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first utilizes a fuzzy sensitivity criterion by exploiting, from experimental data, the relationship between the physical parameters and all the sensory quality features for each evaluator. Next, an OWA aggregation procedure is applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. Again by applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key issue of the proposed percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It permits thresholds to be generated automatically, which can effectively reduce the human subjectivity and arbitrariness involved in choosing thresholds manually. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, a well-known finished textile product, stonewashed denim, usually considered the most important quality criterion in jeans' evaluation, we separate the relevant physical features from the irrelevant ones for each sensory descriptor. The originality and performance of the proposed relevant feature selection method are shown by the variability in the number of physical features in the set of selected relevant parameters. Instead of selecting identical numbers of features with a predefined threshold, the proposed method can be adapted to the specific nature of the complex relations between sensory descriptors and physical features, in order to propose lists of relevant features of different sizes for different descriptors. In order to obtain more reliable results for the selection of relevant physical features, the percolation technique has been applied to combine the fuzzy global relevancy and OWA global relevancy criteria, so as to clearly distinguish the scores of the relevant physical features from those of irrelevant ones.
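
A minimal sketch of the OWA aggregation step applied to the evaluators' relevancy scores is given below; the scores, the weight construction and the number of evaluators are illustrative assumptions, not the experimental data or the exact weighting used in the study.

```python
# Hedged sketch: OWA (Ordered Weighted Averaging) aggregation of the relevancy
# scores that several evaluators assign to the same physical parameters.
import numpy as np

def owa_weights(n: int, orness: float = 0.6) -> np.ndarray:
    """Simple decaying OWA weights tuned by an orness-like parameter (illustrative)."""
    ranks = np.arange(n, 0, -1, dtype=float)
    w = ranks ** (orness / (1 - orness))
    return w / w.sum()

def owa(scores: np.ndarray, weights: np.ndarray) -> float:
    """OWA operator: weights apply to the scores sorted in descending order."""
    return float(np.sort(scores)[::-1] @ weights)

# Relevancy of each physical parameter for one sensory descriptor, per evaluator (invented).
scores = {
    "thickness": np.array([0.82, 0.75, 0.90]),
    "drape":     np.array([0.40, 0.55, 0.35]),
    "hairiness": np.array([0.15, 0.20, 0.10]),
}
w = owa_weights(3)
aggregated = {p: owa(s, w) for p, s in scores.items()}

# Percolation idea: keep the parameters above the gap that separates high scores from low ones.
print(sorted(aggregated.items(), key=lambda kv: kv[1], reverse=True))
```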

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 605
680 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

On the ground that the orifice plate is relatively inexpensive, requires very little maintenance and is only calibrated during plant turnarounds, it has come into widespread use in the gas industry. Measurement inaccuracy in fiscal metering stations may well be the most important factor behind mischarges in the natural gas industry in Libya. A very small error in measurement can add up to a fast-escalating financial burden on custody transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya can be estimated at several million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a rapid pace, CFD has become the most time- and cost-efficient, versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Getting deeper into the underlying physical phenomena, and predicting all relevant parameters and variables with high spatial and temporal resolution, have been the greatest advantages of CFD. In this paper, the flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-based modeling, giving important information about the effect of the orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The discharge coefficients were compared with the discharge coefficients estimated by ISO 5167. The influences of the orifice plate bore thickness, orifice plate thickness, bevel angle, perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5 and a Reynolds number of 91,100 was taken as the model case. The results highlighted that the discharge coefficients were highly responsive to the variation of the plate specifications and that, in all cases, the discharge coefficients for D and D/2 tappings were very close to those of the vena contracta tappings, which are regarded as the ideal arrangement. Also, in a general sense, it was observed that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
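To make the role of the discharge coefficient concrete, the following sketch evaluates the standard orifice mass-flow relation of ISO 5167, q_m = Cd / sqrt(1 - beta^4) · ε · (π/4) d² · sqrt(2 Δp ρ); it is not the paper's CFD model, and the operating values (differential pressure, density, Cd) are assumed for illustration.

```python
# Simple sketch of the ISO 5167 orifice mass-flow relation (illustrative values).
import math

def orifice_mass_flow(Cd, d, beta, dp, rho, eps=1.0):
    """q_m = Cd / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho)
    d: bore diameter [m], dp: differential pressure [Pa], rho: density [kg/m^3]."""
    return Cd / math.sqrt(1.0 - beta**4) * eps * (math.pi / 4.0) * d**2 * math.sqrt(2.0 * dp * rho)

D = 2 * 0.0254            # 2-inch pipe diameter [m], as in the model case
beta = 0.5
d = beta * D              # orifice bore diameter [m]
q_m = orifice_mass_flow(Cd=0.605, d=d, beta=beta, dp=25e3, rho=1.2)   # hypothetical Cd, dp, rho
print(f"mass flow ~ {q_m:.4f} kg/s")
```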

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 119
679 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures

Authors: Sunaitan Al Mutairi

Abstract:

A perfect start is a key factor for project completion on time. The study examined the effects of delayed mobilization of resources during the initial phases of the project. This paper mainly highlights the identification and categorization of all delays during the initial construction phase and their root cause analysis, with corrective/control measures, for Kuwait Oil Company oil and gas projects. A considerable percentage of the delays identified during project execution (from contract award to the end of the defects liability period) was attributed to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared to the previous period. Contractors had delays/issues during the initial phase, which resulted in slippages that progressively increased, leading to time and cost overruns. Delays/issues not mitigated on time during the initial phase had a very high impact on project completion. Data analysis of the delays for the past five years was carried out using trend charts, scatter plots, process maps, box plots, the relative importance index and Pareto charts. Construction of any project inside the Gathering Centers involves complex management skills related to the work force, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule and budget of project deliverables. Works executed as per plan during the initial phase and start-up duration of the project construction activities resulted in only minor slippages/delays in project completion. In addition, a good working environment between client and contractor resulted in better project execution and management. In such projects, the contractor was on the front foot in execution, with minimal or no delays during the initial and construction periods. Hence, having a perfect start during the initial construction phase has a positive influence on project success. Our research paper studies each type of delay, with real examples supported by statistical results, and suggests mitigation measures. Detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to arrive at a practical and effective outcome for mitigating the delays. The key to improvement is to have proper control measures and periodic evaluation/audits to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.
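One of the analysis tools named above, the relative importance index (RII), can be sketched as follows; the delay causes and ratings are hypothetical, and RII = ΣW / (A × N) with W the ratings, A the highest possible rating and N the number of respondents.

```python
# Illustrative sketch: ranking delay causes by Relative Importance Index (RII).
def rii(ratings, highest=5):
    """RII = sum(W) / (A * N)."""
    return sum(ratings) / (highest * len(ratings))

delay_causes = {
    "late mobilization of resources": [5, 4, 5, 4, 5],
    "delayed material procurement":   [4, 4, 3, 5, 4],
    "permit / approval delays":       [3, 3, 4, 3, 2],
}
for cause, ratings in sorted(delay_causes.items(), key=lambda kv: -rii(kv[1])):
    print(f"{cause:35s} RII = {rii(ratings):.2f}")
```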

Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays

Procedia PDF Downloads 318
678 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The prime focus has always been on techniques for extracting useful knowledge from data such that the returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles imprecision in the training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in the prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
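A minimal, single-machine sketch of the fuzzy SVM idea (not the authors' Hadoop/MapReduce PFRSVM): membership values derived from each training sample's distance to its class center down-weight noisy samples, and an SVM with a hyperbolic tangent (sigmoid) kernel is trained with those memberships as sample weights. The data and membership formula are assumptions for illustration.

```python
# Fuzzy-weighted SVM sketch with a hyperbolic tangent kernel (scikit-learn).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

def fuzzy_memberships(X, y, delta=1e-3):
    """Membership decreases with distance from the class center, reducing outlier influence."""
    m = np.empty(len(y))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + delta)
    return m

clf = SVC(kernel="sigmoid", C=1.0)                     # hyperbolic tangent kernel
clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
print("support vectors per class:", clf.n_support_, "training accuracy:", clf.score(X, y))
```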

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 490
677 Application of Forensic Entomology to Estimate the Post Mortem Interval

Authors: Meriem Taleb, Ghania Tail, Fatma Zohra Kara, Brahim Djedouani, T. Moussa

Abstract:

Forensic entomology has grown immensely as a discipline in the past thirty years. The main purpose of forensic entomology is to establish the post mortem interval, or PMI. Beyond three days after death, insect evidence is often the most accurate, and sometimes the only, method of determining the elapsed time since death. This work presents the estimation of the PMI in an experiment designed to test the reliability of the accumulated degree days (ADD) method, and the application of this method in a real case. The study was conducted at the Laboratory of Entomology at the National Institute for Criminalistics and Criminology of the National Gendarmerie, Algeria. The domestic rabbit Oryctolagus cuniculus L. was selected as the animal model. On 8th July 2012, the animal was killed. Larvae were collected and raised to adulthood. The oviposition time was estimated by summing the average daily temperatures minus the minimum development temperature (which is also specific to each species); when the required sum is reached, the corresponding day is taken as the oviposition day. Weather data were obtained from the nearest meteorological station. After rearing was accomplished, three species emerged: Lucilia sericata, Chrysomya albiceps, and Sarcophaga africa. Chrysomya albiceps requires an accumulation of 186°C. The emergence of adults occurred on 22nd July 2012, and a value of 193.4°C was reached on 9th August 2012. Lucilia sericata requires an accumulation of 207°C. The emergence of adults occurred on 23rd July 2012, and a value of 211.35°C was reached on 9th August 2012. We should also consider that oviposition may occur more than 12 hours after death. Thus, the obtained PMI is in agreement with the actual time of death. We illustrate the use of this method in the investigation of a case of a decaying human body found on 3rd March 2015 in Bechar, in the southwest of the Algerian desert. Maggots were collected and sent to the Laboratory of Entomology. Lucilia sericata adults were identified on 24th March 2015 after emergence. A sum of 211.6°C was reached on 1st March 2015, which corresponds to the estimated day of oviposition. Therefore, the estimated date of death is 1st March 2015 ± 24 hours. The PMI estimated by the accumulated degree days (ADD) method appears to be very precise. Entomological evidence should always be used in homicide investigations when the time of death cannot be determined by other methods.
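The ADD back-calculation described above can be sketched as follows: starting from the emergence (or collection) date, daily values of (mean temperature minus base temperature) are summed backwards in time until the species' degree-day requirement is reached, and that day is taken as the oviposition day. The weather record and the base temperature below are hypothetical stand-ins.

```python
# Illustrative ADD back-calculation (hypothetical temperatures and base temperature).
from datetime import date, timedelta

def estimate_oviposition(emergence, daily_mean_temp, add_required, base_temp):
    """daily_mean_temp: dict mapping date -> mean temperature [deg C]."""
    total, day = 0.0, emergence
    while total < add_required:
        day -= timedelta(days=1)
        total += max(daily_mean_temp[day] - base_temp, 0.0)   # only temperatures above the base count
    return day, total

# e.g. Chrysomya albiceps: ~186 degree-days above an assumed base of ~10 deg C
temps = {date(2012, 7, 1) + timedelta(days=i): 28.0 for i in range(40)}   # stand-in weather record
ovi_day, add_sum = estimate_oviposition(date(2012, 7, 22), temps, add_required=186.0, base_temp=10.0)
print(ovi_day, round(add_sum, 1))   # estimated oviposition day and accumulated degree days
```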

Keywords: forensic entomology, accumulated degree days, postmortem interval, diptera, Algeria

Procedia PDF Downloads 294
676 Simulation Study on Polymer Flooding with Thermal Degradation in Elevated-Temperature Reservoirs

Authors: Lin Zhao, Hanqiao Jiang, Junjian Li

Abstract:

Polymers injected into elevated-temperature reservoirs inevitably suffer from thermal degradation, resulting in severe viscosity loss and poor flooding performance. However, for polymer flooding in such reservoirs, present simulators fail to provide accurate results because they lack a description of thermal degradation. In light of this, the objectives of this paper are to provide a simulation model for polymer flooding with thermal degradation and to study the effect of thermal degradation on polymer flooding in elevated-temperature reservoirs. Firstly, a thermal degradation experiment was conducted to obtain the degradation law of polymer concentration and viscosity: different types of polymers were degraded in a thermostatic tank at elevated temperatures. Afterward, based on the obtained law, a streamline-assisted model was proposed to simulate the degradation process under in-situ flow conditions. Model validation was performed with field data from a well group of an offshore oilfield. Finally, the effect of thermal degradation on polymer flooding was studied using the proposed model. Experimental results showed that the polymer concentration remained unchanged, while the viscosity decayed exponentially with time after degradation. The polymer viscosity is functionally dependent on the polymer degradation time (PDT), which represents the elapsed time since the polymer particle was injected. Tracing the real flow path of each polymer particle is therefore required, which is why the presented simulation model is streamline-assisted. An equation relating PDT to the time of flight (TOF) along a streamline was built from the law of polymer particle transport. Based on the field polymer samples and dynamic data, the new model proved its accuracy. The study of the degradation effect on polymer flooding indicated that: (1) the viscosity loss increases exponentially with TOF in the main body of the polymer slug and remains constant in the slug front; (2) the response time of polymer flooding is delayed, but the effective time is prolonged; (3) the breakthrough of the subsequent water is eased; (4) the capacity of the polymer to adjust the injection profile is diminished; (5) the incremental recovery is reduced significantly. In general, the effect of thermal degradation on polymer flooding performance is rather negative. This paper provides a more comprehensive insight into polymer thermal degradation in both the physical process and field application. The proposed simulation model offers an effective means of simulating the polymer flooding process with thermal degradation. The negative effect of thermal degradation suggests that polymer thermal stability should be given full consideration when designing a polymer flooding project in elevated-temperature reservoirs.
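A sketch of the degradation law reported above: viscosity decays exponentially with the polymer degradation time (PDT) while concentration stays constant. The rate constant and initial viscosity below are hypothetical, not the paper's fitted values.

```python
# Exponential viscosity decay versus polymer degradation time (illustrative constants).
import numpy as np

def degraded_viscosity(mu0, pdt_days, k=0.02):
    """mu(PDT) = mu0 * exp(-k * PDT); mu0 is the freshly injected viscosity."""
    return mu0 * np.exp(-k * np.asarray(pdt_days))

pdt = np.array([0, 30, 90, 180, 365])            # days since the polymer particle was injected
print(degraded_viscosity(25.0, pdt))             # viscosity [mPa*s] along the streamline
```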

Keywords: polymer flooding, elevated-temperature reservoir, thermal degradation, numerical simulation

Procedia PDF Downloads 143
675 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper applies deep learning methodology to optimize production yield by tuning a few key process parameters in a manufacturing environment. The study examines how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). These models were implemented using the Python-based frameworks TensorFlow and Keras. The targets of the research are precision molding processes in which the temperature ranges between 150°C and 220°C, the pressure ranges between 5 and 15 bar, and the material flow rate ranges between 10 and 50 kg/h; these are critical parameters that have a great effect on yield. A dataset of 1 million production cycles spanning five consecutive years was considered, with detailed logs of the exact parameter settings and the yield output. The LSTM model captured time-dependent trends in the production data, while the CNN analyzed the spatial correlations between parameters. The models were designed in a supervised learning manner; a mean squared error (MSE) loss function was used, optimized with the Adam optimizer. After a total of 100 training epochs, 95% accuracy was achieved by the models recommending optimal parameter configurations. Results indicated an increase in production yield of 12% relative to the traditional RSM and DOE methods. In addition, the error margin was reduced by 8%, yielding consistent product quality from the deep learning models. The monetary value was around $2.5 million annually, the cost saved from material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment on a hybrid cloud: Microsoft Azure was used for data storage, while model training and deployment were performed on Google Cloud AI. Real-time monitoring of the process and automatic tuning of the parameters depend on this cloud infrastructure. In short, deep learning models, especially those employing LSTM and CNN architectures, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
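A minimal Keras sketch in the spirit of the abstract (the layer sizes, window length and data are assumptions, not the authors' exact model): an LSTM maps a window of logged process settings (temperature, pressure, material flow rate) to the expected yield, trained with mean squared error and the Adam optimizer.

```python
# Keras LSTM sketch for yield prediction from process-parameter sequences (stand-in data).
import numpy as np
import tensorflow as tf

timesteps, n_features = 20, 3                      # 20 past cycles x (temperature, pressure, flow rate)
X = np.random.rand(1000, timesteps, n_features).astype("float32")   # stand-in for logged cycles
y = np.random.rand(1000, 1).astype("float32")                       # stand-in for measured yield

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                      # predicted yield
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)   # the paper reports 100 epochs on real logs
print(model.predict(X[:1]))
```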

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 29
674 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia

Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu

Abstract:

Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real dataset application focused on determining predictors of HIV patient survival. Methods: Parametric survival models based on the Exponential, Weibull, Log-normal, Log-logistic, Gompertz and Generalized gamma distributions were considered. A simulation study was carried out under two different settings, namely informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy at Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to the Kaplan-Meier survival estimates for the two sex groups, females showed better survival time than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after three years of antiretroviral therapy follow-up the average CD4 cell count was 305.74, which was quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model were smaller than those of the classical one. Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by considering expert opinions and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could thus be reduced through timely antiretroviral therapy with special attention to the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patient survival.
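A sketch of the classical (non-Bayesian) side of the comparison, using the lifelines package on stand-in data shaped like the cohort (follow-up time, death/censoring indicator, sex); the Bayesian counterpart would place priors on the same log-normal parameters. The numbers are hypothetical, not the Alamata data.

```python
# Kaplan-Meier estimates by sex and a classical log-normal survival fit (lifelines).
import pandas as pd
from lifelines import KaplanMeierFitter, LogNormalFitter

df = pd.DataFrame({
    "time_months": [79, 12, 54, 33, 90, 61, 24, 88, 45, 70],
    "death":       [1,  1,  0,  1,  0,  0,  1,  0,  1,  0],   # 1 = death, 0 = censored
    "female":      [1,  0,  1,  0,  1,  1,  0,  0,  1,  0],
})

for sex, grp in df.groupby("female"):
    km = KaplanMeierFitter().fit(grp["time_months"], grp["death"], label=f"female={sex}")
    print(km.median_survival_time_)               # compare median survival between sexes

lnf = LogNormalFitter().fit(df["time_months"], df["death"])
print(lnf.mu_, lnf.sigma_)                        # classical point estimates; lnf.summary gives standard errors
```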

Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models

Procedia PDF Downloads 196
673 Study of the Combinatorial Impact of Substrate Properties on Mesenchymal Stem Cell Migration Using Microfluidics

Authors: Nishanth Venugopal Menon, Chuah Yon Jin, Samantha Phey, Wu Yingnan, Zhang Ying, Vincent Chan, Kang Yuejun

Abstract:

Cell migration is a vital phenomenon that cells undergo in various physiological processes such as wound healing, disease progression and embryogenesis. Cell migration depends primarily on the chemical and physical cues available in the cellular environment. The chemical cues involve the chemokines secreted and the gradients generated in the environment, while the physical cues reflect the impact of matrix properties such as nanotopography and stiffness on the cells. Mesenchymal Stem Cells (MSCs) have been shown to play a role in wound healing in vivo, and their migration to the site of the wound has been shown to have a therapeutic effect. In the field of stem cell based tissue regeneration of bones and cartilage, one approach has been to introduce scaffolds laden with MSCs into the site of injury to enable tissue regeneration. In this work, we have studied the combinatorial impact of the substrate's physical properties on MSC migration. A microfluidic in vitro model was created to perform the migration studies. The microfluidic model used is a three-compartment device consisting of two cell seeding compartments and one migration compartment. Four different PDMS substrates with varying substrate roughness, stiffness and hydrophobicity were created. Their surface roughness and stiffness were measured using Atomic Force Microscopy (AFM), while their hydrophobicity was measured from the water contact angle using an optical tensiometer. These PDMS substrates were sealed to the microfluidic chip, following which the MSCs were seeded and the cell migration was studied over the period of a week. Cell migration was quantified using fluorescence imaging of the cytoskeleton (F-actin) to find the area covered by the cells inside the migration compartment. The impact of adhesion proteins on cell migration was also quantified using real-time polymerase chain reaction (qRT-PCR). The results suggested that the optimal substrate for cell migration is one with intermediate levels of roughness, stiffness and hydrophobicity; a higher or lower value of these properties affected cell migration negatively. These observations have helped us understand that different substrate properties need to be considered in tandem, especially while designing scaffolds for tissue regeneration, as cell migration is normally shaped by the combinatorial impact of the matrix. These observations may lead to scaffold optimization in future tissue regeneration applications.
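One way the area-coverage quantification described above could be scripted is sketched below: an F-actin fluorescence image of the migration compartment is thresholded and the fraction of covered pixels is reported. The file name, thresholding choice and scaling are assumptions, not the authors' image-analysis pipeline.

```python
# Hedged sketch: estimating cell coverage in the migration compartment by Otsu thresholding.
from skimage import io, filters

img = io.imread("factin_migration_compartment.tif", as_gray=True)   # hypothetical image file
mask = img > filters.threshold_otsu(img)          # foreground = stained cytoskeleton
coverage = mask.mean()                            # fraction of compartment area covered by cells
print(f"area covered by migrating cells: {coverage:.1%}")
```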

Keywords: cell migration, microfluidics, in vitro model, stem cell migration, scaffold, substrate properties

Procedia PDF Downloads 557
672 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools

Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus

Abstract:

Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding. In this way, machine tools can achieve a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of the machine components, both individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decreasing the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, thus enabling an optimized design of the machine tool, and of its components, in the early design phase. The core element of this approach is a matched FEM model that considers all relevant variables, e.g. laser power, angle of the laser beam, reflection coefficients and heat transfer coefficient. Hence, a systematic approach to obtaining this matched FEM model is essential. Describing the thermal behavior of the structural components and predicting the laser beam path, in order to determine the relevant beam intensity on those components, are the two constituent aspects of the method. To match the model, both aspects have to be combined and verified empirically. In this context, an essential component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative counterpart to the described experimental examinations is presented. In conclusion, it is shown that the method, together with a good understanding of the two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, helps designers to minimize the loss of precision in the early stages of the design phase.
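To illustrate how the variables named above feed the matched FEM model, the following sketch evaluates the heat load a structural surface receives from the laser as a function of beam power, incidence angle and surface reflectivity; the numbers are illustrative and the formula is a simple energy-balance approximation, not the paper's model.

```python
# Absorbed laser heat flux on a structural surface (illustrative values).
import math

def absorbed_heat_flux(power_w, reflectivity, incidence_deg, spot_area_m2):
    """Absorbed flux ~ P * (1 - R) * cos(theta) / A_spot  [W/m^2]."""
    return power_w * (1.0 - reflectivity) * math.cos(math.radians(incidence_deg)) / spot_area_m2

q = absorbed_heat_flux(power_w=2000.0, reflectivity=0.7, incidence_deg=30.0, spot_area_m2=1e-4)
print(f"{q:.3e} W/m^2")   # boundary load to apply to the FEM surface elements
```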

Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects

Procedia PDF Downloads 265
671 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection

Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng

Abstract:

Nitrite (NO2-) ion is used prevalently as a preservative in processed meat. Elevated levels of nitrite are also found in edible bird's nests (EBNs). Consumption of NO2- ion at levels above the health-based risk threshold may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO2- ion detection; however, it requires careful control of the pH at each reaction step and is susceptible to interference from strong oxidants and dyes. Other traditional methods rely on laboratory-scale instruments such as GC-MS, HPLC and ion chromatography, which cannot give a real-time response. Therefore, there is a significant need for devices capable of measuring the nitrite concentration in situ, rapidly, and without reagents, sample pretreatment or extraction steps. Herein, we constructed a microspheres-based microbial optode for visual quantitation of the NO2- ion. Raoultella planticola, a bacterium expressing the NAD(P)H nitrite reductase (NiR) enzyme, was successfully isolated by microbial techniques from an EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. When the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)+, the NO2- ion is reduced to ammonium hydroxide, and a colour change of the immobilized Nile Blue chromoionophore from blue to pink is perceived, as a result of the deprotonation reaction increasing the local pH in the microspheres membrane. The microspheres-based optosensor was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify the NO2- ion at 0.1 ppm and had a linear response up to 400 ppm. Due to the large surface-area-to-mass ratio of the acrylic microspheres, the membrane allows efficient solid-state diffusional mass transfer of the substrate to the bio-recognition phase and achieves a steady-state response in as little as 5 min. The proposed optical microbial biosensor requires no sample pre-treatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence it is suitable for measurements in contaminated samples.
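A sketch of the calibration step implied by the linear 0.1-400 ppm response: the reflectance signal at 639 nm is regressed against standard nitrite concentrations, and the fit is inverted to read an unknown sample. The response values below are hypothetical, not the reported calibration data.

```python
# Illustrative linear calibration of the bio-optode response versus nitrite concentration.
import numpy as np

conc_ppm = np.array([0.1, 1, 10, 50, 100, 200, 400])                 # standards within the linear range
response = np.array([0.02, 0.05, 0.21, 0.90, 1.75, 3.40, 6.72])      # hypothetical reflectance change at 639 nm

slope, intercept = np.polyfit(conc_ppm, response, 1)

def nitrite_from_response(r):
    """Invert the calibration line to estimate the nitrite concentration of a sample."""
    return (r - intercept) / slope

print(f"sample with response 2.5 -> {nitrite_from_response(2.5):.1f} ppm NO2-")
```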

Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric

Procedia PDF Downloads 448