Search results for: window display
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1047

297 Hazardous Effects of Metal Ions on the Thermal Stability of Hydroxylammonium Nitrate

Authors: Shweta Hoyani, Charlie Oommen

Abstract:

HAN-based liquid propellants are perceived as a potential substitute for hydrazine in space propulsion. Storage stability over a long service life in orbit is one of the key concerns for HAN-based monopropellants because of their reactivity with metallic and non-metallic impurities that can be entrained from the surfaces of fuel tanks and tubing. The end result of this reactivity directly affects the handling, performance, and storability of the liquid propellant. Gaseous products resulting from decomposition of the propellant can lead to deleterious pressure build-up in storage vessels. The partial loss of an energetic component can change the ignition and combustion behavior and alter the performance of the thruster. In this context, the effect of the most plausible metals (iron, copper, chromium, nickel, manganese, molybdenum, zinc, titanium, and cadmium) on the thermal decomposition mechanism of HAN has been investigated. Studies involving different concentrations of metal ions and HAN at different preheat temperatures have been carried out. The effect of metal ions on the decomposition behavior of HAN has been studied earlier in the context of HAN-based gun propellants; the current investigation, however, pertains to the decomposition mechanism of HAN used as a monopropellant for space propulsion. Decomposition onset temperature, rate of weight loss, and heat of reaction were studied using DTA-TGA, and total pressure rise and rate of pressure rise during decomposition were evaluated using an in-house built constant-volume batch reactor. In addition, the reaction mechanism and product profile were studied using a TGA-FTIR setup. Iron and copper displayed the strongest effect. Initial results indicate that iron and copper show a sensitizing effect at concentrations as low as 50 ppm in 60% HAN solution at 80°C. In contrast, 50 ppm zinc does not display any effect on the thermal decomposition of even 90% HAN solution at 80°C.

Keywords: hydroxylammonium nitrate, monopropellant, reaction mechanism, thermal stability

Procedia PDF Downloads 399
296 Multimedia Container for Autonomous Car

Authors: Janusz Bobulski, Mariusz Kubanek

Abstract:

The main goal of the research is to develop a multimedia container structure containing three types of images: RGB, lidar, and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating, saving, and restoring this type of file. It is also necessary to develop a method for synchronizing data from the lidar, RGB, and infrared cameras. Autonomous cars are increasingly entering public awareness, and few doubt that self-driving cars are the future of motoring. Manufacturers promise that the first of them will reach showrooms within the next few years. Many experts believe that a network of communicating autonomous cars could completely eliminate accidents. To make this possible, however, it is necessary to develop effective methods of detecting objects around a moving vehicle. In bad weather conditions, this task is difficult using the RGB (red, green, blue) image alone; in such situations the vehicle should be supported by information from other sources, such as lidar or infrared cameras. The problem is that individual types of devices return different data formats, and these data must also be synchronized and consistently formatted. The goal of the project is therefore to develop a file structure that can contain different types of data. This type of file is called a multimedia container: a container that holds many data streams, which allows complete multimedia material to be stored in one file. The data streams in such a container include images, video, sound, and subtitles, as well as additional information, i.e., metadata. This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. As shown by preliminary studies, combining RGB and infrared images with lidar data allows for easier data analysis. Thanks to this approach, it will be possible to display the distance to an object in a color photo. Such information can be very useful for drivers and for systems in autonomous cars.
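
As a rough illustration of the kind of container described above, the following Python sketch bundles an RGB image, an infrared image, and a lidar point cloud into one time-stamped record and pairs the streams by nearest timestamp. The field names, the tolerance value, and the synchronization rule are illustrative assumptions, not the authors' actual file format.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import bisect

import numpy as np


@dataclass
class CalibratedFrame:
    """One synchronized record of the three sensor streams."""
    timestamp: float                 # seconds, common clock
    rgb: np.ndarray                  # H x W x 3 image
    infrared: np.ndarray             # H x W thermal image
    lidar: np.ndarray                # N x 3 point cloud (x, y, z)
    metadata: Dict[str, str] = field(default_factory=dict)


def synchronize(rgb_stream, ir_stream, lidar_stream, tolerance=0.05):
    """Pair each RGB frame with the nearest-in-time IR and lidar samples.

    Each stream is a list of (timestamp, data) tuples sorted by timestamp.
    Frames with no match within `tolerance` seconds are dropped.
    """
    def nearest(stream, t):
        times = [ts for ts, _ in stream]
        i = bisect.bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(stream)]
        if not candidates:
            return None
        j = min(candidates, key=lambda k: abs(times[k] - t))
        return stream[j] if abs(times[j] - t) <= tolerance else None

    frames: List[CalibratedFrame] = []
    for t, rgb in rgb_stream:
        ir = nearest(ir_stream, t)
        pc = nearest(lidar_stream, t)
        if ir is not None and pc is not None:
            frames.append(CalibratedFrame(t, rgb, ir[1], pc[1]))
    return frames
```

Each CalibratedFrame could then be serialized as one record of the container file, for example with numpy.savez, together with the shared calibration metadata.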

Keywords: an autonomous car, image processing, lidar, obstacle detection

Procedia PDF Downloads 199
295 Construction Strategy of Urban Public Space in Driverless Era

Authors: Yang Ye, Hongfei Qiu, Yaqi Li

Abstract:

The planning and construction of traditional cities are car-oriented, which leads to insufficient, fragmented, and inefficiently used urban public space. With the development of driverless technology, the urban structure will change from the traditional single-core grid structure to a multi-core model. In terms of traffic organization, with the release of land from traffic facilities, public space will become more continuous and more integrated with traffic space. In the context of driverless technology, the reconstruction of urban public space is characterized by modularization and high efficiency, and its planning and layout features follow points (service facilities), lines (smart lines), and surfaces (activity centers). The public space along driverless urban roads will provide diversified urban public facilities and services. The intensive urban layout allows commercial public space to host central activities and style display in the interior (building atrium) and the exterior (building periphery), respectively. In addition to its recreational function, urban green space can also use underground parking space for the efficient dispatching of shared cars. The roads inside residential communities will be integrated into the urban landscape, providing conditions for community public activity space that changes with the time of day and improving the efficiency of space utilization. The intervention of driverless technology will change the thinking of traditional urban construction and make it human-oriented. As a result, urban public space will be richer, more connected, and more efficient, and urban spatial justice will be improved. By summarizing frontier research, this paper discusses the impact of driverless technology on cities, especially on urban public space, which helps landscape architects cope with future development and changes in the industry and provides a reference for related research and practice.

Keywords: driverless, urban public space, construction strategy, urban design

Procedia PDF Downloads 93
294 Identification and Origins of Multiple Personality: A Criterion from Wiggins

Authors: Brittany L. Kang

Abstract:

One familiar theory of the origin of multiple personalities focuses on how symptoms of trauma or abuse are central causes, as seen in paradigmatic examples of the condition. The theory states that multiple personalities constitute a congenital condition, as babies all exhibit multiplicity, and that alters generally remain separated only due to trauma. In more typical cases, the alters converge and become a single identity; only in cases of trauma, according to this account, do the alters remain separated. This theory is misleading in many respects, the most prominent being that not all multiple personality patients are victims of child abuse or trauma, nor are all cases of multiple personality observed in early childhood. The use of this criterion also causes clinical problems, including an inability to identify multiple personalities through the variety of symptoms and traits seen across observed cases. These issues present a need for revision of the currently applied criterion in order to separate the notion of child abuse and to better understand the origins of multiple personality itself. Identifying multiplicity through the application of identity theories will improve the current criterion, offering a bridge between identifying existing cases and understanding their origins. We begin by applying arguments from Wiggins, who held that each personality within a multiple was not a whole individual, but rather one of several characters who take turns. Wiggins’ theory is supported by observational evidence of how such characters are differentiated. Alters of older ages are seen to require different prescription lenses, in addition to having different handwriting. The alters may also display drastically varying styles of clothing, preferences in food, gender, sexuality, religious beliefs, and more. The definitions of terms such as 'personality' or 'person' also become more clearly distinguished, leading to a greater understanding of who exactly can be classified as a patient with multiple personalities. While a more common meaning of personality is a designation of specific characteristics which account for the entirety of a person, this paper argues from Wiggins’ theory that each 'personality' is in fact only partial. Clarification of the concept in question will allow for more successful future clinical applications.

Keywords: identification, multiple personalities, origin, Wiggins' theory

Procedia PDF Downloads 222
293 Greek Teachers' Understandings of Typical Language Development and of Language Difficulties in Primary School Children and Their Approaches to Language Teaching

Authors: Konstantina Georgali

Abstract:

The present study explores Greek teachers’ understandings of typical language development and of language difficulties. Its core aim was to highlight that teachers need to have a thorough understanding of educational linguistics, that is, of how language figures in education. They should also be aware of how language should be taught so as to promote language development for all students while at the same time supporting the needs of children with language difficulties in an inclusive ethos. The study thus argued that language can be a dynamic learning mechanism in the minds of all children and a powerful teaching tool in the hands of teachers, and provided current research evidence to show that structural and morphological particularities of native languages (in this case, of the Greek language) can be used by teachers to enhance children’s understanding of language and simultaneously improve oral language skills for children with typical language development and for those with language difficulties. The research was based on a Sequential Exploratory Mixed Methods Design deployed in three consecutive and integrative phases. The first phase involved 18 exploratory interviews with teachers. Its findings informed the second phase, a questionnaire survey with 119 respondents. Contradictory questionnaire results were further investigated in a third phase employing a formal testing procedure with 60 children attending Y1, Y2 and Y3 of primary school (a research group of 30 language-impaired children and a comparison group of 30 children with typical language development, both identified by their class teachers). Results showed both strengths and weaknesses in teachers’ awareness of educational linguistics and of language difficulties. They also provided a different perspective on children’s language needs and on language teaching approaches that reflected current advances and conceptualizations of language problems, and opened a new window on how best these needs can be met in an inclusive ethos. However, teachers barely used teaching approaches that could capitalize on the particularities of the Greek language to improve language skills for all students in class. Although they seemed to realize the importance of oral language skills and their knowledge base on language-related issues was adequate, their practices indicated that they did not see language as a dynamic teaching and learning mechanism that can promote children’s language development and, in tandem, improve academic attainment. Important educational implications arose, along with clear indications that the findings generalize beyond the Greek educational context.

Keywords: educational linguistics, inclusive ethos, language difficulties, typical language development

Procedia PDF Downloads 365
292 Lies and Pretended Fairness of Police Officers in Sharing

Authors: Eitan Elaad

Abstract:

The current study aimed to examine lying and pretended fairness by police personnel in sharing situations. Forty Israeli police officers and 40 laypeople from the community, all males, self-assessed their lie-telling ability, rated the frequency of their lies, evaluated the acceptability of lying, and indicated their use of rational and intuitive thinking while lying. Next, according to the ultimatum game procedure, participants were asked to share 100 points with an imagined target, either a male policeman or a male non-policeman. Participants allocated points to the target person bearing in mind that the other person must accept or reject their offer. Participants' goal was to retain as many points as possible, and to this end, they could tell the target person that fewer than 100 points were available for distribution. We defined concealment, or lying, as the difference between the available 100 points and the sum of points designated for sharing. Results indicated that police officers lied less to their fellow police targets than to non-police targets, whereas laypeople lied less to non-police targets than to imagined police targets. The ratio between the points offered to the imagined target person and the points the participant declared as available for sharing defined pretended fairness. Enhanced pretended fairness indicates higher motivation to display fair sharing even if the fair sharing is fictitious. Police officers presented higher pretended fairness to police targets than laypeople did, whereas laypeople displayed more fairness to non-police targets than police officers did. We discuss the results in relation to occupational solidarity and loyalty among police personnel. Specifically, police work involves uncertainty, danger and risk, coercive authority, and the use of force, which isolates the police from the community and dictates strong bonds of solidarity between police personnel. It is therefore no wonder that police officers shared more points with (lied less to) fellow police targets than non-police targets. On the other hand, police legitimacy, or the belief that the police are acting honestly in the best interest of the citizens, shapes citizens' attitudes toward the police. The relatively low number of points laypeople declared as available for distribution to police targets indicates difficulties with the legitimacy of the Israeli police.
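
A small worked example of the two measures defined above; the numbers are illustrative only and do not come from the study.

```python
def concealment(declared_points, total_points=100):
    # Points hidden from the target (the lying measure)
    return total_points - declared_points

def pretended_fairness(offered_points, declared_points):
    # Share of the *declared* pot offered to the target
    return offered_points / declared_points

# Illustrative values: declare 70 of the real 100 points, offer 30 of them
print(concealment(70))                        # 30 points concealed
print(round(pretended_fairness(30, 70), 2))   # 0.43, i.e. the offer looks
                                              # closer to an even split than it is
```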

Keywords: lying, fairness, police solidarity, police legitimacy, sharing, ultimatum game

Procedia PDF Downloads 100
291 Use of Fractal Dimensions for the Analysis of Hyperkinetic Movements in Parkinson's Disease

Authors: Sadegh Marzban, Mohamad Sobhan Sheikh Andalibi, Farnaz Ghassemi, Farzad Towhidkhah

Abstract:

Parkinson's disease (PD), which is characterized by tremor at rest, rigidity, akinesia or bradykinesia, and postural instability, affects the quality of life of the individuals involved. The concept of a fractal is most often associated with irregular geometric objects that display self-similarity. The fractal dimension (FD) can be used to quantify the complexity and self-similarity of an object such as a tremor. In this work, we aim to propose a new method for evaluating hyperkinetic movements such as tremor, using the FD and other correlated parameters in patients suffering from PD. In this study, we used the tremor data from PhysioNet. The database consists of fourteen participants diagnosed with PD, including six patients with high-amplitude tremor and eight patients with low-amplitude tremor. We tried to extract features from the data that can distinguish between patients before and after medication. We selected fractal dimensions, including the correlation dimension, box dimension, and information dimension. The Lilliefors test was used to test for normality. A paired t-test or a Wilcoxon signed-rank test was then used to find differences between patients before and after medication, depending on whether normality was detected. In addition, two-way ANOVA was used to investigate the possible association between the therapeutic effects and the features extracted from the tremor. Just one of the extracted features showed significant differences between patients before and after medication. According to the results, the correlation dimension was significantly different before and after the patients' medication (p=0.009). Two-way ANOVA demonstrated significant differences only for the medication effect (p=0.033), with no significant differences between subjects (p=0.34) or for the interaction (p=0.97). The most striking result emerging from the data is that the correlation dimension could quantify medication treatment based on tremor. This study has provided a technique for evaluating a non-linear measure for quantifying medication, namely the correlation dimension. Furthermore, this study supports the idea that fractal dimension analysis yields additional information compared with conventional spectral measures in the detection of poor-prognosis patients.
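
A minimal sketch of the statistical comparison described above, assuming the correlation dimension has already been estimated per patient before and after medication; the array values are placeholders, not the PhysioNet data, and the Shapiro-Wilk test stands in for the Lilliefors test mentioned in the abstract.

```python
import numpy as np
from scipy import stats

# Placeholder correlation-dimension estimates, one value per patient
before = np.array([1.92, 2.10, 1.85, 2.31, 2.05, 1.97, 2.20, 2.15])
after  = np.array([1.70, 1.88, 1.73, 2.02, 1.81, 1.79, 1.95, 1.90])

differences = before - after

# Normality check on the paired differences (Shapiro-Wilk as a stand-in
# for the Lilliefors test used in the study)
_, p_norm = stats.shapiro(differences)

if p_norm > 0.05:
    stat, p_value = stats.ttest_rel(before, after)   # paired t-test
    test_name = "paired t-test"
else:
    stat, p_value = stats.wilcoxon(before, after)    # Wilcoxon signed-rank test
    test_name = "Wilcoxon signed-rank test"

print(f"{test_name}: statistic={stat:.3f}, p={p_value:.3f}")
```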

Keywords: correlation dimension, non-linear measure, Parkinson’s disease, tremor

Procedia PDF Downloads 224
290 DNA Fingerprinting of Some Major Genera of Subterranean Termites (Isoptera) (Anacanthotermes, Psammotermes and Microtermes) from Western Saudi Arabia

Authors: AbdelRahman A. Faragalla, Mohamed H. Alqhtani, Mohamed M. M.Ahmed

Abstract:

Saudi Arabia currently hosts a diverse assemblage of subterranean termite fauna inflicting heavy damage on valued human property in homes, storage facilities, and warehouses, and on agricultural and horticultural crops including okra, sweet pepper, tomatoes, sorghum, date palm trees, and citrus, as well as on many forest domains and green desert oases. The most pressing priority is to use modern technologies to overcome the painstaking obstacle of taxonomic identification of these injurious pests, which could lead to effective pest control in both infested agricultural commodities and field crops. Our study applied DNA fingerprinting technologies to generate basic information on the genetic similarity between three predominant families containing the most destructive termite species. The methodology included DNA extraction and isolation from members of the major families, the use of randomly selected primers, and PCR amplification of the nucleotide sequences. GC content and annealing temperatures were determined for all primers, and PCR amplification and agarose gel electrophoresis were conducted, in addition to the scoring and analysis of Random Amplified Polymorphic DNA (RAPD) profiles. A phylogenetic analysis of the different species was performed with a statistical computer program on the basis of the RAPD results and represented as a dendrogram based on the average band-sharing ratio between species. Our study aims to shed more light on this intriguing subject, which may lead to an expedited display of the kinship and relatedness of species in an ambitious undertaking to arrive at a correct taxonomic classification of termite species and to discover sibling species, so that a rational pest management strategy can be delineated.
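
The dendrogram construction described above can be sketched as follows, assuming RAPD band profiles are coded as presence/absence vectors. The taxon names and band patterns below are invented placeholders: pairwise band-sharing similarity (Dice coefficient) is converted to a distance matrix and clustered with average linkage.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Placeholder presence/absence (1/0) band profiles, one row per taxon
species = ["Anacanthotermes sp.", "Psammotermes sp.", "Microtermes sp."]
bands = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 0, 1, 1],
])

def dice_similarity(a, b):
    # Band-sharing ratio: 2 * shared bands / (bands in a + bands in b)
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

n = len(species)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            dist[i, j] = 1.0 - dice_similarity(bands[i], bands[j])

# Average-linkage clustering on the condensed distance matrix
Z = linkage(squareform(dist, checks=False), method="average")
dendrogram(Z, labels=species)
plt.tight_layout()
plt.show()
```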

Keywords: DNA fingerprinting, Western Saudi Arabia, DNA primers, RAPD

Procedia PDF Downloads 407
289 Urban Corridor Management Strategy Based on Intelligent Transportation System

Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain

Abstract:

Intelligent Transportation Systems (ITS) are the application of technology for developing a user-friendly transportation system for urban areas in developing countries. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. This paper reviews past studies of ITS that have been successfully deployed in urban corridors in India and abroad and describes the current scenario and the methodology considered for the planning, design, and operation of traffic management systems. It also presents the effort made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of six-lane and eight-lane divided road sections. Two categories of data were collected in February 2016: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, a radar gun, mobile GPS, and a stopwatch. The performance analysis included the identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by the plotting of speed contours and the recommendation of urban corridor management strategies. From the analysis, it is found that ITS-based urban corridor management strategies will be useful to reduce congestion, fuel consumption, and pollution so as to provide comfort and efficiency to the users. The paper presents urban corridor management strategies based on sensors incorporated both in vehicles and on the roads.
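
As a small illustration of the performance interpretation mentioned above (peak-hour identification and average delay), the following sketch works on hypothetical hourly volumes and delays at one mid-block section; the numbers are placeholders, not the study corridor's data.

```python
# Hypothetical hourly volumes (veh/h) and average delays (s/veh); placeholder values only
hourly_volume = {
    "08:00": 4200, "09:00": 4650, "10:00": 3900, "11:00": 3600,
    "17:00": 4800, "18:00": 5100, "19:00": 4300,
}
hourly_delay = {
    "08:00": 38, "09:00": 44, "10:00": 29, "11:00": 25,
    "17:00": 52, "18:00": 61, "19:00": 41,
}

peak_hour = max(hourly_volume, key=hourly_volume.get)
off_peak_hour = min(hourly_volume, key=hourly_volume.get)
avg_delay = sum(hourly_delay.values()) / len(hourly_delay)

print(f"Peak hour: {peak_hour} ({hourly_volume[peak_hour]} veh/h)")
print(f"Off-peak hour: {off_peak_hour} ({hourly_volume[off_peak_hour]} veh/h)")
print(f"Average mid-block delay: {avg_delay:.1f} s/veh")
```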

Keywords: congestion, ITS strategies, mobility, safety

Procedia PDF Downloads 422
288 Using E-learning in a Tertiary Institution during Community Outbreak of COVID-19 in Hong Kong

Authors: Susan Ka Yee Chow

Abstract:

The coronavirus disease (COVID-19) reached Hong Kong in 2019, resulting in an epidemic in late January 2020. Considering the development of the epidemic, tertiary institutions announced that all on-campus classes were suspended from 29 January 2020. In Tung Wah College, e-learning was adopted in all courses for all programmes. For undergraduate nursing students, the contact hours and curriculum are bound by the Nursing Council of Hong Kong to ensure core competence after graduation. Unlike usual e-learning, where students are allowed flexibility of time and place in their learning, a real-time learning mode using Blackboard was used to mimic the actual classroom learning environment. Students were required to attend classes according to the timetable using the online platform. For lectures, voice-over PowerPoint files were the initial step for mass lecturing; real-time lectures were then adopted to improve interactions between teacher and students. Post-lecture quizzes were developed to monitor the effectiveness of lecture delivery. The seminars and tutorials were conducted in real-time mode, with students separated into small groups for interactive discussions with the teacher within each group. Real-time demonstrations were conducted during laboratory sessions. All teaching sessions were audio/video recorded for students’ later reference. The assessments, including seminar presentations and a debate, were retained. The learning mode allows students to display visual, audio, and written work in a non-threatening atmosphere; other students can comment using text or voice as they desire. Real-time online learning is a pedagogy that can replace classroom contact under emergent and unforeseeable circumstances. The learning pace and the interaction between students and between students and teacher are maintained. The learning mode has the advantage of creating an effective and beneficial learning experience.

Keywords: e-learning, nursing curriculum, real time mode, teaching and learning

Procedia PDF Downloads 95
287 Methodological Approach for Historical Building Retrofit Based on Energy and Cost Analysis in the Different Climatic Zones

Authors: Selin Guleroglu, Ilker Kahraman, E. Selahattin Umdu

Abstract:

In today’s world, the building sector has a significant impact on primary energy consumption and CO₂ emissions. While new buildings must have high energy performance, as required by the Energy Performance of Buildings Directive (EPBD) published by the European Union (EU), the energy performance of existing buildings must also be enhanced with cost-efficient methods. Turkey has a high density of historical buildings, similar to southern European countries, and the building sector is a main contributor to the energy consumption of Turkey, which is rather higher than that of its European counterparts. Historic buildings are spread across Turkey's four main climate zones, which cover climate characteristics very similar to those of both northern and southern European countries. The case study building is the most common historical building type in Turkey. This study aims to investigate energy retrofit measures, covering but not limited to passive and active measures, to improve the energy performance of historical buildings located in different climatic zones within the limits of preserving the historical value of the building as a crucial constraint. Passive measures include wall, window, and roof construction elements, and active measures include HVAC systems in the retrofit scenarios. The proposed methodology can help reach up to 30% energy savings based on primary energy consumption. DesignBuilder, an energy simulation tool, is used to determine the energy performance of the buildings with the suggested retrofit measures, and the Net Present Value (NPV) method is used for their cost analysis. Finally, the most efficient energy retrofit measures for all buildings are determined by analyzing their primary energy consumption and cost performance. Results show that heat insulation, glazing type, and the HVAC system have an important role in energy saving. It was also found that these parameters can have either a positive or a negative effect on building energy consumption, depending on the climate zone. For instance, low-e glazing has a positive impact on the energy performance of the building in the first zone, while it has a negative effect on the building in the fourth zone. Another important result is that, in some zones, applying heat insulation has minimal impact on building energy performance compared to the other zones.
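
The cost side of the methodology can be illustrated with a short NPV sketch; the investment costs, annual energy savings, discount rate, and lifetime below are assumed values for illustration, not figures from the study.

```python
def npv(investment, annual_saving, discount_rate, years):
    """Net present value of a retrofit measure.

    investment     -- upfront cost (cash outflow at year 0)
    annual_saving  -- yearly energy-cost saving, assumed constant
    discount_rate  -- e.g. 0.06 for 6 %
    years          -- evaluation horizon
    """
    discounted = sum(annual_saving / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return discounted - investment

# Assumed example measures (costs in EUR, savings in EUR/year)
measures = {
    "wall insulation":  npv(18_000, 2_100, 0.06, 20),
    "low-e glazing":    npv(12_000, 1_300, 0.06, 20),
    "HVAC replacement": npv(25_000, 3_400, 0.06, 20),
}
for name, value in sorted(measures.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>16}: NPV = {value:,.0f} EUR")
```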

Keywords: energy performance, climatic zones, historic building, energy retrofit measures, NPV

Procedia PDF Downloads 141
286 Ageing Gingiva: A New Hope for Autologous Stem Cell Therapy

Authors: Ankush M. Dewle, Suditi Bhattacharya, Prachi R. Abhang, Savita Datar, Ajay J. Jog, Rupesh K. Srivastava, Geetanjali Tomar

Abstract:

Objectives: The aim of this study was to investigate the quality of mesenchymal stem cells (MSCs) obtained from ageing gingival tissues, in order to suggest their potential role in autologous stem cell therapy for older individuals. Methods: MSCs were isolated from the gingival tissues of young (18-45 years) and old (above 45 years) donors by enzymatic digestion. The MSCs were analysed for cfu-f, surface marker expression by flow cytometry, and multilineage differentiation potential. The angiogenic potential was compared in a chick embryo yolk sac membrane model. The ageing and differentiation markers, SA-β-galactosidase and p21 respectively, were analysed by staining and flow-cytometry analysis. Additionally, osteogenic markers such as the glucocorticoid receptor (GR) and vitamin D receptor (VDR) were measured by flow cytometry, and RT-qPCR was performed for quantification of osteogenic gene expression. Alizarin Red S staining and alkaline phosphatase (ALP) activity were also quantified. Results: Gingival MSCs (GMSCs) from both age groups were similar in morphology and displayed cfu-f. They had similar expression of MSC surface markers and p21, comparable rates of proliferation, and differentiated to all four lineages. GMSCs from young donors had a higher adipogenic differentiation potential than the old GMSCs. Moreover, these cells did not display a significant difference in ALP activity, probably due to comparable expression of GR, VDR, and osteogenic genes. Conclusions: Ageing of GMSCs occurs at a much slower rate than that of stem cells from other sources. Thus, we suggest GMSCs as an excellent candidate for autologous stem cell therapy in degenerative diseases of elderly individuals. Clinical Significance: GMSCs could help overcome the setbacks in the clinical implementation of autologous stem cell therapy for regenerative medicine in all age groups of patients.

Keywords: bone regeneration, cell therapy, senescence, stem cell

Procedia PDF Downloads 162
285 Deciphering Orangutan Drawing Behavior Using Artificial Intelligence

Authors: Benjamin Beltzung, Marie Pelé, Julien P. Renoult, Cédric Sueur

Abstract:

To this day, it is not known whether drawing is a specifically human behavior or whether this behavior finds its origins in ancestor species. An interesting window on this question is to analyze drawing behavior in species genetically close to humans, such as non-human primates. A good candidate for this approach is the orangutan, which shares 97% of our genes and exhibits multiple human-like behaviors. Focusing on figurative aspects may not be suitable for orangutans’ drawings, which may appear as scribbles but may nevertheless have meaning. A manual feature selection would lead to an anthropocentric bias, as the features selected by humans may not match those relevant for orangutans. In the present study, we used deep learning to analyze the drawings of a female orangutan named Molly († in 2011), who produced 1,299 drawings during her last five years as part of a behavioral enrichment program at the Tama Zoo in Japan. We investigate multiple ways to decipher Molly’s drawings. First, we demonstrate the existence of differences between seasons by training a deep learning model to classify Molly’s drawings according to season. Then, to understand and interpret these seasonal differences, we analyze how the information spreads within the network, from shallow to deep layers, where early layers encode simple local features and deep layers encode more complex and global information. More precisely, we investigate the impact of feature complexity on classification accuracy through feature extraction fed to a Support Vector Machine. Last, we leverage style transfer to dissociate features associated with drawing style from those describing the representational content and analyze the relative importance of these two types of features in explaining seasonal variation. Content features were relevant for the classification, showing the presence of meaning in these non-figurative drawings and the ability of deep learning to decipher these differences. The style of the drawings was also relevant, as style features encoded enough information for classification to be better than random. The accuracy of style features was higher for deeper layers, highlighting the variation of style between seasons in Molly’s drawings. Through this study, we demonstrate how deep learning can help find meaning in non-figurative drawings and interpret these differences.
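
A minimal sketch of the feature-extraction-plus-SVM step described above, assuming the drawings are available as image files organized in one folder per season. The folder layout, the choice of a VGG16 backbone, and the layer index are assumptions for illustration, not the authors' exact pipeline.

```python
from pathlib import Path

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Pretrained CNN used only as a fixed feature extractor
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def extract_features(path, layer_index=10):
    """Activations of an intermediate layer, pooled to a fixed-size vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        for i, module in enumerate(backbone):
            x = module(x)
            if i == layer_index:          # shallower index = simpler features
                break
    return torch.mean(x, dim=(2, 3)).squeeze(0).numpy()  # global average pool

# Hypothetical layout: drawings/<season>/<image>.png
drawings = [(str(p), p.parent.name) for p in Path("drawings").glob("*/*.png")]
X = [extract_features(p) for p, _ in drawings]
y = [label for _, label in drawings]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("Season classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Repeating the same classification with different layer indices is one way to probe how feature complexity affects accuracy, as the abstract describes.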

Keywords: cognition, deep learning, drawing behavior, interpretability

Procedia PDF Downloads 136
284 Photophysics and Photochemistry of Cross-Conjugated Y-Shaped Enediyne Fluorophores

Authors: Anuja Singh, Avik K. Pati, Ashok K. Mishra

Abstract:

Organic fluorophores with π-conjugated scaffolds are important because of their interesting optoelectronic properties. In recent years, our lab has been engaged in understanding the photophysics of small diacetylene-bridged fluorophores and found the diynes to be a promising class of π-conjugated fluorophores. Building on this understanding, we have recently focused on the photophysics of a less explored class of cross-conjugated Y-shaped enediynes (one double and two triple bonds). Here we present the photophysical properties of such enediynes, which include dual emissions from locally excited (LE) and intramolecular charge transfer (ICT) states and ring-size-dependent aggregate fluorescence in non-aqueous media. The dyes also show prominent aggregate fluorescence in mixed-aqueous solvents and in solid powder form. We further show that the solid-state fluorescence can be reversibly switched over multiple cycles by external stimuli, highlighting their potential applications in the solid state. The enediynes with push-pull electronic substituents/moieties exhibit high-contrast fluorescence color switching upon continuous photon illumination. The intriguing photophysical outcomes of the enediynyl fluorophores are judiciously exploited to generate single-component white light emission in binary solvent mixtures and to sense polar aprotic vapors in polymer film matrices. The photophysical behavior of the dyes is further successfully utilized to monitor microenvironment changes of biologically relevant anisotropic media such as bile salts. In summary, the newly introduced cross-conjugated enediynes enrich the toolbox of organic fluorophores and promise versatile applications.

Keywords: aggregation in solution and solid state, enediynes, physical photochemistry and photophysics, vapor sensing and white light emission

Procedia PDF Downloads 461
283 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco

Abstract:

Today, a consistent segment of the world’s population lives in urban areas, and this proportion will vastly increase in the next decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate during the following years. The analysis of various types of data sets and their derived applications has incredible potential across different crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource to assist city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of new layers of information would definitely enhance planners' capabilities to comprehend urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues in greater depth. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining the youth economic discomfort index. The statistical composite index provides insights regarding the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly display that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground for the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
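
A simplified sketch of how a composite index of this kind can be assembled, assuming each indicator has already been aggregated per urban zone. The indicator names, zone names, equal weights, and min-max normalization are illustrative assumptions, not the exact method used for the Rome index.

```python
import pandas as pd

# Hypothetical per-zone indicators for citizens aged 18-29 (placeholder values)
data = pd.DataFrame({
    "zone":           ["Centro", "Tiburtina", "EUR", "Ostia"],
    "unemployment":   [0.21, 0.27, 0.15, 0.30],   # share of 18-29 unemployed
    "rent_to_income": [0.55, 0.42, 0.38, 0.35],   # median rent / median income
    "neet_rate":      [0.18, 0.24, 0.12, 0.26],   # not in education or employment
}).set_index("zone")

# Min-max normalize each indicator to [0, 1]
normalized = (data - data.min()) / (data.max() - data.min())

# Equal weights as an illustrative choice
weights = {"unemployment": 1 / 3, "rent_to_income": 1 / 3, "neet_rate": 1 / 3}
data["discomfort_index"] = sum(normalized[col] * w for col, w in weights.items())

print(data["discomfort_index"].sort_values(ascending=False))
```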

Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index

Procedia PDF Downloads 114
282 “Environmental-Friendly” and “People-Friendly” Project for a New North-East Italian Hospital

Authors: Emanuela Zilli, Antonella Ruffatto, Davide Bonaldo, Stefano Bevilacqua, Tommaso Caputo, Luisa Fontana, Carmelina Saraceno, Antonio Sturaroo, Teodoro Sava, Antonio Madia

Abstract:

The new Hospital in Cittadella - ULSS 6 Euganea Health Trust, in the North-East of Italy (400 beds, project completion date in 2026), will partially take the place of the existing building. Interesting features have been proposed in order to design a modern, “environmental-friendly” and “people-friendly” building. Specific multidisciplinary meetings (involving stakeholders and professionals with different backgrounds) have been organized on a periodic basis in order to guarantee the appropriate implementation of logistic and organizational solutions related to eco-sustainability, integration with the context, and the concepts of “design for all” and “humanization of care.” The resulting building will be composed of organic shapes determined by the external environment (sun movement, climate, landscape, pre-existing buildings, roads) and the needs of the internal environment (areas of care and diagnostic-treatment paths reorganized with experience gained during the pandemic), with extensive use of renewable energy, solar panels, a 4th-generation heating system, and sanitised, maintainable surfaces. Particular attention has been given to the quality of the staff areas, which include areas dedicated to psycho-physical well-being (relax points, yoga gym), study rooms, and a centralized conference room. Outdoor recreational spaces and gardens for music and watercolour therapy will be included; a tai-chi gym is dedicated to oncology patients. Integration in the urban and social context is emphasized through window placement toward the gardens (maternal-infant, mental health, and rehabilitation wards). Service areas such as dialysis, radiology, and the labs have views of the medieval walls, the symbol of the city’s history. The new building has been designed to pursue the maximum level of eco-sustainability, harmony with the environment, and integration with the historical, urban, and social context; the concept of humanization of care has been considered in all phases of project management.

Keywords: environmental-friendly, humanization, eco-sustainability, new hospital

Procedia PDF Downloads 83
281 AIR SAFE: an Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms

Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita

Abstract:

Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of aerial viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people’s well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) Improved monitoring and prediction accuracy; (ii) Enhanced decision-making and mitigation strategies; (iii) Real-time air quality information; (iv) Increased efficiency in data analysis and processing; (v) Advanced early warning systems for air pollution events; (vi) Automated and cost-effective monitoring network; and (vii) A better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures to ensure the occupants’ wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
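
A toy version of the decision step described above, assuming the prediction model already returns forecast temperature, relative humidity, and CO₂ values; the thresholds and actuator names are placeholder assumptions, not the AIR SAFE implementation.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    temperature_c: float
    relative_humidity: float   # percent
    co2_ppm: float

# Placeholder comfort thresholds; the real system would tune these
CO2_LIMIT_PPM = 1000
TEMP_RANGE_C = (20.0, 26.0)

def decide_actions(forecast: Forecast):
    """Map a predicted indoor state to simple actuator commands."""
    actions = []
    if forecast.co2_ppm > CO2_LIMIT_PPM:
        actions.append("open_window")            # ventilate to dilute CO2
    if forecast.temperature_c > TEMP_RANGE_C[1]:
        actions.append("turn_on_air_conditioner")
    elif forecast.temperature_c < TEMP_RANGE_C[0]:
        actions.append("turn_on_heating")
    if not actions:
        actions.append("no_action")              # already comfortable
    return actions

print(decide_actions(Forecast(temperature_c=27.5, relative_humidity=48, co2_ppm=1150)))
# ['open_window', 'turn_on_air_conditioner']
```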

Keywords: air quality, internet of things, artificial intelligence, smart home

Procedia PDF Downloads 66
280 Placement of Inflow Control Valve for Horizontal Oil Well

Authors: S. Thanabanjerdsin, F. Srisuriyachai, J. Chewaroungroj

Abstract:

Drilling horizontal wells is one of the most cost-effective methods to exploit a reservoir, as it increases the exposure area between the well and the formation. Together with horizontal well technology, intelligent completion is often used to increase petroleum production by monitoring and controlling downhole production. The combination of both technologies offers an opportunity to reduce the water cresting phenomenon, a detrimental problem that not only lowers oil recovery but also causes environmental problems due to water disposal. Flow of reservoir fluid results from the difference between reservoir and wellbore pressure. In a horizontal well, reservoir fluid around the heel enters the wellbore at a higher rate than at the toe. As a consequence, the oil-water contact (OWC) at the heel side moves upward relatively faster than at the toe side. This causes the well to encounter an early water encroachment problem. Installation of inflow control valves (ICVs) in particular sections of a horizontal well involves several parameters, such as the number of ICVs, the water cut constraint of each valve, and the length of each section. This study is mainly focused on optimization of the ICV configuration to minimize water production and, at the same time, to enhance oil production. A reservoir model with a high aspect ratio of the oil-bearing zone to the underlying aquifer is drilled with a horizontal well and completed with variations of ICV segments. Optimization of the horizontal well configuration is first performed by varying the number of ICVs, the segment length, and the individual preset water cut for each segment. Simulation results show that installing ICVs can increase the oil recovery factor by up to 5% of the Original Oil In Place (OOIP) and can reduce produced water, depending on the ICV segment length as well as the ICV parameters. For equally partitioned ICV segments, a larger number of segments results in better oil recovery; however, exceeding 10 segments may not give significant additional recovery. In the first production period, the deformation of the OWC strongly depends on the number of segments along the well: a higher number of segments results in a smoother deformation of the OWC. After water breakthrough at the heel segment, the second production period begins, in which the deformation of the OWC is principally dominated by the ICV parameters. In certain situations where the OWC is unstable, such as a high production rate, a high-viscosity fluid above the aquifer, or a strong aquifer, the second production period may give a wide enough window for the ICV parameters to take effect.

Keywords: horizontal well, water cresting, inflow control valve, reservoir simulation

Procedia PDF Downloads 391
279 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan , V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to provide higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms, using matching pursuit to identify segments of audio data that are locally coherent with the seed atoms. The envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the highest denoising capability.
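
The matching-pursuit step described above can be sketched as follows; the Gabor atom parameters, the small dictionary, and the stopping criterion are illustrative assumptions, and the real pipeline additionally feeds the resulting envelope samples to a sparse autoencoder.

```python
import numpy as np

def gabor_atom(length, center, width, freq, fs=16000):
    """Unit-norm Gabor atom: a Gaussian-windowed sinusoid."""
    t = (np.arange(length) - center) / fs
    atom = np.exp(-0.5 * (t / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, dictionary, n_atoms=50):
    """Greedy atomic decomposition of `signal` over `dictionary` (atoms as rows)."""
    residual = signal.copy()
    decomposition = []            # (atom_index, coefficient) pairs
    for _ in range(n_atoms):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeff = correlations[k]
        residual = residual - coeff * dictionary[k]
        decomposition.append((k, coeff))
    return decomposition, residual

# Toy example: a noisy tone decomposed over a small Gabor dictionary
fs, n = 16000, 1024
rng = np.random.default_rng(0)
signal = 3.0 * gabor_atom(n, 512, 0.01, 440, fs) + 0.1 * rng.standard_normal(n)
dictionary = np.array([gabor_atom(n, c, 0.01, f, fs)
                       for c in (256, 512, 768) for f in (220, 440, 880)])
atoms, residual = matching_pursuit(signal, dictionary, n_atoms=5)
snr_gain = 10 * np.log10(np.sum(signal**2) / np.sum(residual**2))
print("selected atoms:", atoms[:3], f"residual energy reduction: {snr_gain:.1f} dB")
```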

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 231
278 Effects of Work Stress and Chinese Indigenous Ren-Qing Shi-Ku Social Wisdom on Emotional Exhaustion, Work Satisfaction and Well-Being of Insurance Workers

Authors: Wang Chung-Kwei, Lo Kuo Ying

Abstract:

This study aimed to examine the main and moderating effects of the Chinese traditional social wisdom 'Ren-qing Shi-ku' on the adjustment of insurance workers. Rationale: Ren-qing Shi-ku, as a form of social wisdom, has been emphasized and practiced by collectivist-oriented Chinese for thousands of years. The concept of 'Ren-qing Shi-ku' includes values, beliefs, and behavioral rituals, which help Chinese people cope with interpersonal conflicts in a sophisticated and closely tied collective society. Based on interviews and a literature review, we found that Chinese people still emphasize the importance of Ren-qing Shi-ku. The concept contains five factors: 'proper emotion display', 'social ritual abiding', 'making empathetic concessions', 'harmonious and proper behavior', and 'tolerance for the interest of the whole'. We developed an indigenous Ren-qing Shi-ku scale based on interview data and a survey of social-work students. Research methods: We conducted a dyadic survey of 294 insurance workers and their supervisors. Insurance workers' responses on Ren-qing Shi-ku, emotional labor, emotional exhaustion, work stress and load, work satisfaction, and well-being were collected. We also asked their supervisors to rate these workers' empathy, social rule abiding, work performance, and Ren-qing Shi-ku performance. Results: Workers' self-ratings on the Ren-qing Shi-ku scale were positively correlated with ratings from their supervisors on all of the above indexes. Workers with higher Ren-qing Shi-ku scores also had lower work stress and emotional exhaustion, higher work satisfaction and well-being, and more emotional deep acting. They also received higher ratings of work performance, social rule abiding, and Ren-qing Shi-ku performance from their supervisors. The findings of this study suggest that Ren-qing Shi-ku is an effective indicator of insurance workers' adjustment. Since Ren-qing Shi-ku is trainable, we suggest that Ren-qing Shi-ku training might be beneficial to the service industry in a collectivist-oriented culture.

Keywords: work stress, Ren-qing Shi-ku, emotional exhaustion, work satisfaction, well-being

Procedia PDF Downloads 457
277 GIS-Based Identification of Overloaded Distribution Transformers and Calculation of Technical Electric Power Losses

Authors: Awais Ahmed, Javed Iqbal

Abstract:

Pakistan has for many years been facing extreme challenges from an energy deficit due to the shortage of power generation compared to increasing demand. A part of this energy deficit is also contributed by the power lost in the transmission and distribution network. Unfortunately, distribution companies are not equipped with modern technologies and methods to identify and eliminate these losses. According to estimates, the total energy lost in the early 2000s was between 20 and 26 percent. To address this issue, the present research study was designed with the objectives of developing a standalone GIS application for distribution companies with the capability of loss calculation as well as identification of overloaded transformers. For this purpose, the Hilal Road feeder in the Faisalabad Electric Supply Company (FESCO) area was selected as the study area. An extensive GPS survey was conducted to identify each consumer, linking it to the secondary pole of the transformer, geo-referencing equipment, and documenting conductor sizes. To identify overloaded transformers, the accumulated kWh readings of the consumers on each transformer were compared with a threshold kWh. Technical losses of the 11 kV and 220 V lines were calculated using the data from the substation and the network resistance calculated from the geo-database. To automate the process, a standalone GIS application was developed using ArcObjects with engineering analysis capabilities. The application uses the GIS database developed for the 11 kV and 220 V lines to display and query spatial data and to present results in the form of graphs. The results show a technical loss of about 14% on the combined high-tension (HT) and low-tension (LT) network, while about 4 out of 15 general duty transformers were found to be overloaded. The study shows that GIS can be a very effective tool for distribution companies in the management and planning of their distribution network.
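
A simplified version of the two checks described above; the transformer ratings, the kWh-to-demand conversion, and the line resistances below are placeholder assumptions, not FESCO data.

```python
# Hypothetical transformer records: rating (kVA) and the accumulated monthly
# kWh of all consumers connected to each transformer (placeholder values)
transformers = {
    "TX-01": {"rating_kva": 100, "consumer_kwh": 41_500},
    "TX-02": {"rating_kva": 50,  "consumer_kwh": 30_800},
    "TX-03": {"rating_kva": 200, "consumer_kwh": 52_300},
}

HOURS_PER_MONTH = 730
POWER_FACTOR = 0.9
LOAD_FACTOR_LIMIT = 0.8          # assumed overload threshold

def is_overloaded(rating_kva, consumer_kwh):
    """Compare the average demand implied by the kWh reading with the rating."""
    avg_kw = consumer_kwh / HOURS_PER_MONTH
    threshold_kw = rating_kva * POWER_FACTOR * LOAD_FACTOR_LIMIT
    return avg_kw > threshold_kw

for name, t in transformers.items():
    status = "overloaded" if is_overloaded(t["rating_kva"], t["consumer_kwh"]) else "ok"
    print(name, status)

# Technical (I^2 * R) loss on a line segment, with resistance taken from the geo-database
def line_loss_kw(current_a, resistance_ohm, phases=3):
    return phases * current_a ** 2 * resistance_ohm / 1000.0

print(f"11 kV feeder segment loss: {line_loss_kw(120, 0.45):.1f} kW")
```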

Keywords: geographical information system, GIS, power distribution, distribution transformers, technical losses, GPS, SDSS, spatial decision support system

Procedia PDF Downloads 353
276 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder

Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo

Abstract:

Uncertainty analysis in industrial computed tomography is commonly tied to traceable metrological tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements to profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of the acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, causing thermal expansion of the metal, as monitored by an infrared camera within the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error due to energy absorption. The error magnitude depends on the thermal properties of the sample and on the acquisition parameters, which introduce apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a particular metal part of symmetric geometry. We consider the current results to be of fundamental value for reaching a balance between the number of projections and the uncertainty tolerance when performing X-ray dimensional exploration in precision measurements with industrial tomography.
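
To give a feel for the magnitude of the heat-induced error discussed above, a back-of-the-envelope linear-expansion estimate follows; the expansion coefficient, nominal dimension, and temperature rise are assumed round numbers, not measured values from the study.

```python
# Linear thermal expansion: delta_L = alpha * L0 * delta_T
alpha_stainless = 16e-6   # 1/K, typical order of magnitude for stainless steel (assumed)
length_mm = 30.0          # nominal cylinder dimension (assumed)
delta_T = 5.0             # temperature rise over the scan, K (assumed)

delta_L_um = alpha_stainless * length_mm * delta_T * 1000.0
print(f"Apparent dimensional change: {delta_L_um:.1f} micrometres")
# ~2.4 um on a 30 mm feature, comparable to tight CT measurement tolerances
```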

Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion

Procedia PDF Downloads 100
275 Cartography through Picasso’s Eyes

Authors: Desiree Di Marco

Abstract:

The aim of this work is to show, through the lens of art, first what kind of reality was represented by fascist maps and, second, to study the impact of the fascist regime’s cartography (FRC) on the observer's eye. In this study, it is assumed that the FRC’s representation of reality was simplified, timeless, and even a-spatial because it underrates the concept of territoriality. Cubism and Picasso’s paintings will be used as counter-examples to demystify fascist cartography’s ideological assumptions. The difference between the gaze of an observer looking at the surface of a fascist map and the gaze of someone observing a Picasso painting is striking. Because there is always something dark and hidden behind and inside a map, the world of fascist maps was a world built from the observation of a “window” that distorted reality and trapped the eyes of the observers. Moving across the map, they seem as if they were hypnotized. Cartohypnosis is the state in which the observer finds himself enslaved by the attractive force of the map, which uses a sort of “magic” geography, a geography that, by means of symbolic language, never has as its primary objective the attempt to show us reality in its complexity, but rather that of performing for its audience. Magical geography and hypnotic cartography blended together in fascism, creating an almost mystical, magical relationship that demystified reality so as to reduce the world to a conquerable space. This reduction offered the observer the possibility of conceiving new dimensions, of the limit and of the boundary, elements in which the subject felt fully involved and in which the aesthetic force of the images demonstrated all its strength. But in the early 20th century, the combination of art and cartography gave rise to new possibilities. Cubism, which more than any other artistic current showed how much observing reality from a single point of view falls within a dangerous logic, is an example. Cubism was an artistic movement that brought about a profound transformation in pictorial culture. It was not only a revolution of pictorial space; it was a revolution of our conception of pictorial space. Up until that time, men and women were more inclined to believe in the power of images and their representations. Cubist painters rebelled against this blindness by claiming that art must always offer an alternative. Indeed, the contribution of this work is precisely to show how art can provide alternatives even to the most horrible regimes and the most atrocious human misfortunes. It also enriches the field of cartography, because it "reassures" it by showing how much cartography can gain when other disciplines come close to it. Only in this way can researchers increase the chances of a wider diffusion of cartography at the academic level.

Keywords: cartography, Picasso, fascism, culture

Procedia PDF Downloads 45
274 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), a large number of additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees are passing courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points and knowledge estimation models to model the knowledge mastery of a learner more accurately, as well as analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course; for example, course content may require modifications (certain sections of learning material may be shown not to help many learners master the intended learning outcomes), or course design may need changes (such as the type and duration of feedback).
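
A rough sketch of the kind of rule-based detection described above, operating on simplified xAPI-style statements; the statement fields and the thresholds are assumptions for illustration, not the exact rules used in the study.

```python
from collections import defaultdict

# Simplified xAPI-style statements (placeholder data)
statements = [
    {"actor": "u1", "verb": "answered", "object": "q1", "choice": "A"},
    {"actor": "u1", "verb": "answered", "object": "q1", "choice": "B"},
    {"actor": "u1", "verb": "answered", "object": "q1", "choice": "C"},
    {"actor": "u1", "verb": "failed", "object": "test"},
    {"actor": "u1", "verb": "failed", "object": "test"},
    {"actor": "u1", "verb": "failed", "object": "test"},
    {"actor": "u1", "verb": "failed", "object": "test"},
    {"actor": "u1", "verb": "passed", "object": "test"},
]

def detect_option_cycling(statements, question_options=4):
    """Flag learners who tried (almost) every option of a multiple-choice question."""
    tried = defaultdict(set)
    for s in statements:
        if s["verb"] == "answered":
            tried[(s["actor"], s["object"])].add(s["choice"])
    return {key for key, choices in tried.items() if len(choices) >= question_options - 1}

def detect_repeated_failures(statements, min_attempts=4):
    """Flag learners who failed a test at least `min_attempts` times before passing."""
    fails = defaultdict(int)
    for s in statements:
        if s["verb"] == "failed":
            fails[(s["actor"], s["object"])] += 1
    return {key for key, n in fails.items() if n >= min_attempts}

print(detect_option_cycling(statements))     # {('u1', 'q1')}
print(detect_repeated_failures(statements))  # {('u1', 'test')}
```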

Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI

Procedia PDF Downloads 101
273 Long-Term Effect of Dialysis Therapy for Osteoporosis and Extra-Osseous Calcification in Chronic Renal Failure

Authors: Itsuo Yokoyama, Rikako Kikuti, Naoko Watabe, Tosinori Asai, Sarai Tsuyoshi

Abstract:

Introduction: Chronic kidney disease involves significant changes in mineral and bone metabolism, referred to as CKD-MBD. These changes lead to decreased bone mass, heightened bone fragility, fractures, and increased vascular and valvular calcification, ultimately impacting cardiovascular outcomes. Key contributors to these complications in dialysis patients include calcium, phosphate, parathyroid hormone (PTH), fibroblast growth factor 23 (FGF23), and the vitamin D hormonal system. Methods: In our outpatient dialysis clinic, we monitor the long-term course of vascular calcification by calculating the volume of calcified areas in the abdominal aorta from CT scan data. The results revealed the progressive nature of vascular calcification. To extend our study, we measured the volume of calcification in bones (vertebrae and femur) at Hounsfield-unit thresholds of 200 and 300. The study aims to investigate changes in osteoporosis during a 5-year follow-up period and their relationship with extra-osseous calcification. Results and Considerations: While extra-osseous calcification was generally progressive and often resistant to medical treatment, the degree of osteoporotic change varied among patients. The majority exhibited continuous osteoporotic changes, while some showed improvement or minimal changes in bone calcification. Variations in the distribution and magnitude of osteoporotic changes were observed between groups based on the timing of hemodialysis initiation during the study; the former group tended to display more osteoporotic change, possibly because of differences in medication between the groups. Other contributing factors may include patients' age, duration of dialysis, or the cause of renal disease. In conclusion, we emphasize the importance of carefully monitoring calcium and phosphate levels and maintaining adequate dialysis therapy to prevent osteoporosis in dialysis patients.
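
As a minimal sketch of the thresholding step described above, the following Python snippet estimates calcified volume from a CT volume by counting voxels above a Hounsfield-unit cutoff. It assumes the CT data have already been loaded as a NumPy array (in HU) and that any region-of-interest mask has been segmented separately; the function and parameter names are illustrative, not the clinic's actual pipeline.

```python
import numpy as np

def calcified_volume_ml(ct_hu, voxel_spacing_mm, hu_threshold=200, roi_mask=None):
    """Estimate calcified volume (in millilitres) from a CT volume in
    Hounsfield units by counting voxels at or above a threshold.

    ct_hu            : 3-D NumPy array of Hounsfield units
    voxel_spacing_mm : (dz, dy, dx) voxel dimensions in millimetres
    hu_threshold     : HU cutoff (e.g. 200 or 300, as in the study)
    roi_mask         : optional boolean array restricting the count to a
                       region of interest (e.g. abdominal aorta or vertebrae)
    """
    mask = ct_hu >= hu_threshold
    if roi_mask is not None:
        mask &= roi_mask
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL
```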

Keywords: CKD-MBD, dialysis, calcification, kidney

Procedia PDF Downloads 25
272 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized as an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering two distinct limiting states, independent or perfectly dependent, for component damage states; however, to the best of our knowledge, no procedure is available to account for loss dependencies at the story level. This paper presents a method, called the "modal cost superposition method", for decoupling story damage costs under earthquake ground motions. It is based on closed-form differential equations relating damage cost to engineering demand parameters, which are solved as a coupled system of all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with defined median and standard deviation and a presumed probability distribution. To supplement the proposed procedure and demonstrate the straightforwardness of its application, a benchmark study has been conducted. Acceptable agreement has been found between the damage costs estimated by the proposed modal approach and by the frequently used stochastic approach for the entire building; at the story level, however, employing a single modification factor proved insufficient for incorporating occurrence-probability dependencies between stories, because the amount of dependency between the damage costs of different stories is discrepant. A greater contribution of dependency to the occurrence probability of loss can also be concluded from the better agreement of loss results in the higher stories than in the lower ones, while reducing the number of cost modes incorporated still provides an acceptable level of accuracy and avoids the time-consuming calculations that including a large number of cost modes would require.
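
The paper's specific cost formulation is not reproduced here, but as a generic illustration of modal decoupling and truncated superposition for a coupled linear system with mass- and stiffness-like matrices, the following Python sketch may help fix ideas; the matrices, right-hand side, and mode count are purely illustrative assumptions and do not represent the authors' substituted-matrix cost equations.

```python
import numpy as np
from scipy.linalg import eigh

def modal_superposition(M, K, f, n_modes=None):
    """Decouple the coupled system K x = f via the generalized eigenproblem
    K phi = lambda M phi, then superpose a truncated set of modes.
    M and K play the role of mass- and stiffness-like matrices."""
    eigvals, phi = eigh(K, M)          # phi columns are M-orthonormal modes
    if n_modes is None:
        n_modes = len(eigvals)
    x = np.zeros(len(f))
    for i in range(n_modes):           # truncated modal superposition
        mode = phi[:, i]
        x += mode * (mode @ f) / eigvals[i]
    return x

# Example: a small 3-story-like coupled system (illustrative numbers only).
M = np.diag([1.0, 1.0, 1.0])
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
f = np.array([0.1, 0.2, 0.3])
print(modal_superposition(M, K, f, n_modes=2))  # reduced-mode estimate
```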

Keywords: dependency, story-cost, cost modes, engineering demand parameter

Procedia PDF Downloads 157
271 FEM and Experimental Modal Analysis of Computer Mount

Authors: Vishwajit Ghatge, David Looper

Abstract:

Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger and heavier engines, larger cooling systems, and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibration frequency of the engine matches the system's natural frequency, strong resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. It includes an industrial-grade computer to control system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, the two brackets can impact each other under off-road conditions, which causes a high shock input to the computer parts. This added failure mode requires validating the existing mount design to suit the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was then conducted, and the actual frequency responses were observed and recorded. Results clearly revealed that at the resonance frequency, the brackets collided, potentially causing damage to computer parts. To solve this issue, spring mounts of different stiffness were modeled in ANSYS software, and the resonant frequency was determined. Increasing the stiffness of the system shifted the resonant frequency away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and then experimentally validated.
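
The basic relationship behind the stiffness iteration can be illustrated with a single-degree-of-freedom approximation, f_n = (1/2π)·√(k/m): raising the mount stiffness k raises the natural frequency. The Python sketch below uses this relation with placeholder numbers (mass, candidate stiffnesses, and excitation window are assumptions, not values from the study).

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    """Undamped natural frequency of a single-degree-of-freedom
    mount-plus-computer system: f_n = (1 / (2*pi)) * sqrt(k / m)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

# Illustrative numbers only: a ~20 kg computer on candidate mounts of
# increasing stiffness, checked against an assumed engine excitation
# window of 20-40 Hz.
mass = 20.0  # kg
for k in (2.0e5, 5.0e5, 1.0e6, 2.0e6):  # mount stiffness, N/m
    fn = natural_frequency_hz(k, mass)
    in_window = 20.0 <= fn <= 40.0
    print(f"k = {k:9.0f} N/m -> f_n = {fn:5.1f} Hz"
          + ("  (inside excitation window!)" if in_window else ""))
```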

Keywords: experimental modal analysis, FEM modal analysis, frequency, modal analysis, resonance, vibration

Procedia PDF Downloads 304
270 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail

Abstract:

Over recent decades, medical imaging was dominated by the use of costly film media for the review and archival of medical investigations; however, owing to developments in network technologies and the broad acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and here web technologies are combined with DICOM to design a web-based, open-source DICOM viewer. The web server allows querying and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which a 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are thresholded and then entropy-encoded for transmission, decreasing transmission time, storage cost, and capacity requirements. Compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
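
As a minimal sketch of the decompose-threshold-reconstruct-evaluate loop described above, the following Python snippet uses the PyWavelets library on an 8-bit grayscale image already loaded as a NumPy array. The quantile-based thresholding rule and the compression-ratio definition (fraction of coefficients discarded) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pywt

def compress_and_evaluate(image, wavelet="coif3", level=3, keep_fraction=0.05):
    """Wavelet compression sketch: 2-D DWT decomposition, hard thresholding
    of the coefficients, reconstruction, and MSE/PSNR/CR reporting."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)

    # Keep only the largest `keep_fraction` of coefficients (hard threshold).
    threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr_thresh = pywt.threshold(arr, threshold, mode="hard")

    reconstructed = pywt.waverec2(
        pywt.array_to_coeffs(arr_thresh, slices, output_format="wavedec2"),
        wavelet)[: image.shape[0], : image.shape[1]]

    mse = np.mean((image.astype(float) - reconstructed) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")  # 8-bit assumed
    cr = 1.0 - np.count_nonzero(arr_thresh) / arr.size  # fraction of coefficients discarded
    return reconstructed, mse, psnr, cr
```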

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN

Procedia PDF Downloads 141
269 Determination of Non-CO2 Greenhouse Gas Emission in Electronics Industry

Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim

Abstract:

Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the Conference of the Parties (COP) 21 meeting in Paris. As a result, developed and developing countries have to submit their Intended Nationally Determined Contributions (INDC) by 2020, and each country will be assessed on its performance in reducing GHG emissions. After that, they shall propose a reduction target higher than the previous one every five years. Therefore, an accurate method for calculating greenhouse gas emissions is essential as a rationale for implementing GHG reduction measures based on the reduction targets. Non-CO2 GHGs (CF4, NF3, N2O, SF6 and so on) are widely used in the fabrication processes of semiconductor manufacturing and in the etching/deposition processes of display manufacturing. The Global Warming Potential (GWP) values of these non-CO2 gases are much higher than that of CO2, which means they have a greater effect on global warming. GHG calculation methods for the electronics industry are therefore provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and they are being discussed at ISO/TC 146 meetings. As discussed earlier, being precise and accurate in calculating non-CO2 GHG emissions is becoming more important; thus, this study aims to discuss the implications of the calculation methods by comparing those of the IPCC and the EPA. In conclusion, after analyzing the IPCC and EPA methods, the EPA method is more detailed and also provides a calculation for N2O. In the case of the default emission factors, the IPCC factors give more conservative results than those of the EPA; the IPCC factors were developed for calculating national GHG emissions, while the EPA factors were developed specifically for the U.S. and thus reflect its circumstances. Semiconductor factory 'A' measured F-gases according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE, and it was observed that its emission factor shows a higher DRE than the default DRE factors of the IPCC and EPA. Therefore, each country can improve its GHG emission calculation by developing its own emission factors (where possible) when reporting its Nationally Determined Contributions (NDC). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
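
To illustrate why parameters such as utilization and DRE matter to the resulting CO2-equivalent figure, the following Python sketch follows the general structure of tier-type F-gas estimates (gas purchased, minus heel, minus the share consumed in the process, minus the share destroyed by abatement, times GWP). All parameter values and the GWP figures are approximate placeholders for illustration, not official IPCC or EPA defaults.

```python
# Approximate 100-year GWP values (AR5-style); treat as illustrative.
GWP_100YR = {"CF4": 6630, "NF3": 16100, "SF6": 23500, "N2O": 265}

def fgas_co2e_tonnes(gas, purchased_kg, heel_fraction, utilization,
                     abated_fraction, dre):
    """CO2-equivalent emissions (tonnes) of one fluorinated gas:
    gas fed to the process, reduced by the share consumed in the chamber
    (utilization) and the share removed by abatement (abated_fraction * dre)."""
    fed = purchased_kg * (1.0 - heel_fraction)      # gas actually fed to tools
    emitted = fed * (1.0 - utilization)             # unreacted gas leaving the chamber
    emitted *= (1.0 - abated_fraction * dre)        # after point-of-use abatement
    return emitted * GWP_100YR[gas] / 1000.0        # kg CO2e -> tonnes CO2e

# Same inputs, default-style DRE vs. a higher, site-measured DRE.
for dre in (0.90, 0.98):
    e = fgas_co2e_tonnes("CF4", purchased_kg=1000, heel_fraction=0.10,
                         utilization=0.30, abated_fraction=0.80, dre=dre)
    print(f"DRE = {dre:.2f} -> {e:,.0f} t CO2e")
```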

Keywords: non-CO2 GHG, GHG emission, electronics industry, measuring method

Procedia PDF Downloads 268
268 The Optimal Irrigation in the Mitidja Plain

Authors: Gherbi Khadidja

Abstract:

In the Mediterranean region, water resources are limited and very unevenly distributed in space and time. The main objective of this project is the development of a wireless network for the management of water resources in northern Algeria, in the Mitidja plain, which helps farmers irrigate in the most optimized way and addresses the problem of water shortage in the region. To this end, we will develop an aid tool that can modernize and replace some traditional techniques, according to the real needs of the crops, the soil conditions, and the climatic conditions (soil moisture, precipitation, characteristics of the unsaturated zone). These data are collected in real time by sensors, analyzed by an algorithm, and displayed on a mobile application and a website. The results are essential information and alerts, with recommendations for action, sent to farmers to ensure the sustainability of the agricultural sector under water shortage conditions. In the first part, we set up a wireless sensor network for precise management of water resources, using equipment that measures the water content of the soil, such as a Watermark probe connected through an acquisition card to an Arduino Uno, which collects the captured data and transmits them via a GSM module to a website, where they are stored in a database for later study. In the second part, we display the results on a website or a mobile application that uses the database to remotely manage the smart irrigation system, allowing growers to access field conditions and irrigation operations remotely, via wireless communication, from home or the office. The tool will also draw on satellite imagery for land use and soil moisture. It will make it possible to follow the evolution of crop water needs over time and to predict the impact on water resources. According to the references consulted, such a tool can reduce irrigation volumes by up to 40%, which represents more than 100 million m3 of savings per year for the Mitidja, a volume equivalent to that of a medium-sized dam.
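
As a minimal sketch of the server-side decision logic that could sit behind the alerts, the following Python snippet turns a soil-moisture reading into an irrigation recommendation; Watermark probes report soil water tension (higher tension means drier soil), and the thresholds, field names, and rain-skip rule here are illustrative placeholders, not the project's agronomic settings.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    field_id: str
    tension_cb: float        # soil water tension from the Watermark probe, in centibars
    rain_forecast_mm: float  # forecast precipitation for the next 24 h
    timestamp: datetime

def irrigation_advice(reading, tension_threshold_cb=40.0, rain_skip_mm=5.0):
    """Return a recommendation string for the mobile application / website."""
    if reading.rain_forecast_mm >= rain_skip_mm:
        return (f"{reading.field_id}: skip irrigation, "
                f"{reading.rain_forecast_mm:.0f} mm of rain expected")
    if reading.tension_cb >= tension_threshold_cb:
        return (f"{reading.field_id}: irrigate "
                f"(soil tension {reading.tension_cb:.0f} cb above threshold)")
    return (f"{reading.field_id}: no irrigation needed "
            f"(soil tension {reading.tension_cb:.0f} cb)")

# Example use with a single reading received from the Arduino/GSM chain.
print(irrigation_advice(Reading("parcel-12", 55.0, 0.0, datetime.now())))
```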

Keywords: optimal irrigation, soil moisture, smart irrigation, water management

Procedia PDF Downloads 88