Search results for: feature noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2557

277 A Discussion on Urban Planning Methods after Globalization within the Context of Anticipatory Systems

Authors: Ceylan Sozer, Ece Ceylan Baba

Abstract:

The reforms and changes that began with industrialization in cities and continued with globalization in the 1980s transformed urban environments. City centers that had been deserted during industrialization became crowded again with globalization and turned into hubs of technology, commerce, and social activity. While in developed countries these rapid and intense alterations were planned around rigorous visions, urban areas where the processes were underestimated and no precautions were taken faced irreversible situations. When the effects of globalization on cities are examined, some cities can be seen to have anticipatory plans for future problems: New York, London, and Tokyo, for example, have planned systematically to resolve probable future problems and reduce possible side effects of globalization. Urban planning decisions and their implementation are central to the sustainability and livability of such mega-cities. This article examines the effects of globalization on urban planning through these three mega-cities and their planning practices. When their urban plans are investigated, it is seen that they are generated in the light of past experience and predictions of a certain future. In urban planning, a city's past and present experience should first be examined, and future projections then derived systematically together with current world dynamics. This study discusses the methods used in urban planning, explains the 'anticipatory system' model, and relates it to global urban planning. 'Anticipation' here means creating foresight and predictions about the future by combining past, present, and future within an action plan.
The main feature that distinguishes anticipatory systems from other systems is that they combine past, present, and future and conclude with an act. Urban plans, which consist of many interacting parameters, can be regarded as 'live' and as systematic wholes. Urban planning based on an anticipatory system can stay alive and foresee 'side effects' during the design process. After globalization, cities have become more complex and should be designed within an anticipatory system model; such cities can be more livable and can sustain their urban conditions today and in the future. In this study, the urban planning of Istanbul is analyzed in comparison with the plans of New York, Tokyo, and London in terms of anticipatory system models, and the lack of such a system in Istanbul and its side effects are discussed. When past and present actions in urban planning are approached through an anticipatory system, they can give more accurate and sustainable results in the future.

Keywords: globalization, urban planning, anticipatory system, New York, London, Tokyo, Istanbul

Procedia PDF Downloads 122
276 Stereological and Morphometric Evaluation of Wound Healing Burns Treated with Ulmo Honey (Eucryphia cordifolia) Unsupplemented and Supplemented with Ascorbic Acid in Guinea Pig (Cavia porcellus)

Authors: Carolina Schencke, Cristian Sandoval, Belgica Vasquez, Mariano Del Sol

Abstract:

Introduction: In a burn injury, successful repair requires not only the participation of various cells, such as granulocytes and fibroblasts, but also of collagen, which plays a crucial role as a structural and regulatory molecule of scar tissue. Since honey and ascorbic acid have shown great therapeutic potential at the cellular and structural levels, experimental studies have proposed combining them in the treatment of wounds. Aim: To evaluate stereological and morphometric parameters of healing burn wounds treated with Ulmo honey (Eucryphia cordifolia) alone, comparing its effect with Ulmo honey supplemented with ascorbic acid. Materials and Methods: Fifteen healthy adult guinea pigs (Cavia porcellus) of both sexes, average weight 450 g, were obtained from the Centro de Excelencia en Estudios Morfológicos y Quirúrgicos (CEMyQ) at the Universidad de La Frontera, Chile. The animals were divided at random into three groups: positive control (C+), honey only (H), and supplemented honey (SH), and were fed pellets supplemented with ascorbic acid and water ad libitum, under ambient conditions controlled for temperature and ambient noise and a 12 h light-dark cycle. The experimental protocol was approved by the Scientific Ethics Committee of the Universidad de La Frontera, Chile. The parameters measured were number density per area (NA), volume density (VV), and surface density (SV) of fibroblasts; NA and VV of polymorphonuclear cells (PMN); and the content of collagen fibers in the scar dermis. One-way ANOVA with the corresponding post hoc tests was used for the statistical analysis. Results: The ANOVA for NA, VV, and SV of fibroblasts, NA and VV of PMN, and the content of collagen types I and III showed that at least one group differed from the others (P ≤ 0.001).
There were differences (P = 0.000) in the NA of fibroblasts between the groups [C+ = 3599.560 mm-2 (SD = 764.461), H = 3355.336 mm-2 (SD = 699.443), and SH = 4253.025 mm-2 (SD = 1041.751)]. The VV and SV of fibroblasts increased (P = 0.000) in the SH group [20.400% (SD = 5.897) and 100.876 mm2/mm3 (SD = 29.431), respectively] compared to C+ [16.324% (SD = 7.719) and 81.676 mm2/mm3 (SD = 28.884), respectively]. The mean NA and VV of PMN were higher (P = 0.000) in the H group [756.875 mm-2 (SD = 516.489) and 2.686% (SD = 2.380), respectively]. Regarding the content of collagen fibers types I and III, the one-way ANOVA showed a statistically significant difference (P < 0.05): type I collagen content was higher in the C+ group (1988.292 μm2; SD = 1312.379), while type III collagen content was higher in the SH group (1967.163 μm2; SD = 1047.944). Conclusions: The stereological results correlated with the stage of healing observed in each group. They suggest that combining honey with ascorbic acid potentiates the healing effect, with the two acting synergistically.
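The group comparisons above rest on a one-way ANOVA, which can be sketched in a few lines. The group values below are synthetic numbers drawn around the reported means, not the study's raw measurements.

```python
import numpy as np

def one_way_anova(*groups):
    """Return the F statistic and degrees of freedom for a one-way ANOVA."""
    data = np.concatenate(groups)
    grand_mean = data.mean()
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = data.size - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Synthetic fibroblast NA values (mm-2) drawn around the reported group means
rng = np.random.default_rng(1)
c_plus = rng.normal(3599.6, 764.5, 5)
honey = rng.normal(3355.3, 699.4, 5)
supp_honey = rng.normal(4253.0, 1041.8, 5)
f_stat, dfb, dfw = one_way_anova(c_plus, honey, supp_honey)
```

The F statistic is then compared against the F(df_between, df_within) distribution to obtain the P value; the study additionally ran post hoc tests to locate which group differs.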

Keywords: ascorbic acid, morphometry, stereology, Ulmo honey

Procedia PDF Downloads 251
275 Sequential Mixed Methods Study to Examine the Potentiality of Blackboard-Based Collaborative Writing as a Solution Tool for Saudi Undergraduate EFL Students’ Writing Difficulties

Authors: Norah Alosayl

Abstract:

English is considered the most important foreign language in the Kingdom of Saudi Arabia (KSA) because of its usefulness as a global language. As students' desire to improve their English language skills has grown, writing has been identified as the most difficult problem for Saudi students in their language learning. Although English is taught in Saudi Arabia from the seventh grade, many students still struggle at the university level, especially in writing, due to a gap between what is taught in secondary and high schools and university expectations: pupils generally study English at school from a single book with a few vocabulary and grammar exercises, and there are no specific writing lessons. Moreover, personal teaching experience at King Saud bin Abdulaziz University confirms that students face real problems with their writing. This paper examines how Blackboard-based collaborative writing can help first-year undergraduate Saudi EFL students, enrolled in two sections of ENGL 101 in the first semester of 2021 at King Saud bin Abdulaziz University, practice the writing skill they find most difficult through small-group work. A sequential mixed methods design is therefore well suited. The first phase of the study identifies the most difficult skill experienced by students from an official writing exam evaluated by their teachers using the official rubric of King Saud bin Abdulaziz University. In the second phase, the study investigates the benefits of social interaction in the process of learning writing. Students will be given five collaborative writing tasks via the discussion feature on Blackboard to practice the skill they found difficult. The tasks are designed on the basis of social constructivist theory and pedagogic frameworks, with interaction taking place between peers and their teachers.
The frequency of students' participation and the quality of their interaction will be observed through manual counting and screenshots. This will help the researcher understand how actively students work on the tasks and distinguish the type of interaction (on task, about task, or off task). Semi-structured interviews will be conducted with students to understand their perceptions of the Blackboard-based collaborative writing tasks, and questionnaires will be distributed to identify students' attitudes toward them.

Keywords: writing difficulties, blackboard-based collaborative writing, process of learning writing, interaction, participation

Procedia PDF Downloads 165
274 Comparative Study of Urban Structure between an Island-Type and a General-Type City

Authors: Tomoya Oshiro, Hiroko Ono

Abstract:

Japan's population is aging as the birthrate declines, causing various problems such as a decrease in the country's gross domestic product. This is why local governments in Japan have recently been moving toward sustainable cities, and controlling urban structure is essential for making the compact city succeed. Many papers discuss the compact city, but few address the compact city in island-type cities. The purpose of this study is to clarify the differences in urban structure between island-type and general-type cities. The research method has two steps. First, using the evaluation indexes in the handbook, we evaluated the urban structures of cities in the same population class, from 50,000 to 100,000 people. Next, to clarify the differences in urban structure and features between island-type and general-type cities, we compared radar charts composed of the evaluation indexes. Moreover, to clarify the relationship between the evaluation indexes and the place of residence, we used GIS software to map population density. As a result, the indexes for local government management and the local economy are negative points for island-type cities compared with general cities, whereas the indexes for safety/security and low-carbon/energy are positive points. The analysis thus shows that local government management and the local economy are weak points of the island-type urban structure. In addition, public transportation coverage in Miyako Island, Sado Island, and Amakusa Island is low compared with other islands and with the average.
The relationship between the evaluation indexes and the place of residence shows that the place of residence is related to public transportation coverage: if residences are spread out, public transportation coverage decreases. The results also reveal that finances in island-type cities are a negative point compared with general cities, a problem caused by the declining population. Although increasing public transportation coverage could help, it requires a great deal of money, which may cause further problems since the financial situation is affected as well. This research concludes that, to create compact cities on islands, the problems of local government management and the local economy must be addressed first.
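The radar-chart comparison rests on putting every evaluation index on a common scale. A minimal sketch of normalizing one city's indexes against its population-class average; the index labels and values below are hypothetical illustrations, not the study's data.

```python
def normalize_indexes(city_values, class_averages):
    """Scale each evaluation index against the same-population-class average,
    so 1.0 marks the average city and values below 1.0 mark weak points."""
    return [v / avg for v, avg in zip(city_values, class_averages)]

# Hypothetical index values for one island-type city vs. its class average
labels = ["local economy", "government management", "safety/security",
          "low-carbon/energy", "transport coverage"]
island = [0.42, 0.38, 0.81, 0.77, 0.30]
class_avg = [0.55, 0.50, 0.70, 0.65, 0.52]
ratios = normalize_indexes(island, class_avg)
weak_points = [l for l, r in zip(labels, ratios) if r < 1.0]
```

Plotting the normalized ratios on a radar chart then makes the negative points (ratios below 1.0) visible at a glance.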

Keywords: sustainable city, comparative analysis, geographic information system, urban structure

Procedia PDF Downloads 122
273 Water Supply and Demand Analysis for Ranchi City under Climate Change Using Water Evaluation and Planning System Model

Authors: Pappu Kumar, Ajai Singh, Anshuman Singh

Abstract:

Different water user sectors, such as rural, urban, mining, subsistence and commercial irrigated agriculture, commercial forestry, industry, and power generation, are present in the catchment of the Subarnarekha River Basin and Ranchi city, and there is inequity in access to water. Rural development, the construction of new power generation plants, population growth, unmet water demand, the consideration of environmental flows, and the revitalization of small-scale irrigation schemes are all going to increase water demand in almost all water-stressed catchments. The WEAP model was developed by the Stockholm Environment Institute (SEI) to enable evaluation of planning and management issues associated with water resources development. It can be used for both urban and rural areas and can address a wide range of issues, including sectoral demand analyses, water conservation, water rights and allocation priorities, river flow simulation, reservoir operation, ecosystem requirements, and project cost-benefit analyses. The model is a tool for integrated water resource management and planning: forecasting water demand, supply, inflows, outflows, water use and reuse, water quality, priority areas, and hydropower generation. In the present study, efforts have been made to assess the utility of the WEAP model for water supply and demand analysis for Ranchi city. Detailed work was carried out to ascertain that the WEAP model can generate different scenarios of water requirement, which could help in future water planning. The water supplied to Ranchi city comes mostly from the study river, the Hatiya reservoir, and groundwater. Data were collected from various agencies, such as PHE Ranchi, the 2011 census, the Doranda reservoir, and the meteorology department, and given as input to the WEAP model.
The model generated discharge trends for the study river up to 2050 and, at the same time, generated scenarios projecting demand and supply for the future. The model outputs predict a water requirement of 12 million litres. The results will help in drafting future policies on water supply and demand under changing climatic scenarios.
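At its core, a demand scenario of the kind WEAP produces projects consumption forward under assumed drivers. A toy sketch of that idea; the population, growth rate, and per-capita figure below are hypothetical placeholders, not the study's inputs.

```python
def project_demand(pop0, growth_rate, lpcd, years):
    """Project annual water demand in million litres per day (MLD) under a
    constant population growth rate. lpcd = litres per capita per day."""
    demands = []
    pop = pop0
    for _ in range(years):
        demands.append(pop * lpcd / 1e6)  # convert litres/day to MLD
        pop *= 1 + growth_rate
    return demands

# Hypothetical scenario: 1.1 M people, 2% annual growth, 135 lpcd, 30 years
series = project_demand(1_100_000, 0.02, 135, 30)
```

Real WEAP scenarios layer sectoral demands, supply constraints, and climate-driven inflows on top of such projections, but the compounding-growth core is the same.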

Keywords: WEAP model, water demand analysis, Ranchi, scenarios

Procedia PDF Downloads 396
272 Isolation of Clitorin and Manghaslin from Carica papaya L. Leaves by CPC and Its Quantitative Analysis by QNMR

Authors: Norazlan Mohmad Misnan, Maizatul Hasyima Omar, Mohd Isa Wasiman

Abstract:

Papaya (Carica papaya L., Caricaceae) is a tree mainly cultivated for its fruit in many tropical regions, including Australia, Brazil, China, Hawaii, and Malaysia. Besides the fruit, its leaves, seeds, and latex have also been used traditionally for treating diseases and have been reported to possess anti-cancer and anti-malarial properties. The leaves contain various chemical compounds, such as alkaloids, flavonoids, and phenolics, with clitorin and manghaslin among the major flavonoids present. The aim of this study was to quantify the purity of the isolated compounds clitorin and manghaslin by quantitative Nuclear Magnetic Resonance (qNMR) analysis. Fresh C. papaya leaves were juiced and the juice subsequently freeze-dried to obtain a dark green powdered extract prior to Centrifugal Partition Chromatography (CPC) separation. The CPC experiments were performed using a two-phase solvent system of ethyl acetate/butanol/water (1:4:5, v/v/v); the upper organic phase was used as the stationary phase and the lower aqueous phase as the mobile phase. Ten fractions were obtained after a one-hour run. Fractions 6 and 8 were identified as clitorin (m/z 739.21 [M-H]-) and manghaslin (m/z 755.21 [M-H]-), respectively, based on LCMS data and full NMR analysis (1H NMR, 13C NMR, HMBC, and HSQC). The 1H-qNMR measurements were carried out on a 400 MHz NMR spectrometer (JEOL ECS 400 MHz, Japan) with deuterated methanol as the solvent. Quantification was performed using the AQARI method (Accurate Quantitative NMR) with deuterated 1,4-bis(trimethylsilyl)benzene (BTMSB) as the internal reference substance. The AQARI protocol covers not only the NMR measurement but also the sample preparation, providing higher precision and accuracy than other qNMR methods.
The 90° pulse length and the T1 relaxation times of the compounds and BTMSB were determined prior to quantification to give the best signal-to-noise ratio. The regions containing the two downfield signals from the aromatic part (6.00-6.89 ppm) and the 18H singlet arising from BTMSB (0.63-1.05 ppm) were selected for integration. The purity of clitorin and manghaslin was calculated to be 52.22% and 43.36%, respectively; further purification is needed to increase it. This finding demonstrates the use of qNMR for quality control and standardization of plant extracts, which can be applied to NMR fingerprinting of other plant-based products with good reproducibility and in cases where commercial standards are not readily available.
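The purity calculation behind internal-standard qNMR is a simple ratio relation. A sketch of the standard formula is below; the numeric inputs are placeholders for illustration, not the paper's measured integrals or weighed masses.

```python
def qnmr_purity(i_s, i_r, n_s, n_r, m_mol_s, m_mol_r, m_s, m_r, p_r):
    """Purity (%) of an analyte from an internal-standard qNMR experiment.

    i_*: signal integrals, n_*: number of protons behind each integral,
    m_mol_*: molar masses (g/mol), m_*: weighed masses (mg),
    p_r: purity (%) of the reference; _s = sample, _r = reference (BTMSB).
    """
    return (i_s / i_r) * (n_r / n_s) * (m_mol_s / m_mol_r) * (m_r / m_s) * p_r

# Placeholder numbers: 2 aromatic protons of the analyte integrated against
# the 18H BTMSB singlet (BTMSB molar mass ~222.5 g/mol, clitorin ~740.7 g/mol)
purity = qnmr_purity(1.0, 9.0, 2, 18, 740.7, 222.5, 32.0, 5.0, 99.9)
```

Because only ratios of integrals, proton counts, and masses enter the formula, the accuracy of the weighing and the reference purity dominate the error budget, which is why AQARI prescribes the sample preparation as well as the measurement.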

Keywords: Carica papaya, clitorin, manghaslin, quantitative Nuclear Magnetic Resonance, Centrifugal Partition Chromatography

Procedia PDF Downloads 455
271 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building

Authors: A. Schuchter, M. Promegger

Abstract:

The outbreak of the COVID-19 pandemic has shown us that online education is much more than just a cool feature for teachers: it is an essential part of modern teaching. In online math teaching, it is common to use tools that share screens and compute mathematical examples while the students watch the process. At the same time, flipped classroom models are on the rise, focusing on how students gather knowledge by watching videos and on the teacher's use of technological tools for information transfer. This paper proposes a co-creational teaching approach for coding and engineering subjects built around drone-building, to spark interest in technology and create a platform for knowledge transfer. The project combines aspects of mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure, and rotation), and coding (computational thinking, block-based programming, JavaScript, and Python) and makes use of collaborative shared 3D modeling with clara.io, where students create mathematical know-how. The instructor follows a problem-based learning approach and encourages the students to find solutions in their own time and in their own way, helping them develop new skills intuitively and boosting logically structured thinking. The collaborative aspect of working in groups helps the students develop communication skills as well as structural and computational thinking. Students are not just listeners, as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called the 'open book') with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought.
Therefore, students learn to formulate goals, solve problems, and create a ready-to-use product with the help of reverse engineering, cross-referencing, and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose while going through all stages of product development.

Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning

Procedia PDF Downloads 97
270 Informal Green Infrastructure as Mobility Enabler in Informal Settlements of Quito

Authors: Ignacio W. Loor

Abstract:

In the context of informal settlements in Quito, this paper provides evidence that the slopes and deep ravines typical of Andean cities, around which marginalized urban communities sit, constitute a platform for green infrastructure that incrementally supports pedestrian mobility. This informally shaped green infrastructure provides connectivity to other mobility infrastructures, such as roads and public transport, allowing relegated dwellers to reach their daily destinations and reclaim their rights to the city. This matters because walking has been increasingly neglected as a viable means of transport in Latin American cities in favor of motorized means, so the mobility benefits of green infrastructure have remained invisible to policymakers, contributing to the progressive isolation of informal settlements. The research draws heavily on an ecological rejuvenation programme led by the municipality of Quito and the Andean Corporation for Development (CAN), intended to rehabilitate the ecological functions of ravines. Four ravines in different stages of rejuvenation were chosen in order to capture, through ethnographic methods, the practices they support for dwellers of informal settlements across those stages, particularly in terms of mobility. By presenting fragments of interviews, descriptions of observed phenomena, photographs, and narratives published in institutional reports and media, the paper explains how mobility infrastructure is produced over unoccupied slopes and ravines and what roles it plays in dwellers' mobility and quotidian practices. For informal settlements, which normally feature scant urban infrastructure, poor mobility undermines dwellers' ability to participate actively in the social, economic, and political dimensions of the city, so their rights to the city are widely neglected.
Nevertheless, informal green infrastructure for mobility provides some relief. This infrastructure is incremental: its features and usability gradually evolve as users put knowledge, labour, devices, and connections to other infrastructures into it, increasing its dependability. This is evidenced in the diffusion of knowledge of trails and footpath routes among users, the building of linking stairs and bridges, improved access through public spaces created adjacent to the ravines, the lighting of surrounding roads, and, ultimately, the restoration of the ravines' ecological functions. However, this type of infrastructure is also fragile and vulnerable to the course of urbanisation, densification, and the expansion of gated, privatised spaces.

Keywords: green infrastructure, informal settlements, urban mobility, walkability

Procedia PDF Downloads 128
269 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours, so a model-based development strategy has been adopted instead. NOx formation depends strongly on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits purely empirical models to the region in which they were calibrated. This paper presents an alternative that uses in-cylinder combustion parameters to form a predictive semi-empirical NOx model. A fast and predictive NOx model is developed from physical parameters and empirical correlations, based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model built in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO); the oxygen consumed in the burned zone and the trapped fuel mass are also considered. Several statistical methods, including individual and ensemble machine learning methods, are used to construct the model. A detailed validation of the model on multiple diesel engines is reported, with a substantial number of cases tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also included in the validation. The model shows very good predictability and robustness at both sea level and altitude under different ambient conditions. Its advantages, high accuracy and robustness across operating conditions, low computational time, and the smaller number of data points required for calibration, establish a platform on which the model-based approach can be used in the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
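The physical backbone of a semi-empirical NOx model is the thermal (Zeldovich-type) sensitivity of NO formation to burned-zone temperature and oxygen availability. A hedged sketch of that dependency follows; the activation temperature and exponent are illustrative fit parameters, not values from the paper.

```python
import math

def nox_tendency(t_burn_k, o2_frac, a=1.0, t_act=38000.0):
    """Arrhenius-type NOx formation tendency: exp(-T_act/T) * [O2]^a.

    t_burn_k: burned-zone temperature (K), o2_frac: O2 mole fraction.
    t_act and a are illustrative parameters to be fitted to engine data."""
    return math.exp(-t_act / t_burn_k) * o2_frac ** a
```

The monotonic behavior explains the EGR sweeps used in the validation: recirculated exhaust lowers both the burned-zone temperature and the in-cylinder O2 fraction, and both effects push this tendency down.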

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 87
268 Validating the Micro-Dynamic Rule in Opinion Dynamics Models

Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is dedicated to modeling the evolution of people's opinions. Models in this field are based on a micro-dynamic rule that determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to validating the rule experimentally. A few studies have started to bridge this gap by testing the rule experimentally; however, in these studies participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data across experimental questions without testing whether the questions differ, even though different topics could show different dynamics: people may, for example, be more prone to accepting someone else's opinion on less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ('agree' or 'disagree') together with the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed each participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. Analyzing the topics independently, we observed that each shows a different initial distribution, but the dynamics (i.e., the properties of the opinion change) appear similar across all topics, suggesting that the same micro-dynamic rule can be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty.
This contrasts with typical micro-dynamic rules, in which agents move toward an average point instead of jumping directly to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data: exposing someone to 'agree' or 'disagree' pushed participants toward higher or lower values of the continuous opinion, respectively. However, we also observed random variations whose effect was stronger than that of social influence; we even observed people changing from 'agree' to 'disagree' despite being exposed to 'agree'. This is surprising, as in the standard literature the strength of the noise is usually smaller than the strength of social influence. Finally, we built an opinion dynamics model from the data. The model explains more than 80% of the data variance and, when iterated, produces polarized states even from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule and to build models directly grounded in experimental results.
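The encoding described above, and the baseline rule it is contrasted with, can be stated in a few lines. The averaging rule below is the generic textbook form, not the model fitted to the data.

```python
def continuous_opinion(answer, certainty):
    """Encode 'agree'/'disagree' plus a certainty of 1-10 into [-10, 10],
    as described in the abstract."""
    return certainty if answer == "agree" else -certainty

def averaging_update(own, other, mu=0.5):
    """Generic averaging micro-dynamic rule: move a fraction mu toward the
    partner's opinion. The data suggest real participants instead tend to
    keep their certainty and may flip sign outright."""
    return own + mu * (other - own)

before = continuous_opinion("agree", 7)         # encoded opinion: 7
after_avg = averaging_update(before, -7)        # averaging rule: midpoint
after_flip = continuous_opinion("disagree", 7)  # observed pattern: sign flip
```

The contrast is visible in the two outcomes: the averaging rule lands between the two opinions, while the observed behavior jumps to the opposite side at the same certainty level.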

Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule

Procedia PDF Downloads 131
267 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method

Authors: Mohamad R. Moshtagh, Ahmad Bagheri

Abstract:

Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios include: (1) both gears being healthy, (2) one healthy gear and one faulty gear, and (3) introducing an imbalanced condition to a healthy gear. Vibration data was acquired using a Hentek 1008 device and stored in a CSV file. Python code implemented in the Spider environment was used for data preprocessing and analysis. Winner features were extracted using the Wiener feature selection method. These features were then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults in both the training and validation datasets. The comparative analysis of the methods revealed the superior performance of the Wiener-CNN approach. The Wiener-CNN method achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios in the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy for the two-class training dataset and 100% for the validation dataset. For the three-class scenario, the Wiener-MLP method demonstrated 100% accuracy in the training dataset and 95.3% accuracy in the validation dataset. The Wiener-KNN method yielded 96.3% accuracy for the two-class training dataset and 94.5% for the validation dataset. In the three-class scenario, it achieved 85.3% accuracy in the training dataset and 77.2% in the validation dataset. 
The Wiener-Random Forest method achieved 100% accuracy for the two-class training dataset and 85% for the validation dataset, while in the three-class scenario it attained 100% accuracy on the training dataset and 90.8% on the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in accurately identifying and classifying fault conditions in rotating machinery. The proposed fault detection system utilizes vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
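
The abstract does not give implementation details for the Wiener preprocessing step. As a rough illustration only, the sketch below applies a local-statistics Wiener filter (the same idea as scipy.signal.wiener, reimplemented here in plain NumPy) to a synthetic vibration-like signal before any feature extraction; the signal, window size, and noise level are invented for the example.

```python
import numpy as np

def wiener_1d(x, size=11, noise=None):
    """Local-statistics Wiener filter for a 1-D signal.

    Estimates a local mean and variance with a moving window, then
    shrinks each sample toward the local mean in proportion to the
    estimated noise power (comparable to scipy.signal.wiener).
    """
    kernel = np.ones(size) / size
    local_mean = np.convolve(x, kernel, mode="same")
    local_var = np.convolve(x**2, kernel, mode="same") - local_mean**2
    local_var = np.maximum(local_var, 0.0)
    if noise is None:                 # estimate noise as the mean local variance
        noise = local_var.mean()
    gain = np.maximum(local_var - noise, 0.0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (x - local_mean)

# Synthetic "gear vibration": a 25 Hz tone plus white noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 25 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
denoised = wiener_1d(noisy)
```

The denoised trace (or statistics derived from it) would then feed the downstream classifiers the abstract compares.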

Keywords: fault detection, gearbox, machine learning, wiener method

Procedia PDF Downloads 51
266 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and research on and utilization of the inner thermal core structure of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. In the first stage, these images provide a wind speed estimate from coarse-grained features. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major levels: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes using these data.
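
The focal loss mentioned above follows the usual formulation FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), which down-weights easy examples so training concentrates on the rare classes of a long-tailed distribution. A minimal NumPy sketch (the alpha and gamma values are the common defaults, not necessarily the paper's):

```python
import numpy as np

def focal_loss(probs, labels, alpha=0.25, gamma=2.0):
    """Mean focal loss FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t).

    probs:  (n, k) predicted class probabilities
    labels: (n,)   integer class labels
    """
    p_t = probs[np.arange(len(labels)), labels]
    p_t = np.clip(p_t, 1e-12, 1.0)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))

# A well-classified example (p_t = 0.9) contributes far less
# than a misclassified one (p_t = 0.1).
easy = focal_loss(np.array([[0.9, 0.1]]), np.array([0]))
hard = focal_loss(np.array([[0.1, 0.9]]), np.array([0]))
```

Compared with plain cross-entropy, the (1 - p_t)^gamma factor makes the easy/hard imbalance much steeper, which is why it suits long-tailed cyclone categories.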

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 29
265 Personalized Infectious Disease Risk Prediction System: A Knowledge Model

Authors: Retno A. Vinarti, Lucy M. Hederman

Abstract:

This research describes a knowledge model for a system which gives personalized alerts to users about infectious disease risks in the context of weather, location, and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research focuses largely on utilizing raw historical data to yield seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Disease (AHID) and the Centers for Disease Control and Prevention (CDC) as the basic reasoning for infectious disease risk prediction. Using the CommonKADS methodology, the disease risk prediction task is treated as an assignment synthesis task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from AHID, primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the infectious disease risk model relating to Person comes out strongest, with Location next and Weather weaker. Within the Person element, Age is the strongest attribute, Activity and Habits are moderate, and Blood type is weakest. For the Location element, the General category (e.g. continents, regions, countries, and islands) comes out much stronger than the Specific category (i.e. terrain features). For the Weather element, the Less Precise category (i.e. season) comes out stronger than the Precise category (i.e. exact temperature or humidity intervals).
However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g. odds ratio, hazard ratio, and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification in the knowledge refinement stage, might cause minor changes to the shape of the tree.
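
The Frequency-based Selection step described above can be read as keeping only those attributes whose values are recorded in at least some threshold fraction of the source records. A minimal sketch of that idea follows; the toy records and the 0.5 threshold are invented for illustration and are not the paper's actual data.

```python
from collections import Counter

def frequency_based_selection(records, threshold=0.5):
    """Keep attributes recorded in at least `threshold` of the records.

    records: list of dicts mapping attribute name -> value (possibly sparse)
    Returns the retained attribute names, most frequent first.
    """
    counts = Counter(attr for rec in records for attr in rec)
    n = len(records)
    return [a for a, c in counts.most_common() if c / n >= threshold]

# Toy disease-risk records: Age is recorded everywhere, Blood type rarely,
# mirroring the strong/weak attributes reported in the abstract.
records = [
    {"Age": "child", "Location": "tropics", "Season": "wet"},
    {"Age": "adult", "Location": "tropics"},
    {"Age": "adult", "Season": "dry", "BloodType": "O"},
    {"Age": "elderly", "Location": "temperate"},
]
selected = frequency_based_selection(records, threshold=0.5)
```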

Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk

Procedia PDF Downloads 208
264 On Grammatical Metaphors: A Corpus-Based Reflection on the Academic Texts Written in the Field of Environmental Management

Authors: Masoomeh Estaji, Ahdie Tahamtani

Abstract:

Considering the necessity of conducting research and publishing academic papers during Master's and Ph.D. programs, graduate students are in dire need of improving their writing skills through either writing courses or self-study. One key feature that can make academic papers look more sophisticated is the application of grammatical metaphors (GMs). These metaphors represent 'non-congruent' and 'implicit' ways of decoding meaning, through which one grammatical category is replaced by another, more implied counterpart, which can also alter readers' understanding of the text. Although a number of studies have been conducted on the application of GMs across various disciplines, almost none has been devoted to the field of environmental management, and the scope of the previous studies has been relatively limited compared to the present work. In the current study, attempts were made to analyze the different types of GMs used in academic papers published in top-tiered journals in the field of environmental management, and to compile a list of the most frequently used GMs based on their functions in this particular discipline, so as to make the teaching of academic writing courses more explicit and the composition of academic texts more well-structured. To fulfill these purposes, a corpus-based analysis based on the two theoretical models of Martin et al. (1997) and Liardet (2014) was run. Through two stages of manual analysis and concordancing, ten recent academic articles comprising 132,490 words published in two prestigious journals were precisely scrutinized. The results showed that, across the whole IMRaD sections of the articles, material processes were the most frequent type of ideational GM, with the relational and mental categories ranking second and third, respectively. Regarding interpersonal GMs, objective expanding metaphors were the most numerous.
In contrast, subjective interpersonal metaphors, whether expanding or contracting, were the least significant. This suggests that scholars in the field of environmental management tend to focus on the main procedures and explain technical phenomena in detail, rather than compare and contrast other statements and subjective beliefs. Moreover, since no instances of verbal ideational metaphors were detected, it can be deduced that the act of 'saying or articulating' something might be against the standards of the academic genre. Another assumption is that the application of ideational GMs is context-embedded and that the more technical they are, the less frequent they become. For further studies, it is suggested that the employment of GMs be studied in a wider scope and in other disciplines, and that the third type of GM, known as 'textual' metaphors, be included as well.
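
As a crude, purely illustrative proxy for one small part of such a corpus analysis, candidate ideational GMs of the nominalization type can be pre-screened by matching common deverbal and deadjectival suffixes before manual verification; real GM identification requires the two-stage manual analysis the authors describe, and the sample sentence below is invented.

```python
import re
from collections import Counter

# Suffixes often carried by nominalized processes and qualities
# (a rough screening heuristic only, with many false positives).
NOMINALIZATION = re.compile(r"\b\w+(?:tion|ment|ance|ence|ness|ity)s?\b",
                            re.IGNORECASE)

def count_candidate_nominalizations(text):
    """Count suffix-matched candidate nominalizations per word form."""
    return Counter(w.lower() for w in NOMINALIZATION.findall(text))

sample = ("The implementation of the assessment showed that contamination "
          "and its measurement require careful management.")
counts = count_candidate_nominalizations(sample)
```

Each matched form would then be checked manually against the congruent wording (e.g. "implementation" versus "implement") before being counted as a GM.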

Keywords: English for specific purposes, grammatical metaphor, academic texts, corpus-based analysis

Procedia PDF Downloads 142
263 Complaint Management Mechanism: A Workplace Solution in Development Sector of Bangladesh

Authors: Nusrat Zabeen Islam

Abstract:

Partnership between local non-government organizations (NGOs) and international development organizations has become an important feature of the development sector of Bangladesh. It is an important challenge for international development organizations to work with local NGOs with proper HR practices. Local NGOs often lack a quality working environment, and this affects employees' work experiences and overall performance at the individual level, in partnerships with international development organizations, and at the organizational level. Owing to their size and scope, many local development organizations do not have a human resource (HR) unit. Inadequate human resource policies, skills, and leadership, and the lack of an effective strategy, are now a common scenario in the non-government organization sector of Bangladesh. As a result, corruption, nepotism, fraud, the risk of political contributions in the office/workspace, sexual and gender-based abuse, and insecurity occur in workplaces of the development sector. A Complaint Management Mechanism (CMM) within human resource management could be one way to improve human resource competence in these organizations. The responsibility of the Complaint Management Unit (CMU) of an international development organization is to keep the workplace free of maltreatment and discrimination. Information on the impact of the CMM was collected through a case study of an international organization and some of its partner national organizations in Bangladesh engaged in different projects and programs. In this mechanism, the international development organization collects complaints from beneficiaries and staff through the complaint management unit, investigates them by categorizing the type and nature of each complaint, and finds a solution to improve the situation within a very short period. A complaint management committee is formed jointly with HR and management personnel. A concerned focal point collects complaints and shares them with the CM unit.
Through investigation, review of findings, replies back to the CM unit, and implementation of resolutions, this mechanism establishes a successful bridge of communication and feedback among beneficiaries, staff, and upper management. The overall results of applying the complaint management mechanism indicate that it can significantly increase the accountability and transparency of the workplace and workforce in development organizations. Evaluations based on outcomes, measuring indicators such as productivity, satisfaction, retention, gender equity, and proper judgment, will guide organizations in building a healthy workforce and will also clearly articulate the return on investment and justify any need for further funding.

Keywords: human resource management in NGOs, challenges in human resource, workplace environment, complaint management mechanism

Procedia PDF Downloads 292
262 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation

Authors: Nawras Kurzom, Avi Mendelsohn

Abstract:

The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonics, can elicit feelings of tension, which can, in turn, affect memory formation of concomitant information. The aim of the presented studies was to explore how forming declarative memory is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory of declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) with different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance as compared to silence. 
We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory for those who were less sensitive to the perception of musical tension. Musical tension thus interacts in complex ways with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and of the diverse effects that music has on ongoing processing of declarative information.

Keywords: musical tension, declarative memory, learning and memory, musical perception

Procedia PDF Downloads 69
261 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System

Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich

Abstract:

The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This is because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required and degrade driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal and the desired and actual results of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the parameters of the driver's action results from the expected values.
The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving, for future improvements in driving safety, and for understanding how the driver-vehicle interface should be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.

Keywords: automated vehicle, driver behavior, human factors, human-machine system

Procedia PDF Downloads 116
260 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail.
This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
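
The abstract does not disclose its algorithms. As one common building block for identifying planar elements such as walls in a point cloud, the sketch below fits a dominant plane with a simple RANSAC loop; the synthetic "wall" data and all parameters are invented for illustration and are not the authors' method.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Fit a dominant plane n.x + d = 0 to an (N, 3) point cloud.

    Repeatedly samples 3 points, builds the plane through them, and
    keeps the plane with the most inliers within distance `tol`.
    Returns (unit normal, d, boolean inlier mask).
    """
    rng = np.random.default_rng(rng)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-9:               # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ a
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Synthetic scan: a flat wall near z = 1 plus scattered clutter.
rng = np.random.default_rng(1)
wall = np.column_stack([rng.uniform(0, 5, 300), rng.uniform(0, 3, 300),
                        np.full(300, 1.0) + rng.normal(0, 0.005, 300)])
clutter = rng.uniform(0, 5, (60, 3))
cloud = np.vstack([wall, clutter])
normal, d, mask = ransac_plane(cloud, rng=3)
```

A full pipeline would run such detection repeatedly, classify the recovered planes as walls, floors, or ceilings, and hand their parameters to the parametric BIM elements.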

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 22
259 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely adopted. It serves two main tasks: displaying results by coloring items according to their class or feature value, and, in a forensic setting, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. This algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing one to observe the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, empowering the monitoring of high-dimensional datasets' dynamics.
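
The paper's optimization couples an embedding-shape cost with a support-matching cost. As a much simpler stand-in that conveys the coherence idea only, the sketch below places each new high-dimensional point at the 2-D position of its nearest support point, giving a coherent starting embedding that a t-SNE-style optimization could then refine; the function name, data, and cluster layout are invented.

```python
import numpy as np

def inherit_positions(support_X, support_Y, new_X):
    """Assign each new high-dim point the 2-D embedding position of
    its nearest support point, so cluster positions are preserved
    before any further optimization."""
    # Pairwise squared distances between new points and support points.
    d2 = ((new_X[:, None, :] - support_X[None, :, :]) ** 2).sum(-1)
    return support_Y[d2.argmin(axis=1)]

# Two well-separated 10-D clusters with known 2-D support positions.
rng = np.random.default_rng(2)
support_X = np.vstack([rng.normal(0, 0.1, (20, 10)),
                       rng.normal(5, 0.1, (20, 10))])
support_Y = np.vstack([np.tile([-10.0, 0.0], (20, 1)),
                       np.tile([10.0, 0.0], (20, 1))])
new_X = np.vstack([rng.normal(0, 0.1, (5, 10)),    # belongs to cluster A
                   rng.normal(5, 0.1, (5, 10))])   # belongs to cluster B
init_Y = inherit_positions(support_X, support_Y, new_X)
```

Because new points start at their cluster's previous location, successive embeddings remain visually comparable, which is the property the paper's two-cost objective enforces more rigorously.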

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 114
258 A Numerical Study for Improving the Performance of Vertical Axis Wind Turbines by a Wind Power Tower

Authors: Soo-Yong Cho, Chong-Hyun Cho, Chae-Whan Rim, Sang-Kyu Choi, Jin-Gyun Kim, Ju-Seok Nam

Abstract:

Recently, vertical axis wind turbines (VAWTs) have been widely used to produce electricity, even in urban areas. They have several merits, such as low noise, easy installation of the generator, and a simple structure without a yaw-control mechanism. However, their blades operate under the influence of the trailing vortices generated by the preceding blades. This phenomenon deteriorates their output power and makes it difficult to predict their performance correctly. In order to improve the performance of a VAWT, a wind power tower can be applied. Usually, the wind power tower is constructed as a multi-story building to increase the frontal area of the wind stream. Hence, multiple sets of VAWTs can be installed within the wind power tower and operated at high elevation. Many different types of wind power tower can be used in the field. In this study, a wind power tower with a circular column shape was applied, and the VAWT was installed at the center of the tower. Seven guide walls were used as struts between the floors of the tower. These guide walls were utilized not only to increase the wind velocity within the tower but also to adjust the wind direction to create a better working condition for the VAWT. Hence, some important design variables, such as the distance between the wind turbine and the guide wall, the outer diameter of the tower, and the direction of the guide wall against the wind direction, should be considered to enhance the output power of the VAWT. A numerical analysis was conducted to find the optimum dimensions of the design variables using computational fluid dynamics (CFD), which can be a more accurate prediction method than stream-tube methods. In order to obtain accurate results with CFD, transient analysis and full three-dimensional (3-D) computation are needed.
However, full 3-D CFD is hardly practical because it requires huge computation times. Therefore, a reduced computational domain was applied as a practical alternative. In this study, computations were conducted in the reduced computational domain and compared with experimental results in the literature, and the mechanism behind the differences between the experimental and computational results was examined. The computed results showed that this computational method could be effective in a design methodology using an optimization algorithm. After validation of the numerical method, CFD on the wind power tower was conducted with the important design variables affecting the performance of the VAWT. The results showed that the output power of the VAWT obtained using the wind power tower was increased compared to that obtained without the wind power tower. In addition, they showed that the increase in output power depended greatly on the dimensions of the guide wall.

Keywords: CFD, performance, VAWT, wind power tower

Procedia PDF Downloads 356
257 Nigerian Media Coverage of the Chibok Girls Kidnap: A Qualitative News Framing Analysis of the Nation Newspaper

Authors: Samuel O. Oduyela

Abstract:

Over the last ten years, many studies have examined the media coverage of terrorism across the world. Nevertheless, most of these studies have been inclined to the western narrative, more so in relation to the international media. This study departs from that partiality to explore the Nigerian press and its coverage of Boko Haram, illustrating how the Nigerian press has reported homegrown terrorism within its borders. On 14 April 2014, the Shekau-led Boko Haram kidnapped over 200 female students from Chibok in Borno State. This study analyses a structured sample of news stories, feature articles, editorial comments, and opinions from the Nation newspaper. It examined the representation of the Chibok girls' kidnap by concentrating on four main viewpoints: the news framing of the kidnap under Presidents Goodluck Jonathan (2014) and Muhammadu Buhari (2016-2018), the sourcing model present in the news reporting of the kidnap, and the challenges Nation reporters face in reporting Boko Haram. The study adopted qualitative news framing analysis to provide further insight into significant developments established from the examination of news contents. It found that the news reportage mainly focused on the government's response to the Chibok girls' kidnap, the international press, and Boko Haram. Boko Haram was also framed as a political conspiracy, as prevailing, and as instilling fear. Political and economic influence appeared to be a significant determinant of the reportage. The study found that the Nation newspaper's portrayal of the crisis under President Jonathan differed significantly from that under President Buhari: while the newspaper framed President Jonathan's actions as lacklustre, dismissive, and confusing, it was less critical of President Buhari's government's handling of the crisis. The Nation newspaper failed to promote or explore non-violent approaches.
News reports of the kidnap were thus presented mainly from a political and ethnoreligious perspective. The study also raised questions about what roles journalists should play in covering conflicts: should they merely report on and interpret them, or should they be actors in the resolution or, more importantly, the prevention of conflicts? The study underlined the need for the independence of the media and for more training for journalists to advance more nuanced and conflict-sensitive news coverage in the Nigerian context.

Keywords: boko haram, chibok girls kidnap, conflict in nigeria, media framing

Procedia PDF Downloads 117
256 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts, and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches to these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed TARF, a dedicated web toolkit for annotating RNA-related genomic features. TARF intends to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills three main functions. First, it adds annotation on gene and RNA transcript components: for every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined in one table that is available for copying and download. Summary statistics about ambiguous assignments are also computed. Second, the tool provides a convenient visualization of the features at the single gene/transcript level.
For a selected gene, the tool shows the features against the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package: the distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features, related to chromatin H3K4me3, RNA N6-methyladenosine (m6A), and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq, and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A, and m5C are enriched near transcription start sites, stop codons, and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features and should help simplify the analysis of such features, especially those related to RNA modifications.
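
TARF's first function, overlapping user features with transcript components, boils down to interval intersection on half-open BED-style coordinates. A minimal pure-Python sketch follows; the transcript layout and coordinates are invented for illustration, and TARF itself works from full GTF gene models rather than a hand-written list.

```python
def annotate_features(features, components):
    """Label each genomic feature with the transcript components it overlaps.

    features:   list of (chrom, start, end) half-open intervals (BED-style)
    components: list of (chrom, start, end, name), e.g. 5'UTR / CDS / 3'UTR
    Returns one (feature, [component names]) pair per feature; more than
    one name means an ambiguous assignment, none means intergenic.
    """
    annotated = []
    for chrom, fs, fe in features:
        hits = [name for c, cs, ce, name in components
                if c == chrom and fs < ce and cs < fe]  # interval overlap test
        annotated.append(((chrom, fs, fe), hits))
    return annotated

# A toy transcript on chr1: 5'UTR [100, 200), CDS [200, 800), 3'UTR [800, 1000).
components = [("chr1", 100, 200, "5'UTR"),
              ("chr1", 200, 800, "CDS"),
              ("chr1", 800, 1000, "3'UTR")]
features = [("chr1", 150, 180),    # inside the 5'UTR
            ("chr1", 190, 250),    # spans the 5'UTR/CDS boundary (ambiguous)
            ("chr1", 1200, 1300)]  # intergenic
result = annotate_features(features, components)
```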

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 181
255 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model accounts for the sensitivity to noisy samples and handles imprecision in the training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized to reduce training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are conducted on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels.
The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. PFRSVM effectively resolves outlier effects and the class-imbalance and class-overlap problems, generalizes to unseen data, and relaxes the dependency between features and labels. Its average classification accuracy is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
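The core fuzzy-weighting idea can be sketched on a single node: each training sample receives a membership in (0, 1] that shrinks with its distance from its class centre, down-weighting likely noise and outliers before SVM training. This is an illustrative sketch only; the membership function, data and parameters below are assumptions, and the paper's parallel Hadoop/MapReduce layer is not reproduced. Note that scikit-learn's "sigmoid" kernel is the hyperbolic tangent kernel tanh(gamma * <x, x'> + r).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic, well-separated classes in 2-D feature space.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Fuzzy membership: 1 at the class centre, decaying towards the class
# radius (the farthest member), so marginal samples count for less.
w = np.empty(len(X))
for c in (0, 1):
    idx = y == c
    centre = X[idx].mean(axis=0)
    d = np.linalg.norm(X[idx] - centre, axis=1)
    w[idx] = 1.0 - d / (d.max() + 1e-6)  # memberships in (0, 1]

# Hyperbolic tangent ("sigmoid") kernel SVM, fuzzy-weighted samples.
clf = SVC(kernel="sigmoid", gamma="scale", C=1.0)
clf.fit(X, y, sample_weight=w)
print(clf.score(X, y), len(clf.support_))
```

In the parallel setting, the same fit would be distributed over data partitions via MapReduce and the resulting support vectors combined.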

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 464
254 Educational Infrastructure a Barrier for Teaching and Learning Architecture

Authors: Alejandra Torres-Landa López

Abstract:

Introduction: Can architecture students be creative in spaces shaped by an educational infrastructure built with paradigms of the past? This question and related ones are answered in this paper, which presents the PhD research 'An anthropic conflict in Mexican Higher Education Institutes: problems and challenges of the educational infrastructure in teaching and learning History of Architecture'. This research was completed in 2013 and is one of the first studies conducted nationwide in Mexico that analyzes the impact of educational infrastructure on learning architecture. Its objective was to identify which elements of the educational infrastructure of the Mexican Higher Education Institutes where architects are trained hinder or contribute to the teaching and learning of History of Architecture, and how and why this happens. The methodology: A mixed methodology was used, combining quantitative and qualitative analysis. Different resources and strategies for data collection were used, such as questionnaires for students and teachers, interviews with architecture research experts, and direct observations in architecture classes, among others; the data collected were analyzed using SPSS and MAXQDA. The reliability of the quantitative data was supported by a Cronbach's alpha coefficient of 0.86, a figure that gives the data sufficient support. All the above made it possible to confirm the anthropic conflict in which Mexican universities find themselves. Major findings of the study: Although some of the findings were probably not unknown, they had not been systematized and analyzed with the depth to which this is done in this research.
So, it can be said that the educational infrastructure of most of the Higher Education Institutes studied is a barrier to the educational process. Some of the reasons are: the little morphological variation of space; the inadequate control of lighting, noise, temperature, equipment and furniture; the poor or nonexistent accessibility for disabled people; as well as the absence, obsolescence and/or insufficiency of information technologies. These are some of the issues that generate an anthropic conflict (understood as the difficulty teachers and students have in relating to one another in order to achieve significant learning). It is clear that most of the educational infrastructure of Mexican Higher Education Institutes is anchored to paradigms of the past; it seems to respond to the previous era of industrialization. The results confirm that the educational infrastructure of the Mexican Higher Education Institutes where architects are trained is perceived as a "closed container" of people and data; infrastructure that becomes a barrier to the teaching and learning process. Conclusion: The research results show it is time to change the paradigm in which we conceive the educational infrastructure. It is time to stop seeing it only as classrooms, workshops, laboratories and libraries; it must be seen from a constructive, urban, architectural and human point of view, taking into account its different dimensions: physical, technological, documental and social, among others. In this way the educational infrastructure can become a set of elements that organize and create spaces where ideas and thoughts can be shared, and a social catalyst where people can interact with each other and with the space itself.

Keywords: educational infrastructure, impact of space in learning architecture outcomes, learning environments, teaching architecture, learning architecture

Procedia PDF Downloads 378
253 The Confluence between Autism Spectrum Disorder and the Schizoid Personality

Authors: Murray David Schane

Abstract:

Through years of clinical encounters with patients with autism spectrum disorders and patients with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical and distinct treatment strategies for each have been devised. The paper compares and contrasts the apparent similarities between autism spectrum disorders and the schizoid personality as found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of the desire for or enjoyment of close relationships; and preference for solitary activities. In this paper autism, fundamentally a communicative disorder, is revealed to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency and apparent indifference to others that serves as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia.
Through presentations of clinical examples, the treatment of autists of the Asperger type is shown to address the autist's extreme social aversion, which also precludes the experience of empathy. Autists will be revealed as forming social attachments but without the capacity to interact with mutual concern. Empathy will be shown to be teachable and, as social avoidance relents, autists can learn to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids will be shown to revolve around joining empathically with the schizoid's apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kind of clues to behavioral patterns that can be related to genetic, epigenetic and neurobiological measures. But as these clinical examples will attest, treatment strategies have significant impact.

Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions

Procedia PDF Downloads 89
252 Enhanced Stability of Piezoelectric Crystalline Phase of Poly(Vinylidene Fluoride) (PVDF) and Its Copolymer upon Epitaxial Relationships

Authors: Devi Eka Septiyani Arifin, Jrjeng Ruan

Abstract:

As an approach to manipulating the performance of polymer thin films, epitaxial crystallization within polymer blends of poly(vinylidene fluoride) (PVDF) and its copolymer poly(vinylidene fluoride-trifluoroethylene) P(VDF-TrFE) was studied in this research, which involves the competition between phase separation and crystal growth of the constitutive semicrystalline polymers. The unique piezoelectric feature of the poly(vinylidene fluoride) crystalline phase derives from the packing of molecular chains in the all-trans conformation, which spatially arranges all the substituted fluorine atoms on one side of the molecular chain and the hydrogen atoms on the other side. Therefore, a net dipole moment is induced across the lateral packing of molecular chains. Nevertheless, due to the mutual repulsion among fluorine atoms, this all-trans molecular conformation is not stable and is ready to change above the Curie temperature, where thermal energy is sufficient to cause segmental rotation. This research attempts to explore whether the epitaxial interactions between the piezoelectric crystals and the crystal lattice of hexamethylbenzene (HMB) crystalline platelets are able to stabilize this metastable all-trans molecular conformation. As an aromatic crystalline compound, the melt of HMB was surprisingly found to be able to dissolve poly(vinylidene fluoride), resulting in a homogeneous eutectic solution. Thus, after quenching this binary eutectic mixture to room temperature, subsequent heating or annealing processes were designed to explore the involved phase separation and crystallization behavior. The phase transition behaviors were observed in situ by X-ray diffraction and differential scanning calorimetry (DSC). The molecular packing was observed via transmission electron microscopy (TEM), and the principles of electron diffraction were applied to study the internal crystal structure epitaxially developed within the thin films.
The obtained results clearly indicated the occurrence of heteroepitaxy of PVDF/P(VDF-TrFE) on the HMB crystalline platelet. Both the concentration of poly(vinylidene fluoride) and the mixing ratio of the two constitutive polymers have been adopted as influential factors for studying the competition between the epitaxial crystallization of PVDF and P(VDF-TrFE) on the HMB crystal. Furthermore, the involved epitaxial relationship is to be deciphered and studied as a potential factor capable of guiding the widespread growth of the piezoelectric crystalline form.

Keywords: epitaxy, crystallization, crystalline platelet, thin film and mixing ratio

Procedia PDF Downloads 197
251 Distinguishing between Bacterial and Viral Infections Based on Peripheral Human Blood Tests Using Infrared Microscopy and Multivariate Analysis

Authors: H. Agbaria, A. Salman, M. Huleihel, G. Beck, D. H. Rich, S. Mordechai, J. Kapelushnik

Abstract:

Viral and bacterial infections are responsible for a variety of diseases. These infections have similar symptoms, like fever, sneezing, inflammation, vomiting, diarrhea and fatigue. Thus, physicians may encounter difficulties in distinguishing between viral and bacterial infections based on these symptoms. Bacterial infections differ from viral infections in many other important respects regarding the response to various medications and the structure of the organisms. In many cases, it is difficult to know the origin of the infection. When necessary, the physician orders a blood test, a urine test, or a tissue culture to diagnose the infection type. Using these methods, the time that elapses between the receipt of patient material and the presentation of the test results to the clinician is typically too long (> 24 hours). This time is crucial in many cases for saving the life of the patient and for planning the right medical treatment. Thus, rapid identification of bacterial and viral infections in the lab is of great importance for effective treatment, especially in cases of emergency. Blood was collected from 50 patients with confirmed viral infection and 50 with confirmed bacterial infection. White blood cells (WBCs) and plasma were isolated, deposited on a zinc selenide slide, dried and measured under a Fourier transform infrared (FTIR) microscope to obtain their infrared absorption spectra. The acquired spectra of WBCs and plasma were analyzed in order to differentiate between the two types of infections. In this study, the potential of FTIR microscopy in tandem with multivariate analysis was evaluated for the identification of the agent that causes a human infection. The method was used to identify the infectious agent type as either bacterial or viral, based on an analysis of the blood components [i.e., white blood cells (WBCs) and plasma] using their infrared vibrational spectra.
The time required for the analysis and evaluation after obtaining the blood sample was less than one hour. In the analysis, minute spectral differences in several bands of the FTIR spectra of WBCs were observed between groups of samples with viral and bacterial infections. By employing the techniques of feature extraction with linear discriminant analysis (LDA), a sensitivity of ~92 % and a specificity of ~86 % for an infection type diagnosis was achieved. The present preliminary study suggests that FTIR spectroscopy of WBCs is a potentially feasible and efficient tool for the diagnosis of the infection type.
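The feature-extraction-plus-LDA step can be sketched as follows: dimensionality reduction compresses the high-dimensional spectra, LDA separates the two infection classes, and sensitivity/specificity are read off a cross-validated confusion matrix. The synthetic "spectra" below are stand-ins; the band position, class difference and noise level are invented for illustration and do not reflect the study's data or its exact preprocessing.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 100, 400

# Synthetic absorption spectra: a shared baseline plus noise, with a
# subtle class-specific band added for the "bacterial" group.
base = np.sin(np.linspace(0, 8 * np.pi, n_wavenumbers))
X = base + rng.normal(0, 0.3, (n_samples, n_wavenumbers))
y = np.array([0] * 50 + [1] * 50)  # 0 = viral, 1 = bacterial
X[y == 1, 120:140] += 0.8          # hypothetical discriminating band

# PCA compresses the spectra; LDA finds the discriminant direction.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
pred = cross_val_predict(model, X, y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

On real spectra the discriminating differences are far subtler, which is why careful preprocessing and feature selection matter before LDA.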

Keywords: viral infection, bacterial infection, linear discriminant analysis, plasma, white blood cells, infrared spectroscopy

Procedia PDF Downloads 192
250 COVID Prevention, Working Environment Risk Prevention and Business Continuity among the SMEs in Selected Districts in Sri Lanka

Authors: Champika Amarasinghe

Abstract:

Introduction: The Covid 19 pandemic hit the Sri Lankan economy hard during 2021. More than 65% of the Sri Lankan work force is engaged in small and medium scale businesses, which undoubtedly had to struggle for their survival and business continuity during the pandemic. Objective: To assess the association between adherence to the new norms during the Covid 19 pandemic and the maintenance of healthy working environmental conditions for business continuity. A cross sectional study was carried out to assess the OSH status and the adequacy of Covid 19 preventive strategies among 200 SMEs in two selected districts in Sri Lanka. These two districts were selected considering the highest availability of SMEs. The sample size was calculated, and probability proportionate to size sampling was used to select the SMEs registered with the small and medium scale development authority. An interviewer-administered questionnaire was used to collect the data, and an OSH risk assessment was carried out by a team of experts to assess the OSH status of these industries. Results: According to the findings, more than 90% of the employees in these industries had a moderate awareness of COVID 19 disease and preventive strategies, such as the importance of mask use, hand sanitizing practices and distance maintenance, but only forty percent of them actually implemented these practices. Furthermore, only thirty five percent of the employees and employers in these SMEs knew the reasons behind the new norms, which may explain the reluctance to implement these strategies and to adhere to the new norms in this sector. The OSH risk assessment findings revealed that the organization of the working environment for maintaining distance between employees was poor due to the inadequacy of space in these entities. More than fifty five percent of the SMEs had proper ventilation and lighting facilities.
More than eighty five percent of these SMEs had poor electrical safety measures. Furthermore, eighty two percent of them had not maintained fire safety measures. Eighty five percent of the workers were exposed to high noise levels and chemicals without using any personal protective equipment, and no other engineering controls were in place. Floor conditions were poor, and records of occupational accidents and occupational diseases were not maintained. Conclusions: Based on the findings, proper awareness sessions were carried out by NIOSH. Six physical training sessions and continuous online trainings were conducted to overcome these issues, which made a drastic change in the working environments and ended with one hundred percent implementation of the Covid 19 preventive strategies, which in turn improved worker participation in the businesses. Absenteeism was reduced, business opportunities improved, and the SMEs continued their businesses without any interruption during the third episode of Covid 19 in Sri Lanka.

Keywords: working environment, Covid 19, occupational diseases, occupational accidents

Procedia PDF Downloads 63
249 Glycosaminoglycan, a Cartilage Erosion Marker in Synovial Fluid of Osteoarthritis Patients Strongly Correlates with WOMAC Function Subscale

Authors: Priya Kulkarni, Soumya Koppikar, Narendrakumar Wagh, Dhanshri Ingle, Onkar Lande, Abhay Harsulkar

Abstract:

Cartilage is an extracellular matrix composed of aggrecan, which imparts it with great tensile strength, stiffness and resilience. Disruption of cartilage metabolism leading to progressive degeneration is a characteristic feature of Osteoarthritis (OA). The process involves enzymatic depolymerisation of the cartilage-specific proteoglycan, releasing free glycosaminoglycan (GAG). The GAG released into the synovial fluid (SF) of the knee joint serves as a direct measure of cartilage loss; its use is limited, however, by the invasive nature of SF collection. The Western Ontario and McMaster Universities Arthritis Index (WOMAC) is widely used for assessing pain, stiffness and physical function in OA patients. The scale is comprised of three subscales, namely pain, stiffness and physical function, and intends to measure the patient's perspective of disease severity as well as the efficacy of the prescribed treatment. Twenty SF samples obtained from OA patients were analysed for their GAG values using a DMMB-based assay. The LK 1.0 vernacular version was used to administer the WOMAC scale. The results were evaluated using SAS University software (Edition 1.0) for statistical significance. All OA patients revealed higher GAG values compared to the control value of 78.4±30.1 µg/ml (obtained from our non-OA patients). The average WOMAC calculated was 51.3, while the pain, stiffness and function subscales were 9.7, 3.9 and 37.7, respectively. Interestingly, a strong statistical correlation was established between the WOMAC function subscale and GAG (p = 0.0102). This subscale is based on day-to-day activities like stair use, bending, walking, getting in/out of a car and rising from bed. However, the pain and stiffness subscales did not show correlation with any of the studied markers, endorsing the atypical inflammation in OA pathology. While knee pain showed poor correlation with GAG, it is often noted that radiography is insensitive to cartilage degenerative changes; thus OA remains undiagnosed for long.
Moreover, the active cartilage degradation phase remains elusive to both patient and clinician. Through analysis of a large number of OA patients we have established a close association between Kellgren-Lawrence grades and increased cartilage loss. A direct correlation of WOMAC and the radiographic progression of OA with various biomarkers has not been attempted so far. We found a good correlation between GAG levels in SF and the function subscale.
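The statistical step behind the reported association (p = 0.0102) is a straightforward correlation test between per-patient GAG values and WOMAC function-subscale scores. A minimal sketch follows; the values are synthetic stand-ins generated for illustration, not the patients' data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical per-patient measurements for 20 OA patients:
# SF GAG concentration (ug/ml) and a linearly related (noisy)
# WOMAC function-subscale score.
gag = rng.uniform(100, 400, 20)
womac_function = 10 + 0.1 * gag + rng.normal(0, 5, 20)

# Pearson correlation between cartilage-erosion marker and function score.
r, p = stats.pearsonr(gag, womac_function)
print(f"r={r:.2f}, p={p:.4f}")
```

With ordinal subscale data a rank-based test such as Spearman's rho would be a reasonable alternative; the abstract does not state which coefficient was used.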

Keywords: cartilage, Glycosaminoglycan, synovial fluid, western ontario and McMaster Universities Arthritis Index

Procedia PDF Downloads 421
248 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability

Authors: Akshay B. Pawar, Rohit Y. Parasnis

Abstract:

Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV and have compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in or alternatives to the pulse-cycle-interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters.
After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time domain parameters (SDNN, pNN50 and RMSSD) and frequency domain parameters (very low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF) and total power (TP)). In addition, Poincaré plots were plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
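The time-domain parameters compared in the study (SDNN, RMSSD, pNN50) have standard definitions over a series of inter-beat intervals, whichever fiducial point produced them. The sketch below computes them from a synthetic interval series in milliseconds; a real analysis would start from ECG R-peaks or one of the five PPG fiducial points described above.

```python
import numpy as np

def hrv_time_domain(ibi_ms):
    """Time-domain HRV parameters from inter-beat intervals (ms).

    Returns (SDNN, RMSSD, pNN50):
      SDNN  - standard deviation of all intervals (overall variability)
      RMSSD - root mean square of successive differences (short-term)
      pNN50 - % of successive differences exceeding 50 ms
    """
    ibi = np.asarray(ibi_ms, dtype=float)
    diff = np.diff(ibi)
    sdnn = ibi.std(ddof=1)
    rmssd = np.sqrt(np.mean(diff ** 2))
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50)
    return sdnn, rmssd, pnn50

# Synthetic series: ~75 bpm with Gaussian beat-to-beat jitter.
rng = np.random.default_rng(3)
ibi = 800 + rng.normal(0, 40, 300)
sdnn, rmssd, pnn50 = hrv_time_domain(ibi)
print(f"SDNN={sdnn:.1f} ms  RMSSD={rmssd:.1f} ms  pNN50={pnn50:.1f}%")
```

Because RMSSD and pNN50 depend only on successive differences, they are the parameters most inflated when PPG pulse timing jitters relative to the true RR intervals, consistent with the overestimation of short-term variability noted above.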

Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot

Procedia PDF Downloads 288