Search results for: data analyses
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26764


23494 Language and Study Skill Needs: A Case Study of ESP Learners at the Language Centre of Sultan Qaboos University, Oman

Authors: Ahmed Mohamed Al-Abdali

Abstract:

Providing English for Specific Purposes (ESP) courses that are closely geared to learners’ needs and requirements in their fields of study undoubtedly enhances learners’ interest and success in a highly academic environment. While needs analysis is crucial to the success of ESP courses, it has not received sufficient attention from researchers in the Arab world, and Oman is no exception, as the ESP practices in the Omani higher educational context show. This presentation discusses the perceptions of Language Centre (LC) students at Sultan Qaboos University (SQU), Oman, in relation to the requirements of their science colleges. The discussion is based on a mixed-methods study comprising semi-structured interviews, questionnaires and document analyses. These mixed methods allowed for closer investigation of the participants' views, backgrounds and experiences. It is hoped that the findings of this study will be used to recommend changes to the ESP curriculum in the LC of SQU so that it better meets the needs of its students and the requirements of the science colleges.

Keywords: curriculum, ESP, ELT, needs analysis, college requirements

Procedia PDF Downloads 312
23493 Digital Literacy Skills for Geologist in Public Sector

Authors: Angsumalin Puntho

Abstract:

Disruptive technology has had a great influence on our everyday lives and on the existence of organizations. Geologists in the public sector need to keep up with digital technology and be able to work and collaborate more effectively. The results of SWOT and McKinsey 7S analyses suggest that there are inadequate IT personnel, no individual digital literacy development plans, and a misunderstanding of management policies. The Office of the Civil Service Commission defines the digital literacy skills that civil servants and government officers should possess in order to work effectively; these comprise nine dimensions, including computer skills, internet skills, cyber security awareness, word processing, spreadsheets, presentation programs, online collaboration, graphics editors and cyber security practices, and six steps of digital literacy development: self-assessment, an individual development plan, self-learning, a certified test, learning reflection, and practice. Geologists can use digital literacy as a learning tool to develop themselves for better career opportunities.

Keywords: disruptive technology, digital technology, digital literacy, computer skills

Procedia PDF Downloads 103
23492 Iterative Method for Lung Tumor Localization in 4D CT

Authors: Sarah K. Hagi, Majdi Alnowaimi

Abstract:

In the last decade, there have been immense advancements in medical imaging modalities, which can now scan the whole lung volume as high-resolution images within a short time. With this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung. These advancements therefore open large opportunities for advancing all available types of lung cancer treatment and will increase the survival rate. However, lung cancer is still one of the major causes of death, with around 19% of all cancer patients. Several factors may affect the survival rate; one serious factor is the breathing process, which can affect the accuracy of diagnosis and the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor position across all respiratory phases during respiratory motion. The algorithm can be divided into two stages. First, the lung tumor is segmented in the first phase of the 4D computed tomography (CT) scan using an active contours method. Then, the tumor's 3D position is localized across all subsequent phases using an affine transformation with 12 degrees of freedom. Two data sets were used in this study: a simulated 4D CT data set generated with the extended cardiac-torso (XCAT) phantom, and clinical 4D CT data sets. The error is reported as root mean square error (RMSE); the average error across the data sets is 0.94 ± 0.36 mm. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is presented. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
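The reported accuracy can be reproduced conceptually as a root-mean-square error over per-phase localization errors. A minimal sketch with hypothetical tumor coordinates, not the study's data:

```python
import numpy as np

def rmse_3d(predicted, reference):
    """Root mean square error between predicted and reference 3D positions (mm)."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Euclidean error per respiratory phase, then RMS across phases
    errors = np.linalg.norm(predicted - reference, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Hypothetical tumor centroids (mm) across four respiratory phases
ref = [[10.0, 20.0, 30.0], [10.5, 20.2, 31.0], [11.0, 20.4, 32.0], [10.2, 20.1, 30.5]]
pred = [[10.1, 20.0, 30.2], [10.4, 20.3, 31.1], [11.2, 20.3, 31.8], [10.3, 20.0, 30.4]]
print(rmse_3d(pred, ref))
```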

Keywords: automated algorithm, computed tomography, lung tumor, tumor localization

Procedia PDF Downloads 593
23491 Incorporating Anomaly Detection in a Digital Twin Scenario Using Symbolic Regression

Authors: Manuel Alves, Angelica Reis, Armindo Lobo, Valdemar Leiras

Abstract:

In Industry 4.0, it is common to have a lot of sensor data. In this deluge of data, hints of possible problems are difficult to spot. The digital twin concept aims to help answer this problem, but it is mainly used as a monitoring tool to handle the visualisation of data. Failure detection is of paramount importance in any industry, and it consumes a lot of resources; any improvement in this regard is of tangible value to the organisation. The aim of this paper is to add the ability to forecast test failures, curtailing detection times. To achieve this, several anomaly detection algorithms were compared with a symbolic regression approach: Isolation Forest, One-Class SVM and an auto-encoder were explored, and for symbolic regression the PySR library was used. The first results show that this approach is valid and can be added to the tools available in this context as a low-resource anomaly detection method since, after training, the only requirement is the evaluation of a polynomial, a useful feature in the digital twin context.
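Although the paper relies on PySR to discover the expression, the core idea — after training, anomaly detection reduces to evaluating a polynomial and flagging large residuals — can be sketched with a plain least-squares polynomial standing in for the symbolic-regression model (all data below is synthetic):

```python
import numpy as np

# Hypothetical sensor trace: the symbolic-regression stage is stood in for
# by a simple polynomial fit (PySR would search for the expression itself).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
signal = 0.5 * t ** 2 - 2.0 * t + rng.normal(0.0, 0.2, t.size)
signal[150] += 5.0  # injected fault

coeffs = np.polyfit(t, signal, deg=2)        # "learned" polynomial
residual = np.abs(signal - np.polyval(coeffs, t))
threshold = residual.mean() + 4.0 * residual.std()  # threshold choice is an assumption
anomalies = np.flatnonzero(residual > threshold)
print(anomalies)
```

After training, flagging a new reading costs only one polynomial evaluation and a comparison, which is the low-resource property the abstract highlights.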

Keywords: anomaly detection, digital twin, industry 4.0, symbolic regression

Procedia PDF Downloads 108
23490 Re-Constructing the Research Design: Dealing with Problems and Re-Establishing the Method in User-Centered Research

Authors: Kerem Rızvanoğlu, Serhat Güney, Emre Kızılkaya, Betül Aydoğan, Ayşegül Boyalı, Onurcan Güden

Abstract:

This study addresses the re-construction and implementation process of the methodological framework developed to evaluate how locative media applications accompany the urban experiences of international students coming to Istanbul through exchange programs in 2022. The research design was built on a three-stage model. In the first stage, the research team conducted a qualitative questionnaire to gain exploratory data; these data were then used to form three persona groups representing the sample by applying cluster analysis. In the second stage, a semi-structured digital diary study based on a gamified task list was carried out with a sample selected from the persona groups. This stage proved the most difficult in terms of obtaining valid data from the participant group. The research team re-evaluated the design of this second stage to reach participants who would perform the assigned tasks while sharing their momentary city experiences, to ensure a daily data flow for two weeks, and to increase the quality of the data obtained. The final stage, which elaborates on the findings, is the “Walk & Talk,” completed with face-to-face, in-depth interviews. The multiple methods used in the research process were found to contribute to the depth and data diversity of research conducted in the context of urban experience and locative technologies. In addition, by adapting the research design to the experiences of the users included in the sample, the differences and similarities between the initial research design and the design as applied are shown.

Keywords: digital diary study, gamification, multi-model research, persona analysis, research design for urban experience, user-centered research, “Walk & Talk”

Procedia PDF Downloads 161
23489 Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe

Authors: Umberto Di Staso, Marco Soave, Alessio Giori, Federico Prandi, Raffaele De Amicis

Abstract:

The increasing availability of information about earth surface elevation (Digital Elevation Models, DEMs) generated from different sources (remote sensing, aerial images, LiDAR) raises the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to high acquisition costs and the huge amount of data generated, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Hence the need to merge large-scale height maps, which are typically available for free at the worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the quality of data representation, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environment monitoring, etc. Open-source 3D virtual globes, a trending topic in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. However, 3D virtual globes typically do not offer an open-source tool that allows the generation of a terrain elevation data structure from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called “Terrain Builder”. This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.

Keywords: Terrain Builder, WebGL, Virtual Globe, CesiumJS, Tiled Map Service, TMS, Height-Map, Regular Grid, Geovisual Analytics, DTM

Procedia PDF Downloads 414
23488 Lacustrine Sediments of the Poljanska Locality in the Miocene Climatic Optimum North Croatian Basin, Croatia

Authors: Marijan Kovačić, Davor Pavelić, Darko Tibljaš, Ivo Galić, Frane Marković, Ivica Pavičić

Abstract:

The North Croatian Basin (NCB) occupies the southwestern part of the Pannonian Basin System and belongs to the Central Paratethys realm. In a quarry near the village of Poljanska, on the southern slopes of Mt. Papuk in eastern Croatia, a 40-meter-thick section is exposed, consisting of well-bedded, mixed carbonate-siliciclastic deposits with occurrences of pyroclastics. Sedimentological investigation indicates that a salina lake developed in the central NCB during the late early Miocene. Field studies and mineralogical and petrological analyses indicate that the lower part of the section is characterized by alternations of laminated crypto- to microcrystalline dolomite and analcimolite (sedimentary rocks composed essentially of authigenic analcime) associated with tuffites and marls. The pyroclastic material is a product of volcanic activity at the end of the early Miocene, while the formation of analcime, a zeolite-group mineral, is a result of the alteration of pyroclastic material in an alkaline lacustrine environment. These sediments were deposited in a shallow, hydrologically closed lake that was controlled by an arid climate during the first phase of its development. The middle part of the section consists of dolomites interbedded with analcimolites and sandstones. The sandstone beds are a result of the increased supply of clastic material derived from the locally uplifted metamorphic and granitoid basement. The emplacement of sandstones and dolomites reflects a distinct alternation of hydrologically open and closed lacustrine environments controlled by the frequent alternation of humid and arid climates, representing the second phase of lake development. The siliciclastics of the third phase of lake development were deposited during the Middle Miocene in a hydrologically mostly open lake. All lacustrine deposition coincides with the Miocene Climatic Optimum, which was characterized by a hot and warm climate.
The sedimentological data confirm the mostly wet conditions previously identified by paleobotanical studies in the region. The exception is the relatively long interval of arid climate in the late early Miocene that controlled the first phase of lake evolution, i.e., the salina-type lake.

Keywords: early Miocene, Pannonian Basin System, pyroclastics, salina-type lake

Procedia PDF Downloads 199
23487 GRABTAXI: A Taxi Revolution in Thailand

Authors: Danuvasin Charoen

Abstract:

The study investigates the business process and business model of GrabTaxi. The paper also discusses how the company implemented strategies to gain competitive advantages. The data are derived from the analysis of secondary data and in-depth interviews with staff, taxi drivers, and key customers. The findings indicate that the company’s competitive advantages come from being the first mover, emphasising the ease of use and tangible benefits of the application, and using a network-effect strategy.

Keywords: taxi, mobile application, innovative business model, Thailand

Procedia PDF Downloads 294
23486 The Applicability of International Humanitarian Law to Non-State Actors

Authors: Yin Cheung Lam

Abstract:

In 1949, the ratification of the Geneva Conventions heralded the international community’s adoption of a new universal and non-discriminatory approach to human rights in situations of conflict. However, with the proliferation of international terrorism after the 9/11 attacks on the United States (U.S.), the international community’s uneven and contradictory implementations of international humanitarian law (IHL) questioned its agenda of universal human rights. Specifically, the derogation from IHL has never been so pronounced in the U.S. led ‘War on Terror’. While an extensive literature has ‘assessed the impact’ of the implementation of the Geneva Conventions, limited attention has been paid to interrogating the ways in which the Geneva Conventions and its resulting implementation have functioned to discursively reproduce certain understandings of human rights between states and non-state actors. Through a discursive analysis of the Geneva Conventions and the conceptualization of human rights in relation to terrorism, this thesis problematises the way in which the U.S. has understood and reproduced understandings of human rights. Using the U.S. ‘War on Terror’ as an example, it seeks to extend previous analyses of the U.S.’ practice of IHL through a qualitative discursive analysis of the human rights content that appears in the Geneva Conventions in addition to the speeches and policy documents on the ‘War on Terror’.

Keywords: discursive analysis, human rights, non-state actors, war on terror

Procedia PDF Downloads 596
23485 Influence of Extractives Leaching from Larch Wood on Durability of Semi-Transparent Oil-Based Coating during Accelerated Weathering

Authors: O. Dvorak, M. Panek, E. Oberhofnerova, I. Sterbova

Abstract:

Extractives contained in larch wood (Larix decidua Mill.) reduce the service life of exterior coating systems, especially transparent and semi-transparent ones. The aim of this work was to find out whether an initial several-week leaching of extractives from untreated wood in the exterior would positively affect selected characteristics and the overall service life of a semi-transparent oil-based coating. Samples exposed to exterior leaching for 10 or 20 weeks, as well as reference samples without leaching, were then treated with the coating system. Testing was performed by artificial accelerated weathering in a UV chamber combined with thermal cycling over 6 weeks. Changes of colour, gloss and surface wetting, microscopic analyses of surfaces, and visual damage of the paint were evaluated. Only the 20-week initial leaching had a positive effect, both increasing the colour stability during ageing and slightly increasing the overall service life of the tested semi-transparent coating system on larch wood.

Keywords: larch wood, coating, durability, extractives

Procedia PDF Downloads 123
23484 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant

Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani

Abstract:

Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes; their ability to learn from large datasets makes them particularly effective models. The primary obstacle in improving the performance of these models is carefully choosing suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) to optimise a deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m³ and a total volume of 1914 m³; its internal diameter and height are 19 m and 7.14 m, respectively. The data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques (MinMaxScaler, RobustScaler and StandardScaler) were applied to the pre-processed data and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R²), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
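The three reported evaluation metrics (R², MAE, RMSE) can be sketched directly; the values below are hypothetical daily biogas volumes, not the plant's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R^2, MAE and RMSE, the three scores used on the biogas-volume predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(resid))
    rmse = np.sqrt(np.mean(resid ** 2))
    return r2, mae, rmse

# Hypothetical daily biogas volumes (Nm³/d)
y_true = [1800.0, 1950.0, 2100.0, 2000.0, 2236.0]
y_pred = [1750.0, 2000.0, 2050.0, 2020.0, 2200.0]
print(regression_metrics(y_true, y_pred))
```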

Keywords: anaerobic digestion, biogas production, deep neural network, hybrid BO-TPE, hyperparameter tuning

Procedia PDF Downloads 25
23483 Geographic Information System Application for Predicting Tourism Development in Gunungkidul Regency, Indonesia

Authors: Nindyo Cahyo Kresnanto, Muhamad Willdan, Wika Harisa Putri

Abstract:

Gunungkidul is one of the emerging tourism areas in Yogyakarta Province, Indonesia. This article describes how GIS can predict the development of tourism potential in Gunungkidul. The tourism sector in Gunungkidul Regency contributes 3.34% of the total gross regional domestic product and is the economic sector with the highest growth, at 18.37% in the post-COVID-19 period. This contribution led the researchers to consider that several tourist sites need to be explored further to gradually increase regional economic development. The research starts by collecting spatial data on the tourist locations that visitors want to visit in Gunungkidul Regency, based on survey data from 571 respondents; the data are then visualized with ArcGIS software. The visualization gives an overview of the tourist destinations of interest to travellers, ranked from lowest to highest. Based on the visualization results, specific tourist locations can be identified whose development would positively influence the surrounding economy. The data are also visualized as a desire-line map that shows tourist travel patterns from each tourist's origin to the destination of interest. From the desire lines, the routes to tourist sites with a high frequency of transport activity can be identified. Predicting which routes will be burdened by heavy transport activity indicates which routes should be improved in terms of capacity and quality, the goal being to provide a sense of security and comfort for tourists who drive and to positively impact the tourist sites traversed by these routes.
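A desire-line map essentially aggregates trip frequencies per origin-destination pair. A minimal sketch with hypothetical survey responses (the place names are illustrative, not the study's 571 responses):

```python
from collections import Counter

# Hypothetical origin -> destination trips reported by survey respondents
trips = [
    ('Yogyakarta', 'Baron Beach'),
    ('Yogyakarta', 'Pindul Cave'),
    ('Yogyakarta', 'Baron Beach'),
    ('Wonosari', 'Baron Beach'),
]

# Each distinct O-D pair becomes one desire line, weighted by its trip count
desire_lines = Counter(trips)
busiest = desire_lines.most_common(1)[0]
print(busiest)
```

Ranking the pairs by count identifies the routes most likely to need capacity and quality improvements.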

Keywords: tourism development, GIS and survey, transportation, potential desire line

Procedia PDF Downloads 58
23482 Nepal Himalaya: Status of Women, Politics, and Administration

Authors: Tulasi Acharya

Abstract:

The paper is a qualitative analysis of the status of women, and of women in politics and administration, in the Nepal Himalaya. It reviews data on women in the civil service and at administrative levels. Looking at Nepali politics and administration from a social constructivist perspective, the paper highlights some social and cultural issues that have othered women as a “second sex.” As the country heads towards modernity, gender-friendly approaches are being instituted. Although the data reflect progress in women's status and in women's political and administrative participation, they are not enough to predict democratic gender practices at the political and administrative levels. The political and administrative culture of the Nepal Himalaya should be changed by promoting gender practices and deconstructing gender images in administrative culture through representative bureaucracy and by introducing democratic policies.

Keywords: politics, policy, administration, culture, women, Nepal, democracy

Procedia PDF Downloads 522
23481 Data Structure Learning Platform to Aid in Higher Education IT Courses (DSLEP)

Authors: Estevan B. Costa, Armando M. Toda, Marcell A. A. Mesquita, Jacques D. Brancher

Abstract:

Advances in technology in the last five years have allowed improvements in the educational area, such as the increasing development of educational software. One of the techniques that emerged in this period is Gamification, the utilization of video game mechanics outside their usual bounds. Recent studies involving this technique have provided positive results in areas such as marketing, health and education. In education, studies range from elementary to higher education, with many variations to suit educators' methodologies. Within higher education IT courses, data structures are an important subject, as they are the basis for many systems. Against this background, this paper presents the development of an interactive web learning environment, called DSLEP (Data Structure Learning Platform), to aid students in higher education IT courses. The system covers the basic concepts of the subject, such as stacks, queues, lists, arrays and trees, and was implemented to ease the insertion of new structures. It also incorporates gamification concepts, such as points, levels, and leaderboards, to engage students in the search for knowledge and stimulate self-learning.
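The stack and queue behaviour that such a platform visualises can be sketched in a few lines; this is only an illustration of the two concepts, not code from DSLEP itself:

```python
from collections import deque

# Stack: last in, first out (LIFO)
stack = []
for item in ('a', 'b', 'c'):
    stack.append(item)          # push
popped = [stack.pop() for _ in range(3)]        # pops in reverse order

# Queue: first in, first out (FIFO)
queue = deque()
queue.extend(['a', 'b', 'c'])   # enqueue
dequeued = [queue.popleft() for _ in range(3)]  # dequeues in arrival order

print(popped, dequeued)
```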

Keywords: gamification, interactive learning environment, data structures, e-learning

Procedia PDF Downloads 481
23480 Hidden Markov Movement Modelling with Irregular Data

Authors: Victoria Goodall, Paul Fatti, Norman Owen-Smith

Abstract:

Hidden Markov Models have become popular for the analysis of animal tracking data. These models are being used to model the movements of a variety of species in many areas around the world. A common assumption of the model is that the observations need to have regular time steps. In many ecological studies, this will not be the case. The objective of the research is to modify the movement model to allow for irregularly spaced locations and investigate the effect on the inferences which can be made about the latent states. A modification of the likelihood function to allow for these irregular spaced locations is investigated, without using interpolation or averaging the movement rate. The suitability of the modification is investigated using GPS tracking data for lion (Panthera leo) in South Africa, with many observations obtained during the night, and few observations during the day. Many nocturnal predator tracking studies are set up in this way, to obtain many locations at night when the animal is most active and is difficult to observe. Few observations are obtained during the day, when the animal is expected to rest and is potentially easier to observe. Modifying the likelihood function allows the popular Hidden Markov Model framework to be used to model these irregular spaced locations, making use of all the observed data.
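One simple way to accommodate irregular gaps in a discrete-time HMM (not necessarily the authors' exact modification, which avoids interpolation and averaging) is to use the k-step transition matrix P^k for an observation arriving k time units after the previous one. A minimal sketch of the forward probability under this assumption, with hypothetical values:

```python
import numpy as np

# Two behavioural states with a per-unit-time transition matrix (hypothetical)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def transition_for_gap(P, k):
    """Transition probabilities over an irregular gap of k elementary time units."""
    return np.linalg.matrix_power(P, k)

def forward_prob(P, emission, gaps, init):
    """Forward algorithm where each observation arrives after its own gap."""
    alpha = init * emission[0]
    for e, k in zip(emission[1:], gaps):
        alpha = (alpha @ transition_for_gap(P, k)) * e
    return float(alpha.sum())

# Hypothetical per-observation emission likelihoods and gaps (in time units),
# e.g. frequent nighttime fixes followed by a long daytime gap
emission = [np.array([0.8, 0.3]), np.array([0.1, 0.7]), np.array([0.6, 0.4])]
gaps = [1, 3]
init = np.array([0.5, 0.5])
print(forward_prob(P, emission, gaps, init))
```

All observed locations enter the likelihood directly, so none of the data need to be discarded or resampled onto a regular grid.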

Keywords: hidden Markov Models, irregular observations, animal movement modelling, nocturnal predator

Procedia PDF Downloads 238
23479 A Review of Lortie’s Schoolteacher

Authors: Tsai-Hsiu Lin

Abstract:

Dan C. Lortie’s Schoolteacher: A Sociological Study is one of the best works on the sociology of teaching since W. Waller’s classic study, and a book worthy of review. Following the tradition of the symbolic interactionists, Lortie brought the qualities of that tradition to his study of the occupation of teaching. Using several methods to gather effective data, Lortie portrayed the ethos of the teaching profession; the work is therefore an important book on the teaching profession and teacher culture. Though outstanding, Lortie’s work is also flawed in that his perspectives and methodology were adopted largely from symbolic interactionism. First, Lortie analyzed many points regarding teacher culture; for example, he was interested in exploring “sentiment,” “cathexis,” and “ethos.” Thus, he was more a psychologist than a sociologist. Second, symbolic interactionism led him to examine teacher culture from a micro view, thereby missing its structural aspects; for example, he did not fully discuss the issue of gender, and he ignored the issue of race. Finally, following the qualitative sociological tradition, Lortie employed many qualitative methods to gather data but focused only on obtaining and presenting interview data. Moreover, he used measurement methods that were too simplistic to analyze quantitative data fully.

Keywords: education reform, teacher culture, teaching profession, Lortie’s Schoolteacher

Procedia PDF Downloads 219
23478 Urban Areas Management in Developing Countries: Analysis of the Urban Areas Crossed with Risk of Storm Water Drains, Aswan-Egypt

Authors: Omar Hamdy, Schichen Zhao, Hussein Abd El-Atty, Ayman Ragab, Muhammad Salem

Abstract:

One of the riskiest areas in Aswan is Abouelreesh, which suffers from flood disasters: heavy deluges inundate urban areas, causing considerable damage to buildings and infrastructure. Moreover, the main problem is urban sprawl towards this risky area. This paper aims to identify the urban areas located in zones prone to flash floods. Analyzing this phenomenon needs a lot of data to ensure satisfactory results; however, in this case official data and field data were limited, and therefore free sources of satellite data were used. ArcGIS tools were used to obtain the storm water drain network by analyzing DEM files. Additionally, historical imagery in Google Earth was studied to determine the age of each building. The last step was to overlay the urban area layer and the storm water drain layer to identify the vulnerable areas. The results of this study would be helpful to urban planners and government officials in estimating disaster risk and developing preliminary plans to recover the risky area, especially urban areas located in torrent paths.

Keywords: risk area, DEM, storm water drains, GIS

Procedia PDF Downloads 446
23477 Content Based Face Sketch Images Retrieval in WHT, DCT, and DWT Transform Domain

Authors: W. S. Besbas, M. A. Artemi, R. M. Salman

Abstract:

Content-based face sketch retrieval can be used to find images of criminals from their sketches for crime prevention. This paper investigates the problem of content-based image retrieval (CBIR) of face sketch images in the transform domain. Face sketch images that are similar to the query image are retrieved from the face sketch database. Features of the face sketch image are extracted in the spectrum domain of selected transforms: the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and Walsh Hadamard Transform (WHT). For the performance analysis of the feature selection methods, three face image databases are used: the Sheffield face database, the Olivetti Research Laboratory (ORL) face database, and the Indian face database. The city block distance measure is used to evaluate the performance of the retrieval process. The investigation concludes that the retrieval rate is database dependent, but in general the DCT performs best, while the WHT is best with respect to the speed of retrieving images.
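The city block (L1) distance used to rank database sketches against a query can be sketched as follows; the short feature vectors here are hypothetical stand-ins for the low-frequency transform coefficients:

```python
import numpy as np

def city_block(a, b):
    """L1 (city block) distance between two feature vectors."""
    return float(np.sum(np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))))

# Hypothetical transform-domain features (e.g. a few DCT coefficients)
query = [0.9, 0.2, 0.4]
database = {
    'sketch_A': [0.80, 0.25, 0.35],
    'sketch_B': [0.10, 0.70, 0.90],
    'sketch_C': [0.85, 0.20, 0.50],
}

# Rank database entries from most to least similar to the query
ranking = sorted(database, key=lambda k: city_block(query, database[k]))
print(ranking)
```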

Keywords: Content Based Image Retrieval (CBIR), face sketch image retrieval, feature selection for CBIR, image retrieval in transform domain

Procedia PDF Downloads 479
23476 Socio-Demographic Predictors of Divorce Adjustment in Pakistani Women

Authors: Rukhsana Kausar, Nida Zafar

Abstract:

The present research investigated socio-demographic predictors of divorce adjustment in Pakistani women. The sample comprised 80 divorced women from different areas of Lahore. A self-developed socio-demographic predictor scale and the Divorce Adjustment Scale (Fisher, 2001) were used for assessment. Analyses showed that working divorced women living in a joint family system are better adjusted than non-working divorced women living in a joint family system, and that women with one child are better adjusted than women with more than one child. The findings also highlight the importance of the presence of a father for the healthy development of adolescents. Adjustment of divorced women was positively associated with income, social support from the family, holding favorable attitudes toward marital dissolution prior to divorce, and being the partner who initiated the divorce. In addition, older women showed some evidence of poorer adjustment than younger women. The findings highlight the importance of support for divorce adjustment.

Keywords: socio-demographic, adjustment, women, divorce

Procedia PDF Downloads 455
23475 Data-Driven Approach to Predict Inpatient's Estimated Discharge Date

Authors: Ayliana Dharmawan, Heng Yong Sheng, Zhang Xiaojin, Tan Thai Lian

Abstract:

To facilitate discharge planning, doctors are presently required to assign an Estimated Discharge Date (EDD) for each patient admitted to the hospital. This assignment is largely based on the doctor’s judgment, which can be difficult for cases that are complex or relatively new to the doctor. It is hypothesized that a data-driven approach would help doctors make accurate estimations of the discharge date. Making use of routinely collected data on inpatient discharges between January 2013 and May 2016, a predictive model was developed using machine learning techniques to predict the Length of Stay (and hence the EDD) of inpatients at the point of admission. The predictive performance of the model was compared to that of the clinicians using accuracy measures. Overall, the best performing model reduced the Average Squared Error (ASE) by 38% compared to the first EDD determined by the present method. Important predictors of the EDD include the provisional diagnosis code, the patient’s age, the attending doctor at admission, the medical specialty at admission, the accommodation type, and the mean length of stay of the patient in the past year. The predictive model can be used as a tool to accurately predict the EDD.
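Converting a predicted length of stay into an EDD is a simple date offset. A minimal sketch with hypothetical values, not figures from the study:

```python
from datetime import date, timedelta

def estimated_discharge_date(admission, predicted_los_days):
    """EDD = admission date plus the model's predicted length of stay, rounded to whole days."""
    return admission + timedelta(days=round(predicted_los_days))

# Hypothetical admission date and model output (days)
print(estimated_discharge_date(date(2016, 5, 1), 4.6))
```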

Keywords: inpatient, estimated discharge date, EDD, prediction, data-driven

Procedia PDF Downloads 161
23474 A Method to Estimate Wheat Yield Using Landsat Data

Authors: Zama Mahmood

Abstract:

With the increasing demands of food management, monitoring crop growth and forecasting yield well before harvest are very important. These days, yield assessment, together with monitoring of crop development and growth, is carried out with the help of satellite and remote sensing images. Studies using remote sensing data along with field survey validation have reported high correlations between vegetation indices and yield. With the development of remote sensing techniques, the detection of crops and their mechanisms using remote sensing data on regional or global scales has become a popular topic in remote sensing applications. Punjab, especially the southern Punjab region, is extremely favourable for wheat production, but measuring the exact amount of wheat production is a tedious job for farmers and workers using traditional ground-based measurements, whereas remote sensing can provide near-real-time information. In this study, using the Normalized Difference Vegetation Index (NDVI) derived from Landsat satellite images, the wheat yield was estimated for the 2013-2014 season for the agricultural area around Bahawalpur. The average wheat yield was found to be 35 kg/acre by analysing field survey data. The field survey data are in fair agreement with the NDVI values extracted from the Landsat images. A correlation between wheat production (tons) and the number of wheat pixels was also calculated, showing that the two are proportional to each other. A strong correlation between NDVI and wheat area was also found (R2=0.71), which demonstrates the effectiveness of remote sensing tools for crop monitoring and production estimation.
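The NDVI referred to above is a standard index, NDVI = (NIR − Red) / (NIR + Red). A minimal sketch of the computation (with made-up reflectance values, not actual Landsat bands) might look like:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    # Avoid division by zero on pixels where NIR + Red == 0 (e.g., no-data).
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy reflectance values for illustration only:
red_band = np.array([0.10, 0.20, 0.05])
nir_band = np.array([0.50, 0.25, 0.45])
print(ndvi(red_band, nir_band))  # values lie in [-1, 1]; higher = denser vegetation
```

For real Landsat data the red and near-infrared arrays would be read from the corresponding surface-reflectance bands before applying the same formula.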

Keywords: landsat, NDVI, remote sensing, satellite images, yield

Procedia PDF Downloads 321
23473 Rubber Crumbs in Alkali Activated Clay Roof Tiles at Low Temperature

Authors: Aswin Kumar Krishnan, Yat Choy Wong, Reiza Mukhlis, Zipeng Zhang, Arul Arulrajah

Abstract:

The continuous increase in vehicle uptake escalates the amount of waste rubber tyres, which need to be managed to avoid landfilling and stockpiling. The present research focused on the sustainable use of rubber crumbs in clay roof tiles. The properties of roof tiles composed of clay, rubber crumbs, NaOH, and Na₂SiO₃ with a 10% alkaline activator were studied. Tile samples were fabricated by heating the compacted mixtures at 50°C for 72 hours, followed by a higher heating temperature of 200°C for 24 hours. The effect of rubber crumb aggregates as a substitute for the raw clay materials was investigated by varying their concentration from 0% to 2.5%. X-ray diffraction (XRD) and scanning electron microscopy (SEM) analyses were conducted to study the phases and microstructures of the samples. The optimum rubber crumb concentration was found to be between 0.5% and 1%, while cracks and larger porosity appeared at higher crumb concentrations. Water absorption and compressive strength test results demonstrated that the rubber crumb and clay tiles satisfied the standard requirements for roof tiles.

Keywords: rubber crumbs, clay, roof tiles, alkaline activators

Procedia PDF Downloads 92
23472 Synthesis, Characterization and in vitro DNA Binding and Cleavage Studies of Cu(II)/Zn(II) Dipeptide Complexes

Authors: A. Jamsheera, F. Arjmand, D. K. Mohapatra

Abstract:

Small molecules that bind to specific sites along the DNA molecule are considered potential chemotherapeutic agents. Their role as mediators of key biological functions and their unique intrinsic properties make them particularly attractive therapeutic agents. With this in view, the novel dipeptide complexes Cu(II)-Val-Pro (1), Zn(II)-Val-Pro (2), Cu(II)-Ala-Pro (3) and Zn(II)-Ala-Pro (4) were synthesized and thoroughly characterized using different spectroscopic techniques, including elemental analyses, IR, NMR, ESI–MS and molar conductance measurements. A solution stability study carried out by UV–vis absorption titration over a broad range of pH proved the stability of the complexes in solution. In vitro DNA binding studies of complexes 1–4, carried out employing absorption, fluorescence, circular dichroism and viscometric studies, revealed that the complexes bind to DNA via groove binding. UV–vis titrations of 1–4 with the mononucleotides of interest, viz. 5´-GMP and 5´-TMP, were also carried out. The DNA cleavage activity of complexes 1 and 2 was ascertained by gel electrophoresis assay, which revealed that the complexes are good DNA cleavage agents and that the cleavage mechanism involves a hydrolytic pathway. Furthermore, the in vitro antitumor activity of complex 1 was screened against human cancer cell lines of different histological origin.

Keywords: dipeptide Cu(II) and Zn(II) complexes, DNA binding profile, pBR322 DNA cleavage, in vitro anticancer activity

Procedia PDF Downloads 335
23471 Happiness Levels and Factors Affect Happiness in Thailand: A Comparative Study of 4 Periods

Authors: Kalayanee Senasu

Abstract:

Research on happiness has been growing in recent decades. In the early stages, scholars were primarily concerned with establishing the validity of happiness measures and with exploring socio-economic correlates of happiness. More recent studies have focused on outcomes of happiness as well as the identification of happiness policies. This research investigates happiness levels and the influence of quality of life, in terms of mental health satisfaction, family satisfaction, community satisfaction, and work satisfaction, as determinants of happiness in Thailand during 2009-2014. The data were collected by the National Statistical Office of Thailand in the Socio-economic Survey, which included the Mental Health Survey, in 2009, 2010, and 2012, and in the Labor Force Survey, which included the Mental Health Survey, in August 2014. There were 59,430, 64,720, 54,736, and 9,997 respondents, respectively, all at least 15 years old, across the surveys during 2009-2014. Statistical analyses include both descriptive and inferential statistics. All research hypotheses were tested by means of hierarchical regression analysis. The results reveal that mean happiness during the studied period was quite high (in the range of 7.42 to 7.60 on a 0-10 scale) and that all model variables (mental health satisfaction, family satisfaction, community satisfaction, and work satisfaction) have positive effects on happiness in Thailand, with mental health satisfaction playing the most important role in predicting happiness. Further, the results indicate significant positive relationships between happiness and both education and income/expenses, while other socio-economic variables reveal varying relationships during the studied period.
Our results not only validate research findings from other countries but also verify the importance of quality of life (in terms of mental health satisfaction, family satisfaction, community satisfaction, and work satisfaction) as a factor in happiness for public policy makers. One conclusion stands firm in our study: happiness can be advanced in many ways. At the societal level, greater happiness can be achieved by policies that aim to promote good health, engaged family relationships, and high community and work quality. A contented population is advantaged in many ways over one that is not. Governments and policy makers should understand that happiness is a valuable and tangible aspect of the population for which they are responsible and should therefore include happiness issues in their political agendas.

Keywords: community satisfaction, family satisfaction, mental health satisfaction, work satisfaction, happiness, Thailand

Procedia PDF Downloads 318
23470 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges: increasing speed and data processing capacity while keeping devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of such facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques, or perfecting existing ones, would be a great advance for this industry. Installing a matrix of temperature sensors distributed throughout the structure of each server would provide the data required to obtain an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high, and they are expensive. Therefore, less intrusive techniques are employed in which each point characterizing the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the resulting Burgers' equation are the key to obtaining results with greater or lesser accuracy, according to the characteristic truncation error of each scheme.
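Since the abstract contrasts backward, forward and central discretizations of the Burgers' equation, a minimal one-dimensional sketch may help fix ideas. It advances the inviscid form u_t + u·u_x = 0 on a periodic grid with explicit (forward-Euler) time stepping; the grid, step sizes and initial condition are arbitrary choices, not taken from the paper:

```python
import numpy as np

def burgers_step(u, dx, dt, scheme="backward"):
    """One explicit time step of u_t + u*u_x = 0 on a periodic grid."""
    if scheme == "backward":          # first-order upwind (valid for u > 0)
        ux = (u - np.roll(u, 1)) / dx
    elif scheme == "central":         # second-order central difference
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    else:                             # first-order forward difference
        ux = (np.roll(u, -1) - u) / dx
    return u - dt * u * ux

x = np.linspace(0, 2 * np.pi, 100, endpoint=False)
u = 1.5 + np.sin(x)                   # positive everywhere, so upwind is stable
dx, dt = x[1] - x[0], 0.01
for _ in range(100):
    u = burgers_step(u, dx, dt, scheme="backward")
print(u.min(), u.max())
```

Swapping `scheme` illustrates the trade-off the abstract alludes to: the first-order schemes are more diffusive (larger truncation error) but robust, while the central scheme is more accurate yet prone to oscillations near steep gradients.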

Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile

Procedia PDF Downloads 155
23469 Potential of Detailed Environmental Data, Produced by Information and Communication Technology Tools, for Better Consideration of Microclimatology Issues in Urban Planning to Promote Active Mobility

Authors: Živa Ravnikar, Alfonso Bahillo Martinez, Barbara Goličnik Marušić

Abstract:

Climate change mitigation has been formally adopted and announced by countries across the globe, with cities targeting carbon neutrality through various more or less successful, systematic, and fragmentary actions. The article is based on the fact that environmental conditions affect human comfort and the usage of space. Urban planning can, with its sustainable solutions, not only support climate mitigation by reducing global warming at the planetary scale but also enable natural processes that, in the immediate vicinity, produce environmental conditions encouraging people to walk or cycle. The article draws attention to the importance of integrating climate considerations into urban planning, where detailed environmental data play a key role, enabling urban planners to improve or monitor environmental conditions on cycle paths. On the practical side, this paper tests a particular ICT tool, a prototype used for collecting environmental data. Data gathering was performed along the cycling lanes of Ljubljana (Slovenia), with the main objective of assessing the applicability of the tool's data within the planning of comfortable cycling lanes. The results suggest that such transportable devices for in-situ measurements can help a researcher interpret detailed environmental information characterized by fine granularity and precise spatial and temporal resolution. Data can be interpreted within human comfort zones, with graphical representation in the form of a map, linking the environmental conditions to a spatial context. The paper also provides preliminary results on the potential of such tools for identifying correlations between environmental conditions and different spatial settings, which can help urban planners prioritize interventions.
The paper contributes to multidisciplinary approaches, as it demonstrates the usefulness of such fine-grained data for better consideration of microclimatology in urban planning, which is a prerequisite for creating climate-comfortable cycling lanes that promote active mobility.

Keywords: information and communication technology tools, urban planning, human comfort, microclimate, cycling lanes

Procedia PDF Downloads 123
23468 Mechanistic Modelling to De-risk Process Scale-up

Authors: Edwin Cartledge, Jack Clark, Mazaher Molaei-Chalchooghi

Abstract:

The mixing in the crystallization step of an active pharmaceutical ingredient manufacturing process was studied via advanced modeling tools to enable a successful scale-up. A virtual representation of the vessel was created, and computational fluid dynamics was used to simulate multiphase flow and, thus, the mixing environment within the vessel. The study identified a significant dead zone underneath the impeller and found that increasing the impeller speed and power did not improve the mixing. A series of sensitivity analyses showed that, to improve mixing, the vessel had to be redesigned, and that optimal mixing could be obtained by adding two extra cylindrical baffles. The same two baffles from the simulated environment were then constructed and added to the process vessel. By identifying these potential issues before starting manufacture and modifying the vessel to ensure good mixing, this study mitigated the risk of a failed crystallization and potential batch disposal, which could have resulted in a significant loss of high-value material.

Keywords: active pharmaceutical ingredient, baffles, computational fluid dynamics, mixing, modelling

Procedia PDF Downloads 86
23467 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
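A toy sketch of the iterative selection loop described above may clarify the workflow. Here the novelty score is simply the distance to the nearest already-labeled feature vector, a stand-in for the U-shaped deep network the authors use to quantify unseen data; the feature vectors and batch size are invented:

```python
import numpy as np

def select_batch(unlabeled, labeled, k):
    """Return indices of the k unlabeled examples farthest from any labeled one."""
    # Pairwise distances: shape (n_unlabeled, n_labeled).
    d = np.linalg.norm(unlabeled[:, None, :] - labeled[None, :, :], axis=2)
    novelty = d.min(axis=1)                 # distance to nearest labeled example
    return np.argsort(novelty)[::-1][:k]    # k most novel first

rng = np.random.default_rng(1)
features = rng.normal(size=(50, 8))         # toy per-image feature vectors
labeled_idx = list(range(5))                # initial manually labeled images

# Alternate algorithmic selection and (simulated) manual labeling for 3 rounds.
for _ in range(3):
    mask = np.ones(len(features), dtype=bool)
    mask[labeled_idx] = False
    pool_idx = np.flatnonzero(mask)
    picks = select_batch(features[pool_idx], features[labeled_idx], k=5)
    labeled_idx += [int(pool_idx[i]) for i in picks]

print(len(labeled_idx))  # 5 initial + 3 rounds x 5 selected
```

Each round the algorithm proposes the images least similar to anything labeled so far, which is what drives the dataset toward diverse, balanced classes.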

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 126
23466 From Text to Data: Sentiment Analysis of Presidential Election Political Forums

Authors: Sergio V Davalos, Alison L. Watkins

Abstract:

User generated content (UGC), such as website posts, has data associated with it: the time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and summed over all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, enabling the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis, yielding a sentiment score for each post. Based on these sentiment scores, there are significant differences in content and sentiment between the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment; in the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. Sentiment also changed over time: for both elections, as the election drew closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates; in the case of Trump, the negative posts outnumbered even Clinton's highest post counts, which were positive. KNIME topic modeling was used to derive topics from the posts, and topics and keyword emphasis also changed over time: initially the political parties were the most referenced, and as the election drew closer the emphasis shifted to the candidates.
The SASA method proved to predict sentiment better than four other methods in SentiBench. The research derived sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
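Once each post carries a sentiment score (for example from SASA), the time trend reported above reduces to simple aggregation; a sketch with invented per-post scores, ordered from early to late in the campaign:

```python
import numpy as np

# Made-up per-post sentiment scores (not the actual forum data), early -> late.
scores = np.array([0.6, 0.2, 0.1, -0.3, -0.5, -0.4, -0.6])

cumulative = np.cumsum(scores)        # running sentiment over the campaign
share_positive = np.mean(scores > 0)  # fraction of positive posts

print(cumulative)
print(round(float(share_positive), 2))
```

A cumulative sum that starts positive and ends negative, as in this example, mirrors the pattern the study observed as each election drew closer.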

Keywords: sentiment analysis, text mining, user generated content, US presidential elections

Procedia PDF Downloads 177
23465 CVOIP-FRU: Comprehensive VoIP Forensics Report Utility

Authors: Alejandro Villegas, Cihan Varol

Abstract:

Voice over Internet Protocol (VoIP) products are an emerging technology that can contain forensically important information about criminal activity. Even without user names and passwords, this forensically important information can still be gathered by investigators. Although a few VoIP forensic investigative applications are available in the literature, most of them are designed specifically to collect evidence from the Skype product. Therefore, to assist law enforcement in collecting forensically important information from a variety of Betamax VoIP tools, the CVOIP-FRU framework was developed. CVOIP-FRU provides a data gathering solution that retrieves usernames, contact lists, and call and SMS logs from Betamax VoIP products. It is a scripting utility that searches for data within the registry, logs, and user roaming profiles on Windows and Mac OSX operating systems, and subsequently parses the output into readable text and HTML formats. One advantage of CVOIP-FRU over other applications is that, thanks to its intelligent data filtering capabilities and cross-platform scripting back end, it can be extended to cover other VoIP solutions as well. Overall, this paper reveals the exploratory analysis performed to find the key data paths and locations, the development stages of the framework, and the empirical testing and quality assurance of CVOIP-FRU.

Keywords: betamax, digital forensics, report utility, VoIP, VoIPBuster, VoIPWise

Procedia PDF Downloads 284