Search results for: geospatial data science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26074

24934 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction

Authors: Sajid Abas, Jon Pyo Hong, Jung-Ryun Le, Seungryong Cho

Abstract:

Computed tomography and laminography are heavily investigated within compressive sensing based image reconstruction frameworks to reduce the dose delivered to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better-quality images. However, the effects of the sampled data's properties on the quality of the reconstructed image, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that in a compressive sensing based image reconstruction framework, image quality depends mainly on data incoherence when the data are uniformly sampled.
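The sparsity-promoting iterative reconstruction the abstract refers to can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a generic iterative shrinkage-thresholding (ISTA) solver applied to a toy underdetermined system, with all numbers invented for illustration:

```python
import random

def ista(A, y, lam=0.1, step=0.01, iters=800):
    """Iterative shrinkage-thresholding: recover a sparse x from y = A x.
    A is a list of rows; a minimal stand-in for the iterative compressive
    sensing solvers used in low-dose CT/laminography reconstruction."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(y))]
        # gradient step on 0.5 * ||A x - y||^2
        g = [sum(A[i][j] * r[i] for i in range(len(y))) for j in range(n)]
        x = [x[j] - step * g[j] for j in range(n)]
        # soft-threshold promotes sparsity (the compressive-sensing prior)
        x = [max(abs(v) - step * lam, 0.0) * (1 if v > 0 else -1) for v in x]
    return x

# Toy problem: 4 measurements, 5 unknowns, true signal is 2-sparse
random.seed(0)
true_x = [0.0, 2.0, 0.0, 0.0, -1.5]
A = [[random.gauss(0, 1) for _ in range(5)] for _ in range(4)]
y = [sum(A[i][j] * true_x[j] for j in range(5)) for i in range(4)]
x_hat = ista(A, y)
```

The sampling-density and incoherence questions studied in the paper correspond, in this miniature setting, to how many rows A has and how correlated those rows are.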

Keywords: computed tomography, computed laminography, compressive sensing, low-dose

Procedia PDF Downloads 450
24933 Breaking the Stained-Glass Ceiling: Personality Traits and Ambivalent Sexism in Shaping Gender Income Equality

Authors: Shiza Shahid, Saba Shahid, Kenji Noguchi, Raegan Bishop, Elena Stepanova

Abstract:

According to data from the U.S. Census Bureau, in 2020 women in the United States who worked full-time, year-round earned only 82 cents for every dollar earned by men who did the same. This study examined how personality traits (extraversion, agreeableness, conscientiousness, emotional stability, and openness to experience) interact with ambivalent sexism to influence acceptance of gender income inequality. Using a quantitative approach, the study collected data from a sample of N = 150 students through the Social Science Online Subject Pool (SONA). The study predicted that (a) extraversion and openness to experience would be positively related to acceptance of gender income inequality, while emotional stability and agreeableness would be negatively related to it, and (b) individuals who scored higher on measures of hostile sexism would show greater acceptance of gender income inequality than individuals who scored higher on measures of benevolent sexism. The results were in line with the study's predictions. This study underscores the importance of addressing the underlying factors contributing to attitudes toward gender income inequality and contributes to ongoing efforts to achieve gender equality, which is important for promoting economic well-being.

Keywords: gender income inequality, ambivalent sexism, personality traits, sustainable development goals

Procedia PDF Downloads 40
24932 Fuzzy Wavelet Model to Forecast the Exchange Rate of IDR/USD

Authors: Tri Wijayanti Septiarini, Agus Maman Abadi, Muhammad Rifki Taufik

Abstract:

The IDR/USD exchange rate is an indicator for analyzing the Indonesian economy: it is an important factor because it strongly affects the economy as a whole, so analysis of exchange-rate data is needed. Decomposing the exchange-rate series into frequency and time components can help the government monitor the Indonesian economy. The fuzzy wavelet method used here is effective at identifying such behavior, gives highly accurate results, and has a simple structure. In this paper, the exchange-rate data used are weekly data from December 17, 2010 to November 11, 2014.
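The wavelet decomposition step can be sketched with one level of the Haar discrete wavelet transform, which splits a series into a low-frequency approximation and a high-frequency detail. This is a generic sketch, not the paper's specific model (the fuzzy Mamdani part is omitted), and the rate values below are invented:

```python
def haar_dwt(series):
    """One level of the Haar discrete wavelet transform: split a series
    into a smooth approximation (low frequency) and detail (high frequency)."""
    if len(series) % 2:
        series = series + [series[-1]]  # pad to even length
    s2 = 2 ** 0.5
    approx = [(a + b) / s2 for a, b in zip(series[::2], series[1::2])]
    detail = [(a - b) / s2 for a, b in zip(series[::2], series[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfectly reconstructs the (padded) series."""
    s2 = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s2, (a - d) / s2]
    return out

rates = [9023.0, 9031.0, 9005.0, 8998.0, 9050.0, 9042.0]  # illustrative IDR/USD levels
approx, detail = haar_dwt(rates)
rec = haar_idwt(approx, detail)
```

In a fuzzy wavelet model, a forecaster would be fitted to the `approx` and `detail` components separately and the forecasts recombined through the inverse transform.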

Keywords: the exchange rate, fuzzy mamdani, discrete wavelet transforms, fuzzy wavelet

Procedia PDF Downloads 546
24931 Use of Machine Learning in Data Quality Assessment

Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho

Abstract:

Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise about how to establish and maintain data quality, which is now treated as a product to be defined, measured, analyzed, and improved to meet the needs of its consumers: those who use the data in decision making and company strategy. Information of low quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions, and poor risk management. Identifying, evaluating, and selecting data sources of sufficient quality has become a costly task for users, since the sources do not publish information about their own quality. Traditional data quality control methods are based on user experience or business rules, which limits performance and slows down the process with less than desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value for companies and users. In this study, machine learning is applied to data quality analysis on different datasets, comparing the performance of the techniques across the dimensions of quality assessment. As a result, we were able to rank the approaches used and build a system that carries out data quality assessment automatically.
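One quality dimension mentioned throughout this listing, completeness, can be turned into a numeric feature that an ML ranker could consume. The sketch below is a generic illustration with invented records and field names, not the authors' system:

```python
def quality_features(records, fields):
    """Per-field completeness: fraction of records with a non-missing value.
    Scores like these can serve as features for a model that ranks data sources."""
    features = {}
    for f in fields:
        present = sum(1 for r in records if r.get(f) not in (None, "", "NA"))
        features[f] = present / len(records)
    return features

# Hypothetical source with partially missing contact and age fields
source = [
    {"id": 1, "email": "a@x.com", "age": 34},
    {"id": 2, "email": "",        "age": None},
    {"id": 3, "email": "c@x.com", "age": 29},
    {"id": 4, "email": "d@x.com", "age": "NA"},
]
scores = quality_features(source, ["id", "email", "age"])
# scores == {"id": 1.0, "email": 0.75, "age": 0.5}
```

Other quality dimensions (accuracy, consistency, timeliness) would be encoded as further features in the same vector before training.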

Keywords: machine learning, data quality, quality dimension, quality assessment

Procedia PDF Downloads 132
24930 A Comparative Study of Methanol and Water Extracts of Chuanxiong Rhizoma: A Fingerprint Analysis

Authors: Li Chun Zhao, Zhi Chao Hu, Xi Qiang Liu, Man Lai Lee, Chak Shing Yeung, Man Fei Xu, Yuen Yee Kwan, Alan H. M. Ho, Nickie W. K. Chan, Bin Deng, Zhong Zhen Zhao, Min Xu

Abstract:

Background: Chuanxiong Rhizoma (Chuanxiong, CX) is one of the most frequently used herbs in Chinese medicine because of its wide therapeutic effects, such as vasorelaxation and anti-inflammation. Aim: The purposes of this study are (1) to perform non-targeted and targeted analyses of CX methanol extract and water extract and compare the present data with previously published LC-MS and GC-MS fingerprints; and (2) to examine the difference between CX methanol extract and water extract, in order to preliminarily evaluate whether current compound markers derived from methanol extracts of crude CX materials are suitable for quality control of CX water extract. Method: CX methanol extract was prepared according to the Hong Kong Chinese Materia Medica Standards. CX water extract was prepared by boiling with pure water three times (one hour each). UHPLC-Q-TOF-MS/MS fingerprint analysis was performed on a C18 column (1.7 µm, 2.1 × 100 mm) with an Agilent 1290 Infinity system. Experimental data were analyzed with Agilent MassHunter software. A database was established based on 13 published LC-MS and GC-MS CX fingerprint analyses. In total, 18 targeted compounds in the database were selected as markers to compare the present data with previous data; these markers were also used to compare the CX methanol and water extracts. Result: (1) Non-targeted analysis identified 133 compounds in CX methanol extract and 325 compounds in CX water extract, more than double the number in the methanol extract. (2) Targeted analysis further identified 9 of the 18 targeted compounds in CX methanol extract and 12 of 18 in CX water extract, showing a lower loss rate for the water extract compared with the methanol extract. (3) Senkyunolide A (+1578%), ferulic acid (+529%), and senkyunolide H (+169%) were significantly higher in the water extract than in the methanol extract. (4) Other bioactive compounds, such as tetramethylpyrazine, were found only in CX water extract. Conclusion: Many compounds in both CX methanol and water extracts not seen in previously published reports were found using UHPLC-Q-TOF-MS/MS analysis. A new standard reference including non-targeted compound profiling and targeted markers, designed specifically for quality control of CX water extract (herbal decoction), should be established in the future. (This project was supported by Hong Kong Baptist University (FRG2/14-15/109) and the Natural Science Foundation of Guangdong Province (2014A030313414).)

Keywords: Chuanxiong rhizoma, fingerprint analysis, targeted analysis, quality control

Procedia PDF Downloads 480
24929 The Impact of Two Factors on EFL Learners' Fluency

Authors: Alireza Behfar, Mohammad Mahdavi

Abstract:

Nowadays, in light of progress in science, technology, and communications, mastering international languages is a clear necessity. In learning any language as a second language, progressing to a desirable level of speaking is important for nearly all learners. In this research, we examine how preparation influences L2 learners' oral fluency with respect to individual differences in working memory capacity. The participants were sixty-one advanced L2 learners, including MA students of TEFL at Isfahan University as well as instructors teaching English at the Sadr Institute in Isfahan. Data collection consisted of two phases, a working memory test (reading span test) and a picture description task, with a one-month interval between them. Speech was elicited through a generation task in which participants discussed four topics arranged in two pairs: each pair included one simple and one complex topic, performed with planning time and without planning time, respectively. Each topic was accompanied by several relevant pictures. L2 fluency was assessed in relation to preparation, and the data were analyzed in terms of the number of syllables, the number of silent pauses, and the mean length of pauses produced per minute. The study offers implications for strategies to improve learners' fluency and working memory.
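The three fluency measures named in the abstract can be computed mechanically once a recording has been annotated. The sketch below assumes a hypothetical annotation format (a syllable count, a list of silent-pause lengths in seconds, and the total speaking duration); it is an illustration, not the authors' scoring procedure:

```python
def fluency_metrics(syllable_count, pause_lengths, duration_sec):
    """Compute the three measures described in the study: speech rate
    (syllables/min), silent pauses per minute, and mean pause length (s)."""
    minutes = duration_sec / 60.0
    return {
        "syllables_per_min": syllable_count / minutes,
        "pauses_per_min": len(pause_lengths) / minutes,
        "mean_pause_sec": (sum(pause_lengths) / len(pause_lengths)
                           if pause_lengths else 0.0),
    }

# Hypothetical two-minute monologue: 180 syllables, four silent pauses
m = fluency_metrics(syllable_count=180,
                    pause_lengths=[0.8, 1.2, 0.5, 1.5],
                    duration_sec=120)
# 90 syllables/min, 2 pauses/min, 1.0 s mean pause
```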

Keywords: two factors, fluency, working memory capacity, preparation, L2 speech production, reading span test, picture description

Procedia PDF Downloads 211
24928 Nuclear Decay Data Evaluation for 217Po

Authors: S. S. Nafee, A. M. Al-Ramady, S. A. Shaheen

Abstract:

Evaluated nuclear decay data for the 217Po nuclide are presented in this work. These data include recommended values for the half-life T1/2 and for the α-, β−-, and γ-ray emission energies and probabilities. Decay data from 221Rn α decay and 217Bi β− decay are presented. Q(α) has been updated based on the recently published Atomic Mass Evaluation, AME2012. In addition, log ft values were calculated using the Logft program from the ENSDF evaluation package, and the total internal conversion coefficients were calculated using the BrIcc program. Recommended values for the multipolarities have been assigned based on recent measurements that yield a better intensity balance at the 254 keV and 264 keV gamma transitions.

Keywords: nuclear decay data evaluation, mass evaluation, total conversion coefficients, atomic mass evaluation

Procedia PDF Downloads 415
24927 Implementation of an Undergraduate Integrated Biology and Chemistry Course

Authors: Jayson G. Balansag

Abstract:

An integrated biology and chemistry (iBC) course for freshman college students was developed at the University of Delaware. The course prepares students to (1) become interdisciplinary thinkers in the field of biology and (2) collaborate with others from multiple disciplines in the future. This paper documents and describes the implementation of the course. Information gathered from the literature, classroom observations, and interviews was used to carry out the purpose of this paper. The major goal of the iBC course is to align concepts between biology and chemistry so that students can draw on science concepts from both disciplines and apply them in their interdisciplinary research. The course is offered every fall and spring semester; students enrolled in Biology are also enrolled in Chemistry during the same semester. The iBC consists of lectures, laboratories, studio sessions, and workshops, and is taught by faculty from the biology and chemistry departments. In addition, preceptors, graduate teaching assistants, and studio fellows facilitate the laboratory and studio sessions; these roles are interdependent. The iBC can serve as a model for higher education institutions that wish to implement an integrated biology and chemistry course.

Keywords: integrated biology and chemistry, integration, interdisciplinary research, new biology, undergraduate science education

Procedia PDF Downloads 225
24926 Geographic Information System Using Google Fusion Table Technology for the Delivery of Disease Data Information

Authors: I. Nyoman Mahayasa Adiputra

Abstract:

Health data can be useful for data analysis; one example is disease data, which are usually plotted geographically according to the area where they were collected, in this case the city of Denpasar, Bali. Disease reports are still published in tabular form, and disease information has not yet been mapped in a GIS. In this research, disease information for Denpasar city is digitized as a geographic information system with the district as the smallest administrative area. Denpasar City consists of four districts: North Denpasar, East Denpasar, West Denpasar, and South Denpasar. We use Google Fusion Table technology for the map digitization process, a technology convenient both for the administrator and for the recipient of the information. On the administrator side, disease data entry can be done easily and quickly; on the receiving side, the resulting GIS application can be published as a website-based application accessible anywhere and at any time. The results of this study fall into two parts: (1) geolocation of Denpasar and all of its districts, where digitizing the map of Denpasar city produces a polygon geolocation of each district; these results can be reused in subsequent GIS studies of the same administrative area; and (2) dengue fever mapping for 2014 and 2015, using dengue fever case data taken from the Denpasar Health Department profile reports of 2015 and 2016. This mapping can be useful for analyzing the spread of dengue hemorrhagic fever in the city of Denpasar.
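Before a choropleth layer can be published, the tabular disease reports must be aggregated into per-district counts that join to the district polygons. The sketch below is a generic aggregation with invented case records, independent of the specific mapping technology:

```python
from collections import Counter

# Hypothetical dengue case records, one per reported case
cases = [
    {"year": 2014, "district": "North Denpasar"},
    {"year": 2014, "district": "South Denpasar"},
    {"year": 2015, "district": "South Denpasar"},
    {"year": 2015, "district": "South Denpasar"},
    {"year": 2015, "district": "East Denpasar"},
]

def cases_per_district(records, year):
    """Aggregate case counts per district: the attribute table that a
    choropleth layer (fusion table or GeoJSON polygon set) is joined to."""
    return Counter(r["district"] for r in records if r["year"] == year)

counts_2015 = cases_per_district(cases, 2015)
# Counter({'South Denpasar': 2, 'East Denpasar': 1})
```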

Keywords: geographic information system, Google fusion table technology, delivery of disease data information, Denpasar city

Procedia PDF Downloads 113
24925 Inclusive Practices in Health Sciences: Equity Proofing Higher Education Programs

Authors: Mitzi S. Brammer

Abstract:

Given that the cultural make-up of programs of study in institutions of higher learning is becoming increasingly diverse, much has been written about cultural diversity from a university-level perspective. However, there are little data in the way of specific programs and how they address inclusive practices when teaching and working with marginalized populations. This research study aimed to discover baseline knowledge and attitudes of health sciences faculty, instructional staff, and students related to inclusive teaching/learning and interactions. Quantitative data were collected via an anonymous online survey (one designed for students and another designed for faculty/instructional staff) using a web-based program called Qualtrics. Quantitative data were analyzed amongst the faculty/instructional staff and students, respectively, using descriptive and comparative statistics (t-tests). Additionally, some participants voluntarily engaged in a focus group discussion in which qualitative data were collected around these same variables. Collecting qualitative data to triangulate the quantitative data added trustworthiness to the overall data. The research team analyzed collected data and compared identified categories and trends, comparing those data between faculty/staff and students, and reported results as well as implications for future study and professional practice.

Keywords: inclusion, higher education, pedagogy, equity, diversity

Procedia PDF Downloads 47
24924 An Analysis of Sequential Pattern Mining on Databases Using Approximate Sequential Patterns

Authors: J. Suneetha, Vijayalaxmi

Abstract:

Sequential pattern mining applies data mining methods to large data repositories to extract usage patterns. Sequential pattern mining methodologies are used to analyze data and identify patterns, and the resulting patterns have been used to build systems that recommend based on previously observed behavior, make predictions, improve the usability of systems, detect events, and in general help in making strategic product decisions. In this paper, we analyze the performance of approximate sequential pattern mining, defined as identifying patterns approximately shared by many sequences. Approximate sequential patterns can effectively summarize and represent a database by identifying the underlying trends in the data. We conducted an extensive and systematic performance study over synthetic and real data; the results demonstrate that ApproxMAP is effective and scalable in mining large sequence databases with long patterns.
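The idea of a pattern "approximately shared by many sequences" can be shown with a deliberately simplified consensus step. ApproxMAP itself uses clustering and multiple alignment; the stand-in below merely keeps items supported by a threshold fraction of the sequences, ordered by average position, and assumes each item occurs at most once per sequence. All data are invented:

```python
def approx_pattern(sequences, min_support=0.6):
    """Very simplified stand-in for ApproxMAP's consensus step: keep items
    shared by at least min_support of the sequences, ordered by their
    average position, to summarize the set with one approximate pattern.
    Assumes items are unique within each sequence."""
    n = len(sequences)
    positions = {}
    for seq in sequences:
        for idx, item in enumerate(seq):
            positions.setdefault(item, []).append(idx)
    frequent = [(sum(p) / len(p), item)
                for item, p in positions.items() if len(p) / n >= min_support]
    return [item for _, item in sorted(frequent)]

# Hypothetical clickstream sequences
clicks = [
    ["home", "search", "item", "cart", "pay"],
    ["home", "search", "item", "pay"],
    ["home", "item", "cart"],
    ["search", "item", "pay"],
]
pattern = approx_pattern(clicks)
# ['home', 'search', 'item', 'pay'] — 'cart' falls below 60% support
```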

Keywords: multiple data, performance analysis, sequential pattern, sequence database scalability

Procedia PDF Downloads 317
24923 Medical Knowledge Management from the Integration of Heterogeneous Data to Knowledge Exploitation in a Decision-Making System

Authors: Nadjat Zerf Boudjettou, Fahima Nader, Rachid Chalal

Abstract:

Knowledge management acquires and represents knowledge relevant to a domain, a task, or a specific organization in order to facilitate its access, reuse, and evolution. This usually means building, maintaining, and evolving an explicit representation of knowledge. The next step is to provide access to that knowledge, that is, to spread it so as to enable effective use. Knowledge management in the medical field aims to improve the performance of the medical organization by allowing individuals in the care facility (doctors, nurses, paramedics, etc.) to capture, share, and apply collective knowledge in order to make optimal decisions in real time. In this paper, we propose a knowledge management approach based on the integration of heterogeneous medical data by creating a data warehouse, on a technique for extracting knowledge from medical data by choosing a data mining technique, and finally on a technique for exploiting that knowledge in a case-based reasoning system.
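The retrieval step of a case-based reasoning system can be sketched as nearest-neighbor search over stored cases. The feature names, weights, and outcomes below are hypothetical and purely illustrative:

```python
def retrieve(case_base, query, weights):
    """Case-based reasoning retrieval: return the stored case whose feature
    vector is closest (weighted Euclidean distance) to the new query case."""
    def dist(case):
        return sum(w * (case["features"][k] - query[k]) ** 2
                   for k, w in weights.items()) ** 0.5
    return min(case_base, key=dist)

# Hypothetical past patient cases with the decision that was taken
cases = [
    {"features": {"age": 64, "glucose": 8.1}, "outcome": "treatment A"},
    {"features": {"age": 35, "glucose": 5.2}, "outcome": "treatment B"},
    {"features": {"age": 58, "glucose": 7.6}, "outcome": "treatment A"},
]
best = retrieve(cases, {"age": 60, "glucose": 7.9}, {"age": 0.01, "glucose": 1.0})
# the nearest stored case suggests its outcome for the new patient
```

In the proposed architecture, the case base would be populated from the data warehouse and the similarity weights informed by the data mining step.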

Keywords: data warehouse, data mining, knowledge discovery in database, KDD, medical knowledge management, Bayesian networks

Procedia PDF Downloads 376
24922 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza

Abstract:

Background: Data are key to supporting health service delivery, as stakeholders, including NGOs, rely on them for effective service delivery, decision-making, and system strengthening. Several studies have raised debate about the quality of data from national health management information systems (HMIS) in sub-Saharan Africa. This limits the use of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate the improvement in data quality of the Neno district HMIS over a 4-year period (2018-2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine reporting rates, followed by in-depth investigation using key informant interviews (KIIs) and focus group discussions (FGDs). We used the WHO desk review module to assess the quality of HMIS data in Neno district captured from 2018 to 2021. The metrics assessed were the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports, and timeliness as the percentage of reports submitted within the expected reporting window. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016, analyzed the demographics of key informants in Power BI, and developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we drew healthcare workers' perceptions, the interventions implemented, and suggestions for improvement. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% before and 98.1% after the intervention, while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 78.8% in 2018, 88% in 2019, 96.3% in 2020, and 99.9% in 2021 (p < 0.004). Timeliness had been declining until it improved in 2021: 68.4% in 2018, 68.3% in 2019, 67.1% in 2020, and 81% in 2021 (p < 0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, and timeliness from 68% to 81%. Sixty-five percent of reports consistently met the national standard of at least 90% for completeness, but only 24% did so for timeliness; thirty-two percent of reports met the national standard. Only 9% improved on both completeness and timeliness: the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard in timeliness, and only one did not in completeness. Factors associated with improvement included better communications and reminders through internal channels, data quality assessments, checks, and reviews; decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: The findings suggest that data quality in the district HMIS has improved following collaborative efforts. We recommend maintaining such initiatives to identify the remaining quality gaps, and that results be shared publicly to support increased use of data. These results can inform the Ministry of Health and its partners about these interventions and guide initiatives for improving data quality.
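The two rates at the heart of the study can be computed from a simple report log. The sketch below uses invented submission records and a simplified deadline convention (day of month); it mirrors the definitions used above, not the WHO desk review tool itself:

```python
def reporting_rates(reports):
    """Completeness: share of expected reports actually received.
    Timeliness: share of expected reports received by their deadline."""
    expected = len(reports)
    received = [r for r in reports if r["received"] is not None]
    on_time = [r for r in received if r["received"] <= r["deadline"]]
    return 100 * len(received) / expected, 100 * len(on_time) / expected

# Day-of-month for deadline / actual receipt; None = never submitted
monthly = [
    {"deadline": 5, "received": 3},
    {"deadline": 5, "received": 5},
    {"deadline": 5, "received": 9},
    {"deadline": 5, "received": None},
]
completeness, timeliness = reporting_rates(monthly)
# completeness = 75.0 %, timeliness = 50.0 %
```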

Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making

Procedia PDF Downloads 65
24921 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction

Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour

Abstract:

In this work, we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data acquired by airborne scanners over densely built urban areas. On the one hand, high-resolution image data corrupted by noise from lossy compression are appropriately smoothed while preserving optical edges; on the other, low-resolution LiDAR data in the form of a normalized Digital Surface Map (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and the upsampling capabilities, using synthetic RGB-z data, show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Applying the proposed methodology to the 3D reconstruction of buildings in a pilot region of Athens, Greece, results in a significant visual improvement of the 3D building block model.
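The edge-preserving behavior of mean shift comes from averaging only over samples inside a bandwidth window, so values across a sharp discontinuity exert no pull. The paper applies a joint mean shift in the RGB-z domain; the sketch below shows the same mechanism in one dimension with a flat kernel and invented samples:

```python
def mean_shift_point(x, data, bandwidth, iters=50):
    """Shift x to the mean of data points within `bandwidth` until it
    settles on a local mode; edge-preserving because points across an
    edge fall outside the window and do not pull the estimate."""
    for _ in range(iters):
        window = [d for d in data if abs(d - x) <= bandwidth]
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-9:
            break
        x = new_x
    return x

# Two 'flat regions' separated by an edge, plus slight noise
samples = [10.0, 10.2, 9.9, 10.1, 30.0, 30.2, 29.8, 30.1]
smoothed = [mean_shift_point(s, samples, bandwidth=2.0) for s in samples]
# each sample converges to its own region's mean; the edge stays sharp
```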

Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift

Procedia PDF Downloads 302
24920 Science of Social Work: Recognizing Its Existence as a Scientific Discipline by a Method Triangulation

Authors: Sandra Mendes

Abstract:

Social work has, over time, encountered a wide variety of demands in its field of action, supplying frameworks of knowledge and praxis. Over the years, we have observed a transformation of society and, consequently, of the public with whom social work practitioners deal. Both training and the profession have had to adapt and readapt their ways of working, tying theory to action while action gives rise to new theories. The theoretical grounding of this subject rests on classical authors from the social sciences and on contemporary authors of social work. Both emphasize, in the design of social work, a function of integration and social cohesion, creating a culture of action and theory and attributing to its method a relevant function that promotes social change in various dimensions of individual and collective life, as well as scientific knowledge. On the other hand, it is assumed that social work, through its professionalism and through the academy, is now closer to distinguishing itself from the other social sciences as an autonomous scientific field, while nevertheless standing at the center of power struggles. This paper seeks to fill the gap in the social work literature concerning the study of the scientific field of this area of knowledge.

Keywords: field theory, knowledge, science, social work

Procedia PDF Downloads 328
24919 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one indication of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires documented standard operating procedures (SOPs). Hence, a documented SOP for the geographic information system data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed, as they are by the Asset Information & GIS department. This paper describes in detail the GIS data submission process and the journey to develop the current process. The methodology used to develop the process rests on three main pillars: system and end-user requirements; risk evaluation; and data availability and accuracy. The outcome described in this paper is a dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, thanks to this process, GIS has been, and is, ready to be integrated with other systems and to serve as the source of data for all OETC users. Some decisions related to issuing no-objection certificates (NOC) and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based on GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release and reporting, and data alterations also helped to reduce the missing attributes in GIS transmission data. A considerable difference in geodatabase (GDB) completeness percentage was observed between 2017 and 2021. Overall, through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.
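The geodatabase completeness percentage tracked year on year can be defined as the share of required attribute slots that are filled across all GIS features. The sketch below uses hypothetical transmission-asset attributes; it illustrates one plausible way to compute such a metric, not OETC's actual procedure:

```python
def gdb_completeness(features, required):
    """Percent of required attribute slots that are filled across all GIS
    features: one way to track geodatabase completeness over time."""
    filled = sum(1 for f in features for a in required
                 if f.get(a) not in (None, ""))
    return 100.0 * filled / (len(features) * len(required))

# Hypothetical transmission-line features with partially missing attributes
towers = [
    {"asset_id": "T-001", "voltage_kv": 132,  "commission_date": "2011-04-02"},
    {"asset_id": "T-002", "voltage_kv": 132,  "commission_date": ""},
    {"asset_id": "T-003", "voltage_kv": None, "commission_date": "2015-09-18"},
]
pct = gdb_completeness(towers, ["asset_id", "voltage_kv", "commission_date"])
# 7 of 9 slots filled -> about 77.8 %
```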

Keywords: asset management ISO55001, standard procedures process, governance, geodatabase, NOC, CMMS

Procedia PDF Downloads 186
24918 Importance of Ethics in Cloud Security

Authors: Pallavi Malhotra

Abstract:

This paper examines the importance of ethics in cloud computing. In modern society, cloud computing offers individuals and businesses virtually unlimited space for storing and processing data. Much of the data stored in the cloud by users such as banks, doctors, architects, engineers, lawyers, consulting firms, and financial institutions requires a high level of confidentiality and safeguarding. Cloud computing offers centralized storage and processing of data, which has contributed immensely to the growth of businesses and improved sharing of information over the internet. However, the accessibility and management of data and servers by a third party raise concerns about the privacy of clients' information and possible manipulation of the data by third parties. This paper suggests approaches that various stakeholders should take to address the ethical issues involved in cloud-computing services. Ethical education and training are key for all stakeholders involved in handling the data and information stored or processed in the cloud.

Keywords: IT ethics, cloud computing technology, cloud privacy and security, ethical education

Procedia PDF Downloads 309
24917 Estimating Big Five Personality Expressions with a Tiered Information Framework

Authors: Laura Kahn, Paul Rodrigues, Onur Savas, Shannon Hahn

Abstract:

An empirical understanding of an individual's personality expression can have a profound impact on organizations seeking to strengthen team performance and improve employee retention, since a team's personality composition can affect overall performance. Creating a tiered information framework that leverages proxies for a user's social context together with lexical and linguistic content provides insight into location-specific personality expression. We leverage the layered framework to examine domain-specific, psychological, and lexical cues within social media posts, and we apply DistilBERT natural language transfer learning models to real-world data to examine the Big Five personality expressions of people in Science, Technology, Engineering, and Math (STEM) fields.

Keywords: big five, personality expression, social media analysis, workforce development

Procedia PDF Downloads 124
24916 Evaluation of Practicality of On-Demand Bus Using Actual Taxi-Use Data through Exhaustive Simulations

Authors: Jun-ichi Ochiai, Itsuki Noda, Ryo Kanamori, Keiji Hirata, Hitoshi Matsubara, Hideyuki Nakashima

Abstract:

We conducted exhaustive simulations for data assimilation and for evaluating service quality under various settings in a new shared transportation system called SAVS. Computational social simulation is a key technology for designing recent social services like SAVS as a new transportation service, and one open issue in SAVS was to determine the service scale through social simulation. Using our exhaustive simulation framework, OACIS, we performed data assimilation and evaluated the effects of SAVS based on actual taxi-use data from Tajimi city, Japan. Finally, we obtained the conditions for realizing the new service at a reasonable service quality.

Keywords: on-demand bus system, social simulation, data assimilation, exhaustive simulation

Procedia PDF Downloads 297
24915 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small which leads to most information being neglected. Also, the data might have a strong distortion if it is complete. In addition, the reason that data is missing might itself also contain information, which is however ignored with that approach. An interesting issue is, therefore, if for economic analyses such as the one at hand, there is an added value by using the whole data set with the imputed missing values compared to using the usually small percentage of complete data (baseline). Also, it is interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural networks techniques are applied. By training the model iteratively on the imputed data and, thereby, including the information of all data into the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performances of the algorithms are measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. 
After the optimal parameter set has been found for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not contain all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also considerably more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
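The survival-function step can be illustrated with a simple empirical version over willingness-to-pay samples. This is a hedged sketch: the paper fits parametric price distributions per characteristic, for which this empirical stand-in is only an approximation:

```python
import numpy as np

def empirical_survival(wtp_samples, price_grid):
    """S(p) = share of searchers willing to pay at least price p, i.e. an
    estimate of the probability that a property listed at p meets demand."""
    wtp = np.asarray(wtp_samples, float)
    return np.array([(wtp >= p).mean() for p in price_grid])
```

Evaluating such survival functions on the imputed data sets versus the complete-records baseline is exactly the comparison the abstract reports.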

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 267
24914 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials

Authors: Rajesh Kumar G

Abstract:

A challenging aspect of any clinical trial is to plan the study design carefully so that it meets the study objective in an optimal way, and to validate the assumptions made while designing the protocol. A pediatric study adds the challenges of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, little historical data is available for pediatrics, yet such data is required to validate the assumptions used in planning pediatric trials. Typically, a pediatric study is initiated as soon as approval is obtained to market the drug for adults, so the pediatric study can be well planned using the historical information from the adult study together with available pediatric pilot data or simulated pediatric data. Generalizing a historical adult study to a new pediatric study is a tedious task; it becomes feasible, however, by integrating various statistical techniques and exploiting the advantages of a hybrid study design, which helps achieve the study objective smoothly even in the presence of many constraints. This research paper explains how a hybrid study design can be planned together with an integrated SEV technique (Simulation, Estimation, Validation): the planned study data are simulated, the desired estimates are obtained by borrowing adult data and applying Bayesian methods, and the design assumptions are thereby validated. This method of validation improves the accuracy of the data analysis, ensures that results are as valid and reliable as possible, and allows informed decisions to be made well ahead of study initiation. Based on the collected data, the technique offers insight into best practices for using historical study data and simulated data alike.
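A minimal sketch of the estimation step, borrowing adult information through a conjugate-normal prior whose influence can be discounted (a power-prior-style weight). The weight parameter and all names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def borrowed_posterior(child_data, adult_mean, adult_se, borrow_weight=0.5):
    """Precision-weighted conjugate-normal update of the treatment effect.

    borrow_weight in [0, 1] discounts the adult-study prior: 0 ignores
    the adult data entirely, 1 borrows it at full strength.
    """
    child_data = np.asarray(child_data, float)
    prior_prec = borrow_weight / adult_se ** 2
    data_prec = len(child_data) / child_data.var(ddof=1)
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * adult_mean
                 + data_prec * child_data.mean()) / post_prec
    return post_mean, 1.0 / np.sqrt(post_prec)
```

Simulating many pediatric trials (the "S" of SEV) and checking how often the resulting posterior intervals cover the true effect is then one way to validate the design assumptions before study initiation.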

Keywords: adaptive design, simulation, borrowing data, Bayesian model

Procedia PDF Downloads 57
24913 Analyzing Test Data Generation Techniques Using Evolutionary Algorithms

Authors: Arslan Ellahi, Syed Amjad Hussain

Abstract:

Software testing is a vital process in the software development life cycle; software quality is attained only after the software passes through the testing phase. Automatic test data generation is a key research area of software testing, aimed at achieving test automation and thereby reducing testing time. In this paper, we review approaches presented in the literature that use evolutionary, search-based algorithms, such as the Genetic Algorithm and Particle Swarm Optimization (PSO), to drive the test data generation process. We also examine the quality of the generated test data, which increases or decreases the efficiency of testing. We propose test data generation techniques for model-based testing and work on tuning the PSO algorithm and its fitness function.
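As a hedged illustration of how a search-based algorithm can drive test data generation, the sketch below uses a plain global-best PSO to minimize a branch-distance fitness for one hard-to-reach branch of a toy program. The program, constants, and fitness are assumptions chosen for illustration, not the specific techniques reviewed in the paper:

```python
import numpy as np

def branch_distance(x):
    """Fitness for one target branch of a toy program under test:
    the branch is taken when x[0] > 50 and x[1] == 10, so the
    distance is zero exactly when the test input covers it."""
    return max(0.0, 50.0 - x[0]) + abs(x[1] - 10.0)

def pso(fitness, dim=2, n_particles=20, iters=100, seed=0):
    """Minimize `fitness` over [-100, 100]^dim with global-best PSO."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-100.0, 100.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia 0.7, cognitive and social coefficients 1.5 (common defaults)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -100.0, 100.0)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

A fitness of zero means the swarm has found a concrete test input that exercises the target branch, which is the sense in which fitness-function design determines the quality of the generated test data.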

Keywords: search based, evolutionary algorithm, particle swarm optimization, genetic algorithm, test data generation

Procedia PDF Downloads 169
24912 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential

Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag

Abstract:

Renewable energy sources depend on climatic variability, so adequate energy planning requires observations of the meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models that combines data collected at surface stations, ocean buoys, satellites, and radiosondes, allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among its products is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), with a spatial resolution of 0.50° x 0.50°. To overcome the lack of observations, this study evaluates the performance of solar radiation estimates from alternative databases, such as reanalysis data and meteorological satellite data, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well relative to the observed data, with a determination coefficient around 0.90. It is therefore concluded that these data can be used as an alternative source at locations with no stations or without long series of solar radiation, which is important for the evaluation of solar energy potential.
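The performance measure quoted above, the determination coefficient between the observed and reanalysis series, can be computed as the squared Pearson correlation. A minimal sketch, with the input arrays standing in for hypothetical station and CFSR radiation series:

```python
import numpy as np

def determination_coefficient(observed, estimated):
    """R^2 as the squared Pearson correlation between the observed
    solar-radiation series and the reanalysis estimate."""
    obs = np.asarray(observed, float)
    est = np.asarray(estimated, float)
    r = np.corrcoef(obs, est)[0, 1]
    return r ** 2
```

A value near 0.90, as reported for CFSR, indicates that the reanalysis series explains about 90% of the variance in the ground observations.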

Keywords: climate, reanalysis, renewable energy, solar radiation

Procedia PDF Downloads 192
24911 Encounter of Muslim World with Western Social Sciences: Reception, Indigenization, Islamization

Authors: Mohammad Hossein Panahi

Abstract:

Modern social sciences developed in Western Europe and disseminated from there to the rest of the world, including the Muslim World. Within the hierarchical world social science system that emerged in the 19th and 20th centuries, the West occupied the center, and the Third and Muslim Worlds fell into its periphery. Since the 1970s, many social scientists, especially sociologists, in the Third World and the Muslim World have criticized this worldwide unequal division of scientific labor and have called for the development of independent, indigenous social sciences relevant to their own social conditions. Based on the conceptual framework of the world social science system, this paper studied the encounter of Muslim social scientists and sociologists with the Western social sciences. Using inductive thematic content analysis as the method of research, the author analyzed 32 purposefully selected articles from among over 500 articles collected from the 1970s to 2018 and categorized the obtained themes. The findings revealed three main types of encounters: reception, indigenization, and Islamization. ‘Reception’ refers to the encounter of those Muslim social scientists who embrace the positivist approach and believe that Western social sciences are valid and applicable worldwide, including in the Muslim World. ‘Indigenization’ refers to the approach of those Muslim social scientists who, along with many critical Third World social scientists, reject the universality of Western social sciences and call for the development of indigenous social sciences. ‘Islamization’ refers to the position of those religious Muslim social scientists who believe that Muslim nations should Islamize the social sciences on the basis of the Islamic value and knowledge systems, in order to attain viable social sciences and free themselves from Western domination. 
Discussing these encounters and their supporters and opponents, the paper concludes that despite various efforts, neither of the two alternatives to the Western social sciences has so far been able to replace them.

Keywords: indigenization, Islamization, Muslim world, social sciences, world social science system

Procedia PDF Downloads 116
24910 Whether Chaos Theory Could Reconstruct the Ancient Societies

Authors: Zahra Kouzehgari

Abstract:

Since its early emergence in the 1970s in mathematics and the physical sciences, chaos theory has increasingly been developed and adapted in the social sciences as well. The non-linear and dynamic characteristics of the theory make it a useful conceptual framework for interpreting the behavior of complex social systems. Drawing on the principles of the chaotic approach, namely sensitivity to initial conditions, dynamic adaptation, strange attractors, and unpredictability, this paper examines whether a chaos approach can interpret ancient social change. To do this, it first reviews a brief history of chaos theory, its development and application in the social sciences, and the principles underlying the theory, and then its application in archaeological science. The study demonstrates that although the existing archaeological record cannot reconstruct the whole social system of the human past, non-linear approaches to studying complex social systems would be of great help in finding the general order of ancient societies and would enable us to shed light on, and make sense of, some of the social phenomena in human history.
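The principle of sensitivity to initial conditions invoked above can be demonstrated with the classic logistic map, where two trajectories that start a billionth apart quickly diverge. This is purely an illustrative sketch of the mathematical principle, not a model of any ancient society:

```python
def logistic_map(x0, r=4.0, steps=40):
    """Iterate x -> r*x*(1-x), the textbook chaotic map for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-9)  # perturb the initial condition by one part in a billion
divergence = max(abs(x - y) for x, y in zip(a, b))
```

Within a few dozen iterations the two trajectories become effectively unrelated, which is why long-run prediction of chaotic systems, social or physical, is impossible even when the governing rule is simple and fully known.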

Keywords: archaeology, non-linear approach, chaos theory, ancient social systems

Procedia PDF Downloads 262
24909 Tracing Economic Policies to Ancient Indian Economic Thought

Authors: Satish Y. Deodhar

Abstract:

Science without history is like a man without memory. The colossal history of India stores many ideas on economic ethics and public policy which have been forgotten in the course of time. This paper is an attempt to bring to the fore contributions from ancient Indian treatises. In this context, the paper briefly summarizes alternative economic ideas such as communism, capitalism, and the holistic approach of ancient Indian writings. Thereafter, the idea of a welfare brick for an individual, consisting of three dimensions, Purusharthas, Ashramas, and Varnas, is discussed. Given the contours of the welfare brick, the concept of the state, its economic policies, markets, prices, interest rates, and credit are covered next. This is followed by delving into the treatment of land, property rights, guilds, and labour relations. The penultimate section summarises the economic advice offered to the head of a household in the treatise Shukranitisara. Finally, the concluding comments discuss the relevance of ancient Indian writings for modern times, both for pedagogy and for economic policies.

Keywords: ancient Indian treatises, history of economic thought, science of political economy, Sanskrit

Procedia PDF Downloads 64
24908 Using Science, Technology, Engineering, Art and Mathematics (STEAM) Project-Based Learning Programs to Transition towards Whole School Pedagogical Shift

Authors: M. Richichi

Abstract:

Evidencing the learning and developmental needs of students in a specific educational institution is central to determining the type of whole-school pedagogical shift required. Initiating this transition by designing and implementing STEAM (science, technology, engineering, art, and mathematics) project-based learning opportunities, in collaboration with industry, exposes teachers to new pedagogical and assessment practices. This experience instills confidence and a renewed sense of energy, which contributes to greater efficacy. Championing teachers in such learning environments leads to a "bleeding" of inventive pedagogical understanding, skills, and motivation into the wider staff. This contributes positively to collective teacher efficacy and to the transition towards more cross-disciplinary initiatives and opportunities, and hence to an innovative pedagogical shift. Evidence of skill and knowledge development in students, combined with greater confidence, work ethic, and interest in the STEAM areas, is a further indicator of the success of the transition process.

Keywords: efficacy, pedagogy, transition, STEAM

Procedia PDF Downloads 113
24907 Thai Student Ability on Speexx Language Training Program

Authors: Toby Gibbs, Glen Craigie, Suwaree Yordchim

Abstract:

Using the Speexx online language training program with Thai students has allowed us to evaluate their learning comprehension and track their progress through the English language program. Speexx sets the standard for excellence and innovation in web-based language training and online coaching services, and the program is designed to improve the business communication skills of Thai language learners. Speexx consists of English lessons, exercises, tests, web boards, and supplementary lessons that help students practice English. The sample group comprises 191 Thai sophomores studying Business English in the Department of Humanities and Social Science. The data were analyzed using standard deviation (S.D.) values from questionnaires and samples provided by the Speexx training program. The results show that most Thai sophomores fail the Speexx training program because their comprehension of the English language is below average. With persistent effort on new training methods, the Speexx language training program can break through cultural barriers and help future students adopt English as a second language. The Speexx results revealed four main factors affecting success: 1) Future English training should be pursued through applied Speexx development. 2) Thai students did not see the benefit of having an online language training program. 3) There is a great need to educate the next generation of learners on the benefits of Speexx within the community. 4) A great majority of Thai sophomores did not know what Speexx was. A guideline for self-reliance planning consists of four aspects: 1) Development planning: arrange groups to further improve English abilities with the Speexx language training program and encourage daily use of Speexx; local communities need to develop awareness of the usefulness of Speexx and share the value of using the program among family and friends. 2) Humanities and Social Science staff should develop skills in using this online language training program so they can promote the benefits of Speexx within their departments. 3) Further research should be pursued on Thai students' progress with Speexx and on how it helps them improve their Business English skills. 4) Universities and language centers should focus on using Speexx to encourage the learning of any language, not just English.

Keywords: ability, comprehension, sophomore, speexx

Procedia PDF Downloads 356
24906 The Construct of Assessment Instrument for Value, Attitude and Professionalism among Students Faculty of Sports Science and Coaching

Authors: Ahmad Hashim, Thariq Khan Azizuddin Khan, Zulakbal Abd Karim, Nohazira Abdul Karim

Abstract:

This research aims to establish the validity and reliability of a survey instrument to evaluate the values, attitudes, and professionalism of sports science students of the Faculty of Sports Science and Coaching, Universiti Pendidikan Sultan Idris (UPSI). The survey is divided into two components: first, morality, self-esteem, proactivity, self-reliance, and voluntariness; second, ethics and professionalism. Development of the survey instrument is based on the Malaysian Education Development Plan for Higher Education. Fifty items, rated on a five-point Likert scale, were tested at the pilot stage, involving 212 research subjects selected by random sampling. The research method applied is a pre-experimental one-group pre-test-post-test design. The analysis showed that overall field-expert validity is r = .89, while the Cronbach alpha reliability of the instrument evaluation survey is r = .85. The survey was then tested for construct validity using factor analysis, to verify that each item loaded on its intended component. The analysis showed that Bartlett's test is significant, p < .05, and the Kaiser-Meyer-Olkin index is .87. The factor analysis retained 39 of the original 50 survey items. The research has shown that the survey instrument developed is valid and reliable for use by the Faculty of Sports Science and Coaching, UPSI.
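The Cronbach alpha reliability reported above can be computed directly from the item-score matrix. A minimal sketch; the respondent data and names used here are hypothetical:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for scores: an (n_respondents, n_items) matrix
    of Likert ratings. alpha = k/(k-1) * (1 - sum(item variances) /
    variance of the total score)."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
```

Values around .85, as reported for this instrument, are conventionally taken to indicate good internal consistency of the scale.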

Keywords: values, attitudes, ethics, professionalism

Procedia PDF Downloads 174
24905 Cadmium Concentrations in Breast Milk and Factors of Exposition: Systematic Review

Authors: Abha Cherkani Hassani, Imane Ghanname, Nezha Mouane

Abstract:

Background: This is the first systematic review summarizing 43 years of research from 36 countries on the assessment of cadmium in breast milk, a suitable matrix in human biomonitoring. Objectives: To report from the published literature the levels of cadmium in breast milk and the factors driving increases in cadmium concentrations, and to gather quantitative data that might be useful for evaluating the international degrees of maternal and infant exposure. Methods: We reviewed the literature for studies reporting quantitative data on cadmium levels in human breast milk worldwide, published between 1971 and 2014 and available on PubMed, ScienceDirect, and Google Scholar. The aim of each study, country, period of sample collection, sample size, sampling method, time of lactation, mother's age, area of residence, cadmium concentration, and other information were extracted. Results: 67 studies were selected and included in this systematic review. Some concentrations greatly exceed the WHO limit; however, about 50% of the studies reported cadmium concentrations below 1 µg/l (the WHO recommendation). Many factors were implicated in breast-milk contamination by cadmium, such as lactation stage, smoking, diet, supplement intake, interaction with other mineral elements, the mother's age, parity, and other parameters. Conclusion: Breast milk is a pathway of maternal excretion of cadmium. It is also a biological indicator of the degree of environmental pollution and of the cadmium exposure of the lactating woman and the nourished infant. Preventive measures and continuous monitoring are therefore necessary.

Keywords: breast milk, cadmium level, factors, systematic review

Procedia PDF Downloads 503