Search results for: healthcare data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25027

21847 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit card and mortgage lending, it played a central role in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may ultimately lead to a loss of property. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economy, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind this appearance of prosperity, and among them the credit system is the most significant. Because mortgages involve long terms and large outstanding balances, it is critical to monitor risk during the performance period. In this project, data on about 300,000 mortgage accounts are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis the data are cleaned up, and through bivariate analysis the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts: 60% for model development and 40% for in-time model validation. The KS statistic is 31 for both model development and in-time validation, indicating that the model is stable. In addition, the model is further validated out-of-time on 40% of the 2006 data, giving a KS of 33; this indicates that the model remains stable and robust. In the second part, the model is improved by adding macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, and the inflation rate. Data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), the KS increases from 41 to 44, indicating that the macroeconomic variables improve the separation power of the model and make the prediction more accurate.
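
As a brief illustration of the KS separation measure the abstract relies on, the sketch below computes a two-sample KS statistic on simulated score distributions; the data and the 0-100 reporting scale are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the KS validation metric on simulated scores; not
# the authors' code. KS is the maximum gap between the score CDFs of
# non-delinquent and delinquent accounts (reported here on a 0-100 scale).
import numpy as np
from scipy.stats import ks_2samp

def ks_statistic(scores_good, scores_bad):
    return 100 * ks_2samp(scores_good, scores_bad).statistic

rng = np.random.default_rng(0)
good = rng.normal(650, 50, 180_000)  # hypothetical non-delinquent scores
bad = rng.normal(610, 50, 20_000)    # hypothetical delinquent scores
print(f"KS = {ks_statistic(good, bad):.0f}")  # ~31 for this separation
```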

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 202
21846 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are further pre-trained on the masked language modeling (MLM) task in an unsupervised manner and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain and Gab data as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity; in-domain distances are expected to be small and between-domain distances large. Previous findings show that a pretrained masked language model (MLM) fine-tuned on a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce. A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier, as well as with optimized outcomes obtained from different optimization techniques.
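
To make the domain-similarity step concrete, here is a small sketch of two of the measures named above, cosine distance and MMD, applied to simulated document embeddings; the embeddings, dimensions, and kernel width are assumptions, not values from the study.

```python
# A minimal sketch of two domain-distance measures; the embeddings are
# random stand-ins for encoder outputs (e.g., from a BERT-style model).
import numpy as np

def cosine_distance(src, tgt):
    """Cosine distance between the mean embeddings of the two domains."""
    a, b = src.mean(axis=0), tgt.mean(axis=0)
    return 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def mmd_rbf(src, tgt, gamma=1.0):
    """Biased (V-statistic) estimate of squared MMD with an RBF kernel."""
    def k(x, y):
        d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(src, src).mean() + k(tgt, tgt).mean() - 2 * k(src, tgt).mean()

rng = np.random.default_rng(1)
twitter = rng.normal(0.0, 1.0, (100, 32))  # source-domain embeddings
gab = rng.normal(0.5, 1.0, (100, 32))      # target-domain embeddings
print(cosine_distance(twitter, gab), mmd_rbf(twitter, gab, gamma=0.1))
```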

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 84
21845 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: the solid statistical principles of Design of Experiments (DoE), particularly a randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: our findings include prediction models and show some non-intuitive results, such as the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, even though it notably impacts execution time.
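
The following sketch illustrates the kind of linear screening model the methods describe, fit to two-level coded factors; the eight runs and timings are invented, standing in for the paper's 48-cluster fractional factorial with replications.

```python
# A minimal sketch of DoE-style linear screening on coded (-1/+1) factors;
# the design and timings below are hypothetical, not the paper's data.
import numpy as np

factors = ["size", "nodes", "cores", "memory", "disks"]
X = np.array([
    [-1, -1, -1, +1, +1],
    [+1, -1, -1, -1, -1],
    [-1, +1, -1, -1, +1],
    [+1, +1, -1, +1, -1],
    [-1, -1, +1, +1, -1],
    [+1, -1, +1, -1, +1],
    [-1, +1, +1, -1, -1],
    [+1, +1, +1, +1, +1],
])
y = np.array([392, 548, 271, 433, 368, 529, 252, 411], dtype=float)  # seconds

A = np.column_stack([np.ones(len(X)), X])   # time ~ b0 + sum(bi * xi)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, b in zip(["intercept"] + factors, coef):
    print(f"{name:>9}: {b:+7.1f}")   # |bi| ranks each factor's impact
```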

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 88
21844 The Development of Online Teaching Materials for Students in Thailand

Authors: Pitimanus Bunlue

Abstract:

The objectives of this study were to identify the unique characteristics of the Salaya old market in Phutthamonthon, Nakhon Pathom, and to develop effective video media to promote homeland awareness among local people. The characteristic features of this community were summarized from historical data, community observation, and interviews with local people. The acquired data were used to develop a video describing the prominent features of the community. The quality of the video was later assessed by interviewing local people in the old market in terms of content accuracy, video and narration quality, and the sense of homeland awareness felt after watching it. The result was a six-minute video containing historical data and the outstanding features of this community. Based on the interviews, the content accuracy was good, and the picture quality and narration were very good. Most people developed a sense of homeland awareness after watching the video as well.

Keywords: audio-visual, creating homeland awareness, Phutthamonthon Nakhon Pathom, research and development

Procedia PDF Downloads 272
21843 A Decision Support System for the Detection of Illicit Substance Production Sites

Authors: Krystian Chachula, Robert Nowak

Abstract:

Manufacturing home-made explosives and synthetic drugs is an increasing problem in Europe. To combat this, a data fusion system is proposed for the detection and localization of production sites in urban environments. The data consist of measurements of wastewater properties taken by various sensors installed in a sewage network. A four-stage fusion strategy allows detecting sources of waste products from known chemical reactions. First, suspicious measurements are used to compute the amount and position of discharged compounds. Then, this information is propagated through the sewage network to account for missing sensors. The next step is clustering and the formation of tracks. Finally, tracks are used to reconstruct discharge events. Sensor measurements are simulated by a subsystem based on real-world data. In this paper, different discharge scenarios are considered to show how the parameters of the algorithms used affect the effectiveness of the proposed system. This research is part of the SYSTEM project (SYnergy of integrated Sensors and Technologies for urban sEcured environMent).

Keywords: continuous monitoring, information fusion and sensors, internet of things, multisensor fusion

Procedia PDF Downloads 93
21842 Implementation of CNV-CH Algorithm Using Map-Reduce Approach

Authors: Aishik Deb, Rituparna Sinha

Abstract:

We have developed an algorithm to detect abnormal segments/structural variations in the genome across a number of samples. We have worked on simulated as well as real data from BAM files and have designed a segmentation algorithm in which abnormal segments are detected. This algorithm aims to improve the accuracy and performance of the existing CNV-CH algorithm. The next-generation sequencing (NGS) approach is very fast and can generate large sequences in a reasonable time, and this huge volume of sequence information gives rise to the need for big data and parallel approaches to segmentation. We have therefore designed a map-reduce version of the existing CNV-CH algorithm, with which a large amount of sequence data can be segmented and structural variations in the human genome can be detected. We have compared the efficiency of the traditional and map-reduce algorithms with respect to precision, sensitivity, and F-score. The advantages of our algorithm are that it is fast and has better accuracy. It can be applied to detect structural variations within a genome, which in turn can be used to detect various genetic disorders such as cancer. The defects may be caused by new mutations or changes to the DNA and generally result in abnormally high or low base coverage and quantification values.
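
As a schematic of the map-reduce pattern the paper applies, the sketch below distributes read-coverage counting (the signal that CNV segmentation works from) over worker processes; the reads are simulated rather than parsed from BAM files, and this is not the CNV-CH implementation itself.

```python
# A minimal map-reduce sketch for per-bin read coverage; simulated reads,
# not the CNV-CH algorithm itself.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

BIN = 1000  # genome bin width in bases

def mapper(reads):
    """Map a chunk of read start positions to (bin, count) pairs."""
    return Counter(pos // BIN for pos in reads)

def reducer(acc, counts):
    """Merge partial per-bin counts from the mappers."""
    acc.update(counts)
    return acc

if __name__ == "__main__":
    import random
    random.seed(0)
    reads = [random.randrange(0, 50_000) for _ in range(100_000)]
    chunks = [reads[i::4] for i in range(4)]  # split work across workers
    with Pool(4) as pool:
        partial = pool.map(mapper, chunks)
    coverage = reduce(reducer, partial, Counter())
    print(sorted(coverage.items())[:5])  # per-bin read counts
```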

Keywords: cancer detection, convex hull segmentation, map reduce, next generation sequencing

Procedia PDF Downloads 107
21841 Inferring Human Mobility in India Using Machine Learning

Authors: Asra Yousuf, Ajaykumar Tannirkulum

Abstract:

Inferring rural-urban migration trends can help design effective policies that promote better urban planning and rural development. In this paper, we describe how machine learning algorithms can be applied to predict people's internal migration decisions. We consider data collected from household surveys in Tamil Nadu to train our model. To measure the performance of the model, we use data on past migration from the National Sample Survey Organisation of India. The features used to train the model include each individual's socioeconomic characteristics, such as age, gender, place of residence, outstanding loans, and household strength, along with their past migration history. We perform a comparative analysis of a number of machine learning algorithms to determine their prediction accuracy. Our results show that machine learning algorithms provide higher prediction accuracy than statistical models. Our goal through this research is to propose the use of data science techniques in understanding human decisions and behaviour in developing countries.
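
A small sketch of the comparative analysis described here, pitting a statistical baseline against a machine learning model on a simulated migrate/stay task; the features and data are stand-ins for the survey variables, not the Tamil Nadu data.

```python
# A minimal sketch comparing a statistical baseline with an ML model on
# a binary migrate/stay prediction; simulated features, not survey data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                    # stand-in survey features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=2000) > 0).astype(int)

models = {
    "logistic regression (statistical baseline)": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```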

Keywords: development, migration, internal migration, machine learning, prediction

Procedia PDF Downloads 250
21840 Temperament as a Success Determinant in Formative Assessment

Authors: George Fomunyam Kehdinga

Abstract:

Assessment is a vital part of the educational process, and formative assessment is a way of ensuring that higher education achieves the desired effects. Different factors influence how students perform in assessments in general and formative assessment in particular, and temperament is one such determining factor. This paper, a qualitative case study of four universities in four different countries, examines how students' temperamental make-up either empowers them to perform excellently in formative assessment or hinders their performance. The universities were chosen from Cameroon, South Africa, the United Kingdom and the United States of America, and three students were chosen from each institution, six of them undergraduate students and six postgraduate students. Data were generated through qualitative interviews and document analysis, preceded by a temperament test. From the data, it was discovered that cholerics, who are natural leaders and hence do not struggle to express themselves, often perform excellently in formative assessment, while sanguines, who like cholerics are extroverts, perform relatively well. Phlegmatics and melancholics performed averagely and poorly, respectively, because they are naturally prone to fear and dislike such activities, preferring to keep to themselves. The paper therefore suggests that temperament is a success determinant in formative assessment. It also proposes that lecturers need an understanding of temperaments to be able to administer formative assessment fully in the lecture room, and that assessment in the classroom should be balanced so that some students are not naturally disadvantaged by their temperamental make-up while others perform excellently. Lastly, the paper suggests that since formative assessment is a process of generating data, it should be contextualised or given an individualised approach so as to ensure that trustworthy data are generated.

Keywords: temperament, formative assessment, academic success, students

Procedia PDF Downloads 232
21839 Effect of Measured and Calculated Static Torque on Instantaneous Torque Profile of Switched Reluctance Motor

Authors: Ali Asghar Memon

Abstract:

Simulation models of the switched reluctance (SR) machine often rely on three data tables, identified as the machine's static characteristics: flux linkage characteristics, co-energy characteristics, and static torque characteristics. The literature suggests that the static torque data used in such simulation models are usually calculated rather than measured. This paper presents a simulation model that includes measured and calculated static torque data separately, to see their effect on the instantaneous torque profile of the machine. As far as the literature review is concerned, this is probably the first time that static torque derived from co-energy information and static torque measured directly in experiments are used separately in the model. This research is helpful for accurate modeling of the switched reluctance drive.
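
To illustrate how such data tables typically enter a simulation, the sketch below interpolates instantaneous torque from a (current, rotor angle) static torque grid; the grid values and the interpolation choice are assumptions for illustration, not the paper's measured data.

```python
# A minimal sketch of table-driven torque lookup in an SR machine model;
# the torque surface here is made up, not measured or co-energy data.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

current = np.linspace(0, 10, 11)          # phase current, A
angle = np.linspace(0, 30, 31)            # rotor position, deg
I, TH = np.meshgrid(current, angle, indexing="ij")
torque_table = 0.05 * I**2 * np.sin(np.deg2rad(12 * TH))  # hypothetical T(i, theta)

torque = RegularGridInterpolator((current, angle), torque_table)
print(torque([[6.3, 14.2]]))  # instantaneous torque at i=6.3 A, theta=14.2 deg
```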

Keywords: static characteristics, current chopping, flux linkage characteristics, switched reluctance motor

Procedia PDF Downloads 269
21838 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of application. In all these fields the amount of collected data is increasing quickly, and with the increase of the data, computation speed becomes the critical factor. Data reduction is one solution to this problem, and removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating reducts have been developed, but most of them are only software implementations and therefore have many limitations: a microprocessor uses a fixed word length and consumes a lot of time fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as the input, and the output of the algorithm is a superreduct, i.e., a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix; the second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table. The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem, and the core calculation can be achieved with a combinational logic block, adding relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. The core is calculated by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. The number of occurrences of each attribute is calculated in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
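
For readers unfamiliar with the first stage, here is a small software sketch of the core computation via singleton entries of the discernibility matrix (the role the hardware 'singleton detector' plays); the decision table is a toy example, not the paper's FPGA or C implementation.

```python
# A minimal sketch of core computation: an attribute is in the core
# exactly when some pair of objects with different decisions is
# discerned by that attribute alone (a singleton matrix entry).
from itertools import combinations

# Hypothetical decision table: condition attributes a0..a2, last = decision.
table = [
    (0, 1, 0, 0),
    (1, 1, 0, 1),
    (0, 0, 1, 0),
    (1, 0, 1, 1),
]

def core(rows, n_cond=3):
    core_attrs = set()
    for x, y in combinations(rows, 2):
        if x[-1] == y[-1]:
            continue  # same decision: no discernibility entry needed
        diff = [a for a in range(n_cond) if x[a] != y[a]]
        if len(diff) == 1:          # singleton entry => indispensable
            core_attrs.add(diff[0])
    return core_attrs

print(core(table))  # {0}: attribute a0 is indispensable here
```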

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 191
21837 Japanese and European Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective

Authors: S. Fantin

Abstract:

This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering the dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis and policy initiatives. Recent doctrine was taken into account, as were empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package that includes a regulation (the General Data Protection Regulation) and a directive (Directive 680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny for this study has been the Act on the Protection of Personal Information. The comparative analysis reveals several asymmetries. The main ones concern the definition of personal information and the scope of the two frameworks. Furthermore, the rights of data subjects are articulated differently in the two regions, while the nature of sanctions takes two opposite approaches. Regarding the cybersecurity framework, the situation looks similarly misaligned. Japan's main text of reference is the Basic Cybersecurity Act, while the European Union has a more fragmented legal structure (to name a few instruments: the Network and Information Security Directive, the Critical Infrastructure Directive and the Directive on Attacks against Information Systems). On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene seems to be neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators' liability by turning such a burden into a set of recommendations to be observed primarily by citizens. The reasons to fill such normative gaps are mostly grounded on three bases. Firstly, the cross-border nature of cybercrime requires considering both the magnitude of the issue and its regulatory treatment globally. Secondly, empirical findings from the EUNITY project showed how recent data breaches and cyber-attacks had shared implications between Europe and Japan. Thirdly, the geopolitical context is currently bringing the two regions towards significant agreements from a trade standpoint, but also from a data protection perspective (with the imminent signature by both parties of a so-called 'Adequacy Decision'). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. In view of the future challenges presented by the strengthening of collaboration between the two regions and the transnational character of cybercrime, it is urged that solutions be found to fill these gaps, in order to allow the European Union and Japan to wisely strengthen their partnership.

Keywords: cybersecurity, data protection, European Union, Japan

Procedia PDF Downloads 97
21836 The Causality between Corruption and Economic Growth in MENA Countries: A Dynamic Panel-Data Analysis

Authors: Nour Mohamad Fayad

Abstract:

The impact of corruption on economic growth is complex and has been extensively researched. Many experts believe that corruption reduces economic development; however, counterarguments have suggested that corruption either promotes growth and development or has no significant impact on economic performance. Clearly, there is no consensus in the economics literature regarding the possible relationship between corruption and economic development. Corruption's complex and clandestine nature, which makes it difficult to define and measure, is one of the obstacles that must be overcome when investigating its effect on an economy. In an attempt to contribute to the ongoing debate, this study examines the impact of corruption on economic growth in the Middle East and North Africa (MENA) region between 2000 and 2021, using a Customized Corruption Index (CCI) and panel data on MENA countries. These countries were selected because they are understudied in the economic literature; despite the World Bank's recent emphasis on corruption in the developing world, the MENA countries have received little attention. The researcher used a Cobb-Douglas functional form to study corruption in MENA, with the customized index (CCI) tracking corruption over almost 20 years, and then applied dynamic panel data methods. The findings indicate a positive correlation between corruption and economic growth, though it is not consistent across all MENA nations. The study faced several limitations. The first is the relative lack of recent data from MENA nations: data are inaccessible for many MENA countries, particularly regarding returns on resources, private malfeasance, and other variables in the Gulf countries. In addition, the researcher encountered practical restrictions such as electricity and internet outages, being from Lebanon, a country whose citizens have endured difficult living conditions since the Lebanese crisis began in 2019. The study demonstrates a customized index, the Customized Corruption Index (CCI), suited to the characteristics of MENA countries for measuring corruption in this region; the outcome of the CCI is then compared with the Corruption Perceptions Index (CPI) and the Control of Corruption indicator (CC) from the World Governance Indicators (WGI).
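
As an illustration of how a corruption index can enter a Cobb-Douglas specification (the abstract names the functional form but not its exact notation, so the symbols below are assumptions, not the study's own):

```latex
% Illustrative Cobb-Douglas growth specification augmented with the
% corruption index; notation assumed, not the study's own.
Y_{it} = A_{it}\,K_{it}^{\alpha}\,L_{it}^{\beta}\,
         e^{\gamma\,\mathrm{CCI}_{it} + \varepsilon_{it}}
\;\;\Longrightarrow\;\;
\ln Y_{it} = \ln A_{it} + \alpha \ln K_{it} + \beta \ln L_{it}
           + \gamma\,\mathrm{CCI}_{it} + \varepsilon_{it}
```

In a dynamic panel setting, a lagged dependent variable is typically added to the right-hand side, which is what makes the panel "dynamic" and motivates GMM-type estimators.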

Keywords: corruption, economic growth, corruption measurements, empirical review, impact of corruption

Procedia PDF Downloads 48
21835 Evaluation of Illegal Hunting of Red Deer and Conservation Policy of Department of Environment in Iran

Authors: Tahere Fazilat

Abstract:

The Caspian red deer or maral (Cervus elaphus maral) is the largest type of deer in Iran. In the past, the maral lived in the northern forests of Iran, from the Caspian Sea coast and the Alborz mountain chain to the oak forests of the Zagros margin, from Azerbaijan down to Fars province. However, its population in the northwest and west of Iran was completely destroyed, reportedly about 50 years ago. In the present study, data were collected from 2004 to 2014 in the Hyrcanian forests of Mazandaran province with the help of environment guards and the judiciary office of the Department of Environment of Mazandaran. In this process, all arrests for illegal hunting of red deer, together with population censuses and estimates, were compiled and the correlation of these data assessed. We provide a first evaluation of how suitable these methods are by comparing the results with population estimates obtained using cohort analysis, and by analyzing the within-season variation in the number of deer seen. The data indicate the future of the red deer in the northern forests of Iran and the results of the Department of Environment's red deer conservation policy in Iran.

Keywords: illegal hunting, red deer, census, conservation

Procedia PDF Downloads 531
21834 Research on Straightening Process Model Based on Iteration and Self-Learning

Authors: Hong Lu, Xiong Xiao

Abstract:

Shaft parts are widely used in the machinery industry; however, bending deformation often occurs when this kind of part is heat treated, so the parts need to be straightened to meet straightness requirements. In the pressure straightening process, a good straightening stroke algorithm determines the precision and efficiency of the process. In this paper, the relationship between straightening load and deflection during the straightening process is analyzed, and a mathematical model of the straightening process is established. Based on the mathematical model, an iterative method is used to solve for the straightening stroke. Compared with the traditional straightening stroke algorithm, the stroke calculated by this method is much more precise, because it can adapt to changes in the material performance parameters. Considering that the straightening method is widely used in the mass production of shaft parts, a knowledge base is used to store the data of the straightening process, and a straightening stroke algorithm based on empirical data is set up. The straightening process control model presented here combines the iteration-based straightening stroke method with the empirical-data-based straightening stroke algorithm. Finally, an experiment is designed to verify the straightening process control model.
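
The sketch below shows the iterative idea in miniature: choose a stroke, predict the permanent set left after elastic springback, and adjust until that set cancels the initial deflection; the springback model and constants are invented stand-ins for the paper's load-deflection model.

```python
# A minimal sketch of iteratively solving for a straightening stroke;
# the plastic-set model below is a made-up stand-in, not the paper's.
def plastic_set(stroke, yield_defl=0.2, k=0.8):
    """Permanent deflection left after pressing to `stroke` and releasing."""
    return max(0.0, k * (stroke - yield_defl))

def solve_stroke(target, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection on the monotone plastic-set curve."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if plastic_set(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

initial_deflection = 0.5  # mm of bend to be corrected
print(f"required stroke ~ {solve_stroke(initial_deflection):.4f} mm")
```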

Keywords: straightness, straightening stroke, deflection, shaft parts

Procedia PDF Downloads 305
21833 Effects of Elastic, Plyometric and Strength Training on Selected Anaerobic Factors in Sanandaj Elite Volleyball Players

Authors: Majed Zobairy, Fardin Kalvandi, Kamal Azizbaigi

Abstract:

This research was carried out to evaluate the effects of elastic, plyometric and resistance training on selected anaerobic factors in male volleyball players. For this purpose, 30 elite volleyball players from Sanandaj city were randomly divided into three groups: elastic training, plyometric training and resistance training. Pre-exercise tests, including vertical jump, 50-yard sprint and scat test, were performed and the data recorded. A specific exercise protocol was then carried out for each group, after which the post-exercise tests were repeated. Data analysis showed significant improvements in the exercise tests in each group. One-way ANOVA showed that the improvements in sprint times in the elastic group were significantly greater than in the other groups (p<0.05). Based on these data, elastic training appears to be a useful method and a new approach for improving functional test performance and training regimens.

Keywords: elastic training, plyometric training, strength training, anaerobic power

Procedia PDF Downloads 498
21832 Integrating Data Envelopment Analysis and Variance Inflation Factor to Measure the Efficiency of Decision Making Units

Authors: Mostafa Kazemi, Zahra N. Farkhani

Abstract:

This paper proposes an integrated Data Envelopment Analysis (DEA) and Variance Inflation Factor (VIF) model for measuring the technical efficiency of decision making units. The model is validated on a set of 69 dairy products sales representatives. The analysis is done in two stages: in the first stage, the VIF technique is used to identify the independent effective factors of the resellers, and in the second stage DEA is used to measure efficiency under both constant and variable returns to scale. DEA is further used to examine the effect of environmental factors on efficiency. The results indicate an average managerial efficiency of 83% across the dairy products sales representatives as a whole. In addition, technical and scale efficiency were 96% and 80%, respectively. 38% of the sales representatives have a technical efficiency of 100%, and 72% of the sales representatives are quite efficient in terms of managerial efficiency. High levels of relative efficiency indicate good overall sales representative performance.
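
A short sketch of the first-stage VIF screening follows; the factor names, data, and the VIF > 10 rule of thumb are illustrative assumptions, not the study's variables.

```python
# A minimal sketch of VIF screening before DEA; factor data are random
# stand-ins, not the 69 representatives' real inputs.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "staff": rng.normal(10, 2, 69),
    "trucks": rng.normal(4, 1, 69),
    "cost": rng.normal(100, 15, 69),
})
X["cost"] += 5 * X["staff"]          # induce collinearity for illustration

Xc = sm.add_constant(X)              # each VIF is computed vs. the others
for i, col in enumerate(Xc.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(Xc.values, i), 1))
# Factors with a high VIF (a common rule of thumb is > 10) would be
# dropped before measuring efficiency with DEA.
```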

Keywords: data envelopment analysis (DEA), relative efficiency, sales representatives’ dairy products, variance inflation factor (VIF)

Procedia PDF Downloads 532
21831 Circular Polarized and Surface Compatible Microstrip Array Antenna Design for Image and Telemetric Data Transfer in UAV and Armed UAV Systems

Authors: Kübra Taşkıran, Bahattin Türetken

Abstract:

In this paper, a microstrip array antenna with circular polarization at a frequency of 2.4 GHz has been designed in order to provide image and telemetric data transmission in Unmanned Aerial Vehicle (UAV) and Armed Unmanned Aerial Vehicle (A-UAV) systems. In addition to the antenna design, a power divider was designed and the antenna elements were fed in phase. The analysis shows that the antenna operates at a frequency of 2.4016 GHz with a directivity gain of 12.2 dBi. This array antenna was then transformed into a form conformal with the missile surface used in A-UAV systems and re-analyzed. These analyses show that, on the surface of the missile, the antenna operates at a frequency of 2.372 GHz with a directivity gain of 10.2 dBi.

Keywords: microstrip array antenna, circular polarization, 2.4 GHz, image and telemetric data transmission, surface compatible, UAV and armed UAV

Procedia PDF Downloads 69
21830 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers

Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya

Abstract:

In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos that differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see whether the rich spatiotemporal information embedded in these data allows prediction of the pregnancy outcome regardless of such critical parameters. Methodology: we performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the Embryoscope for all embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs. nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major findings: the performance of the model was evaluated using 100 rounds of random subsampling cross-validation (train 80%, test 20%). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which the labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy. Conclusion: the high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcome, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample size, with complementary analyses on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
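
The validation scheme described (100 random 80/20 subsamples, accuracy averaged across permutations) can be sketched as follows; the features and the SVM classifier are stand-ins, since the abstract does not name the exact classifier, and the data are simulated, not embryo data.

```python
# A minimal sketch of 100x random-subsampling validation on stand-in
# features such as a tensor decomposition might produce; simulated data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                       # embryo-like features
y = np.repeat([0, 1], 100)                           # nonpregnant / live birth
X[y == 1, :3] += 0.8                                 # inject a weak signal

accs = []
for seed in range(100):
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    accs.append(SVC().fit(Xtr, ytr).score(Xte, yte))
print(f"mean accuracy over 100 subsamples: {np.mean(accs):.2f}")
```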

Keywords: IVF, embryo, machine learning, time-lapse imaging data

Procedia PDF Downloads 70
21829 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and objectives: a typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random effect model for time-to-event data in which the random effect has a multiplicative influence on the baseline hazard function. This article investigates the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: data were taken from the records of patients with liver cirrhosis who were scheduled for liver transplantation during the one-year study period (May 2008-May 2009) and followed up for at least seven years at Imam Khomeini Hospital in Iran. To determine the factors affecting cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied, and parametric models with Exponential and Weibull distributions were considered for survival time. Data analysis was performed using the R software, and an error level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), with cryptogenic cirrhosis (22.6%) identified as the second factor. Overall, mean survival during the seven-year follow-up was 28.44 months; for deceased and censored patients it was 19.33 and 31.79 months, respectively. Exponential and Weibull multi-parametric survival models incorporating the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: to investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions seems desirable.
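
For reference, the model class described above can be written compactly (the notation is assumed, not the paper's own): the hazard for patient i carries a multiplicative gamma frailty on a parametric (e.g., Weibull) baseline,

```latex
% Gamma frailty proportional hazards with a Weibull baseline;
% illustrative notation, not the paper's.
h_i(t \mid z_i) = z_i\,h_0(t)\,e^{\beta^{\top} x_i},
\qquad h_0(t) = \lambda\rho t^{\rho-1},
\qquad z_i \sim \mathrm{Gamma}\!\left(\tfrac{1}{\theta}, \tfrac{1}{\theta}\right),
```

so that E[z_i] = 1 and Var(z_i) = θ; θ measures the unobserved (latent) heterogeneity, and the Exponential baseline is the special case ρ = 1.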

Keywords: frailty model, latent variables, liver cirrhosis, parametric distribution

Procedia PDF Downloads 234
21828 Spectrofluorometric Studies on the Interactions of Bovine Serum Albumin with Dimeric Cationic Surfactants

Authors: Srishti Sinha, Deepti Tikariha, Kallol K. Ghosh

Abstract:

Over the past few decades, protein-surfactant interactions have been the subject of extensive study, as they are of great importance in a wide variety of industrial, biological, pharmaceutical and cosmetic systems. Such studies have explored the effect of surfactants on protein structure in the form of solubilization and the denaturing or renaturing of proteins. Globular proteins are frequently used as functional ingredients in healthcare and pharmaceutical products, due to their ability to catalyze biochemical reactions, to adsorb on the surface of other substances, and to bind other moieties and form molecular aggregates. One of the most widely used globular proteins is bovine serum albumin (BSA), since it has a well-known primary structure and has been associated with the binding of many different categories of molecules, such as dyes, drugs and toxic chemicals. Protein-surfactant interactions usually depend on the features of the surfactant. Most research has focused on single-chain surfactants; more recently, the binding between proteins and dimeric surfactants has been discussed. In the present study, the interactions of one dimeric surfactant, butanediyl-1,4-bis(dimethylhexadecylammonium bromide) (16-4-16, 2Br⁻), and the corresponding single-chain surfactant, cetyltrimethylammonium bromide (CTAB), with bovine serum albumin (BSA) have been investigated by surface tension and spectrofluorometric methods. It was found that the binding of the gemini surfactant to BSA is cooperatively driven by electrostatic and hydrophobic interactions. The gemini surfactant, carrying more charges and hydrophobic tails, showed stronger interactions with BSA than the single-chain surfactant.

Keywords: bovine serum albumin, gemini surfactants, hydrophobic interactions, protein surfactant interaction

Procedia PDF Downloads 485
21827 Tourism Area Development Optimization Based on Solar-Generated Renewable Energy Technology at Karimunjawa, Central Java Province, Indonesia

Authors: Yanuar Tri Wahyu Saputra, Ramadhani Pamapta Putra

Abstract:

Karimunjawa is one of the Indonesian islands that lacks an adequate electricity supply. Despite this, Karimunjawa is an important tourism destination in Indonesia's Central Java Province. A solar power plant is a potential technology to apply in Karimunjawa in order to fulfill the island's electricity needs and to improve daily life and the quality of tourism for visitors and the local population. This optimization modeling of Karimunjawa uses the HOMER software program. The data used include wind speed data for Karimunjawa from BMKG (the Indonesian Agency for Meteorology, Climatology and Geophysics), annual weather data for Karimunjawa from NASA, and assumed electricity requirements based on the number of houses and business infrastructures in Karimunjawa. The modeling aims to determine which of three system categories offers the greatest financial benefit, i.e., the lowest total Net Present Cost (NPC). The first category uses only PV with 8000 kW of electrical power, with an NPC of $6,830,701. The second category uses a hybrid system combining 1000 kW of PV and a 100 kW generator, with a total NPC of $6,865,590. The last category uses only a 750 kW generator and results in a total NPC of $16,368,197, the highest among the three categories. Based on this analysis, we conclude that the most economical way to fulfill the electricity needs of Karimunjawa is to use 8000 kW of PV, which also has lower maintenance costs.
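
The NPC criterion that HOMER minimizes can be sketched in a few lines; the discount rate, project lifetime, and cost figures below are invented for illustration and are not the study's inputs.

```python
# A minimal sketch of the Net Present Cost comparison HOMER performs:
# discount each year's operating cost to the present and add capital cost.
# All figures are illustrative assumptions, not the study's data.
def npc(capital, annual_cost, rate=0.06, years=25):
    """Total net present cost of a system over its project lifetime."""
    return capital + sum(annual_cost / (1 + rate) ** t
                         for t in range(1, years + 1))

options = {
    "PV only": npc(capital=5_000_000, annual_cost=150_000),
    "PV + generator": npc(capital=4_200_000, annual_cost=230_000),
    "generator only": npc(capital=800_000, annual_cost=1_200_000),
}
for name, cost in options.items():
    print(f"{name:>14}: ${cost:,.0f}")
print("lowest NPC:", min(options, key=options.get))
```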

Keywords: Karimunjawa, renewable energy, solar power plant, HOMER

Procedia PDF Downloads 443
21826 Artificial Neural Network in Predicting the Soil Response in the Discrete Element Method Simulation

Authors: Zhaofeng Li, Jun Kang Chow, Yu-Hsing Wang

Abstract:

This paper attempts to bridge soil properties and the mechanical response of soil in discrete element method (DEM) simulations. An artificial neural network (ANN) was therefore adopted, aiming to reproduce the stress-strain-volumetric response when the soil properties are given. 31 biaxial shearing tests with varying soil parameters (e.g., initial void ratio and interparticle friction coefficient) were generated using DEM simulations. Based on these 45 sets of training data, a three-layer neural network was established which can output the entire stress-strain-volumetric curve of the shearing process from the input soil parameters. Beyond the training data, two additional data sets were generated to examine the validity of the network, and the stress-strain-volumetric curves for both cases were well reproduced by the network. Overall, the ANN was found promising in predicting soil behavior and reducing repetitive simulation work.
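
A small sketch of the surrogate idea follows, mapping two soil parameters to a discretized response curve with a single-hidden-layer network; the synthetic curves and network settings are assumptions, not the DEM data or the authors' architecture details.

```python
# A minimal sketch of an ANN surrogate: soil parameters in, a discretized
# response curve out; the "DEM" curves below are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_tests, n_points = 31, 50
params = rng.uniform([0.6, 0.3], [0.9, 0.7], size=(n_tests, 2))  # e0, mu
strain = np.linspace(0, 0.1, n_points)
# Fake stress curves: stiffer response for denser, more frictional samples.
curves = np.array([(0.5 + p[1]) * (1 - np.exp(-strain / (0.02 * p[0])))
                   for p in params])

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000, random_state=0)
model.fit(params, curves)                  # one output per strain level
pred = model.predict([[0.75, 0.5]])        # unseen parameter set
print(pred.shape)                          # (1, 50): the whole curve
```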

Keywords: artificial neural network, discrete element method, soil properties, stress-strain-volumetric response

Procedia PDF Downloads 370
21825 Digital Mapping as a Tool for Finding Cities' DNA

Authors: Sanja Peter

Abstract:

Transformation of urban environments can be compared to evolutionary processes. Systematic digital mapping of historical data can enable capturing some of these processes and their outcomes; for example, it may help reveal the structure of a city's historical DNA. Gathering historical data for automatic processing may provide a basis for cultural algorithms. The Gothenburg City Museum is trying to make the city's heritage information accessible through GIS platforms and is now partnering with academic institutions to find appropriate methods to make knowledge of the city's historical fabric accessible. Hopefully, this will be carried out through a project called Digital Twin Cities, and one part of this large project, concerning matters of cultural heritage, will be a collaboration with Chalmers University of Technology. The aim is to create a layered map showing the historical development of the city and to extract quantitative data about its built heritage, above and below ground. It will allow interpreting the information from historic maps through, for example, the names of streets and places, geography, structural changes in the urban fabric, and information gathered by archaeologists' excavations. Through the study of these geographical, historical and local metamorphoses, the urban environment will reveal its metaphorical DNA, or its meme (Dawkins).

Keywords: Gothenburg, mapping, cultural heritage, city history

Procedia PDF Downloads 115
21824 Theoretical Studies on the Formation Constant, Geometry, Vibrational Frequencies and Electronic Properties of Dinuclear Molybdenum Complexes

Authors: Mahboobeh Mohadeszadeh, Behzad Padidaran Moghaddam

Abstract:

In order to measure the formation constant of dinuclear molybdenum complexes, the reactants and the products were first optimized separately, and then their frequencies were calculated. Next, using Hartree-Fock (HF) and density functional theory (DFT) methods, theoretical studies of the geometrical parameters, electronic properties and vibrational frequencies of the dinuclear molybdenum complex [C40H44Mo2N2O20] were carried out. These calculations were performed with the B3LYP, BPV86, B3PW91 and HF methods using the LANL2DZ (for Mo) + 6-311G (for the other atoms) basis sets. To estimate the error between the theoretical and experimental data, R-square, standard error and RMS values were computed; according to these parameters, the DFT methods agree more closely with the experimental data than the HF method. In addition, from the electronic structure of the compounds, the percentage contributions of the atomic orbitals to the molecular orbitals, the atomic charges, the resulting stabilization energies, and the HOMO and LUMO orbital energies were obtained.

Keywords: geometrical parameters, hydrogen bonding, electronic properties, vibrational frequencies

Procedia PDF Downloads 240
21823 Estimation of Human Absorbed Dose Using Compartmental Model

Authors: M. Mousavi-Daramoroudi, H. Yousefnia, F. Abbasi-Davani, S. Zolghadri

Abstract:

Dosimetry is an indispensable and valuable part of patient treatment planning, used to minimize the absorbed dose in vital tissues. In this study, a compartmental model was used to estimate the human absorbed dose of ¹⁷⁷Lu-DOTATOC from biodistribution data in wild-type rats. For this purpose, ¹⁷⁷Lu-DOTATOC was prepared under optimized conditions and its biodistribution was studied in male Syrian rats for up to 168 h. The compartmental model was applied to describe the drug's behaviour in tissue mathematically at different times. Dosimetric estimation of the complex was performed using the radiation absorbed dose assessment resource (RADAR) method. The biodistribution data showed high accumulation in the adrenal glands and pancreas, the major expression sites for the somatostatin receptor (SSTR). While the kidneys, as the major route of excretion, receive 0.037 mSv/MBq, the pancreas and adrenal glands receive 0.039 and 0.028 mSv/MBq, respectively. With this method, the number of accumulated-activity data points was increased and further information on tissue uptake was collected, which leads to improved precision in the dosimetric calculations.
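
A toy version of such a compartmental model is sketched below: coupled first-order transfer between blood and tissue, integrated to give the time-activity curves whose areas drive the absorbed dose; all rate constants are invented, not the fitted values of the study.

```python
# A minimal two-compartment sketch of the kinetic modeling step; the
# transfer rates are illustrative assumptions, not the study's values.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

k_in, k_out, k_clear = 0.8, 0.05, 0.1     # 1/h, assumed transfer rates

def model(t, y):
    blood, tissue = y
    dblood = -(k_in + k_clear) * blood + k_out * tissue
    dtissue = k_in * blood - k_out * tissue
    return [dblood, dtissue]

t_eval = np.linspace(0, 168, 337)          # 168 h follow-up, as in the study
sol = solve_ivp(model, (0, 168), [1.0, 0.0], t_eval=t_eval)

# The area under the tissue time-activity curve (cumulated activity)
# is what the dose calculation integrates.
print(f"tissue AUC ~ {trapezoid(sol.y[1], t_eval):.2f} (fraction x h)")
```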

Keywords: compartmental modeling, human absorbed dose, ¹⁷⁷Lu-DOTATOC, Syrian rats

Procedia PDF Downloads 175
21822 Extracting Attributes for Twitter Hashtag Communities

Authors: Ashwaq Alsulami, Jianhua Shao

Abstract:

Various organisations often need to understand discussions on social media, such as what the trending topics are and the characteristics of the people engaged in the discussion. A number of approaches have been proposed to extract attributes that characterise a discussion group. However, these approaches are largely based on supervised learning, and as such they require a large amount of labelled data. In this paper we propose an approach that does not require labelled data but instead relies on lexical sources to detect meaningful attributes for online discussion groups. Our findings show an acceptable level of accuracy in detecting attributes for Twitter discussion groups.

Keywords: attributed community, attribute detection, community, social network

Procedia PDF Downloads 131
21821 A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery

Authors: Yongquan Zhao, Bo Huang

Abstract:

Spatial, temporal, and spectral resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, no single satellite sensor can provide Earth observations with high STSR simultaneously, because of the hardware limitations of satellite sensors. At the same time, the demand for high STSR has been growing as remote sensing applications develop. Although image fusion technology provides a feasible means to overcome the limitations of current Earth observation data, existing fusion technologies cannot enhance all three resolutions simultaneously or provide a sufficient level of resolution improvement. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with high STSR simultaneously, which blends the high spatial resolution of the panchromatic image of the Landsat-8 Operational Land Imager (OLI), the high temporal resolution of the multi-spectral image of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution of the hyper-spectral image of Hyperion to produce high-STSR images. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. A set of test data with both phenological and land cover type changes in a suburban area of Beijing, China, is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM can produce fused images with good spatial and spectral fidelity to the reference image, which means it has the potential to generate synthetic data to support studies that require high-STSR satellite imagery.
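
Of the techniques listed in the keywords, the least-squares spectral transformation is the easiest to sketch: learn a linear band mapping on co-located pixels and apply it to synthesize hyperspectral-like spectra; the band counts and pixel values below are simulated, not OLI/Hyperion data.

```python
# A minimal sketch of a least-squares spectral transformation between a
# multispectral and a hyperspectral sensor; all values are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_ms, n_hs = 5000, 6, 50
true_map = rng.normal(size=(n_ms, n_hs))

ms = rng.uniform(0, 1, size=(n_pixels, n_ms))                 # OLI-like bands
hs = ms @ true_map + 0.01 * rng.normal(size=(n_pixels, n_hs)) # Hyperion-like

# Least-squares fit of hs ~ ms on the co-located training pixels.
W, *_ = np.linalg.lstsq(ms, hs, rcond=None)

ms_new = rng.uniform(0, 1, size=(10, n_ms))    # pixels to synthesize
hs_pred = ms_new @ W
print(hs_pred.shape)                           # (10, 50) synthetic spectra
```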

Keywords: hybrid spatial-temporal-spectral fusion, high resolution synthetic imagery, least square regression, sparse representation, spectral transformation

Procedia PDF Downloads 210
21820 Using Building Information Modelling to Mitigate Risks Associated with Health and Safety in the Construction and Maintenance of Infrastructure Assets

Authors: Mohammed Muzafar, Darshan Ruikar

Abstract:

BIM, an acronym for Building Information Modelling, refers to the practice of creating a computer-generated model capable of displaying the planning, design, construction and operation of a structure. The resulting simulation is a data-rich, object-oriented, intelligent and parametric digital representation of the facility, from which views and data appropriate to various users' needs can be extracted and analysed to generate information that can be used to make decisions and improve the process of delivering the facility. BIM also refers to a shift in culture that will influence the way the built environment and infrastructure operate and how they are delivered. One of the main issues of concern in the UK construction industry at present is its record on health and safety (H&S); it is, therefore, important that new technologies such as BIM are developed to help improve it. Historically, the H&S record of the construction industry in the UK has been relatively poor compared to the manufacturing industries. BIM and the digital environment it operates within now allow us to use design and construction data in a more intelligent way, letting data generated by the design process be re-purposed to improve efficiency in other areas of a project. This evolutionary step in design is creating exciting opportunities not only for the designers themselves but for every stakeholder in any given project: from designers, engineers and contractors through to H&S managers, BIM is accelerating a cultural change. The paper introduces the concept behind a research project that mitigates the H&S risks associated with the construction, operation and maintenance of assets through the adoption of BIM.

Keywords: building information modeling, BIM levels, health, safety, integration

Procedia PDF Downloads 227
21819 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera

Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl

Abstract:

Conventional contact-based vital sign monitoring sensors, such as pulse oximeters or electrocardiogram (ECG) sensors, may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Remote monitoring of vital signs is therefore desirable in both clinical and non-clinical settings to overcome these issues. Camera-based vital sign monitoring is a recent technology for such applications, with many positive attributes; however, camera-based studies on neonates in clinical settings are still limited. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants in the Neonatal Intensive Care Unit (NICU) at Flinders Medical Centre were remotely monitored using a digital camera, applying color-based and motion-based computational methods. The region of interest (ROI) was selected efficiently by incorporating an image decomposition method. Spatial averaging, spectral analysis, band-pass filtering, and peak detection were then used to extract both HR and RR. The experimental results were validated against ground-truth data obtained from an ECG monitor and showed strong correlations, with Pearson correlation coefficients (PCC) of 0.9794 and 0.9412 for HR and RR, respectively. The RMSE between the camera-based and ECG data was 2.84 beats/min for HR and 2.91 breaths/min for RR. A Bland-Altman analysis also showed close agreement between the two data sets, with mean biases of 0.60 beats/min and 1 breath/min and limits of agreement of -4.9 to +6.1 beats/min and -4.4 to +6.4 breaths/min for HR and RR, respectively. Video camera imaging may therefore replace conventional contact-based monitoring in the NICU and has potential applications in other contexts, such as home health monitoring.
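
The color-based signal chain summarized above (spatial averaging, band-pass filtering, peak detection) can be sketched as follows; the synthetic trace, filter band, and rates are assumptions for illustration, not clinical data.

```python
# A minimal sketch of HR extraction from a spatially averaged ROI trace:
# band-pass filter a plausible neonatal HR band, then detect peaks.
# The input signal is synthetic, not clinical data.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 30.0                                  # camera frame rate, Hz
t = np.arange(0, 30, 1 / fs)
hr_hz = 140 / 60                           # simulated neonatal HR: 140 bpm
rng = np.random.default_rng(0)
signal = 0.02 * np.sin(2 * np.pi * hr_hz * t) + 0.1 * rng.normal(size=t.size)

# Band-pass 1.5-4 Hz (90-240 bpm), an assumed neonatal HR band.
b, a = butter(3, [1.5 / (fs / 2), 4.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, signal)

peaks, _ = find_peaks(filtered, distance=fs / 4)   # beats >= 0.25 s apart
hr_bpm = 60 * (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
print(f"estimated HR ~ {hr_bpm:.0f} bpm")
```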

Keywords: neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition

Procedia PDF Downloads 88
21818 A Study on Sentiment Analysis Using Various ML/NLP Models on Historical Data of Indian Leaders

Authors: Sarthak Deshpande, Akshay Patil, Pradip Pandhare, Nikhil Wankhede, Rushali Deshmukh

Abstract:

Sentiment analysis is among the most significant tasks for any language and a key area of NLP that has recently made impressive strides. Several models and datasets are available for this task in popular, commonly used languages such as English, Russian, and Spanish. While sentiment analysis research is performed extensively for such languages, it lags behind for low-resource regional languages such as Hindi and Marathi. Marathi is one of the languages included in the 8th Schedule of the Indian Constitution; it is the third most widely spoken language in the country and is primarily spoken in the Deccan region, which encompasses Maharashtra and Goa. There is not sufficient study of sentiment analysis methods for Marathi text, due to the lack of available resources and information. This project therefore proposes the use of different ML/NLP models for the analysis of Marathi data from comments below YouTube content, tweets, and Instagram posts. We aim to achieve a short and precise analysis and summary of the related data using our dataset (dates, names, root words) and lexicons to locate exact information.
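
As a minimal illustration of the lexicon-based approaches named in the keywords, the sketch below scores tokenized comments against a tiny polarity lexicon; the transliterated Marathi entries are purely hypothetical, not the project's resource.

```python
# A minimal lexicon-based sentiment sketch; the lexicon entries are
# hypothetical transliterations, not the project's actual resource.
sentiment_lexicon = {
    "chaan": +1,      # "nice"
    "sundar": +1,     # "beautiful"
    "vaait": -1,      # "bad"
    "bhayanak": -1,   # "terrible"
}

def score(tokens):
    """Sum lexicon polarities over the tokens of a comment."""
    return sum(sentiment_lexicon.get(tok, 0) for tok in tokens)

comments = [
    ["video", "khup", "chaan", "aahe"],   # hypothetical tokenized comments
    ["he", "bhayanak", "aahe"],
]
for c in comments:
    s = score(c)
    label = "positive" if s > 0 else "negative" if s < 0 else "neutral"
    print(" ".join(c), "->", label)
```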

Keywords: multilingual sentiment analysis, Marathi, natural language processing, text summarization, lexicon-based approaches

Procedia PDF Downloads 41