Search results for: panel data analysis
40560 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services
Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme
Abstract:
Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, and user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing
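As an illustration of the entity detection and disambiguation step described above, the sketch below resolves an ambiguous surface form by overlapping context words with per-candidate cue sets. The tiny knowledge base, function names, and example text are hypothetical; the abstract does not disclose the authors' actual implementation.

```python
# Minimal sketch of dictionary-based entity detection with context-driven
# disambiguation. The knowledge base below is invented for illustration.
KNOWN_ENTITIES = {
    "apple": [("Apple Inc.", {"shares", "earnings"}),
              ("apple (fruit)", {"juice", "orchard"})],
}

def detect_and_disambiguate(text):
    """Return (surface_form, resolved_entity) pairs found in the text."""
    tokens = text.lower().split()
    results = []
    for i, tok in enumerate(tokens):
        if tok in KNOWN_ENTITIES:
            context = set(tokens[max(0, i - 3):i + 4])  # +/- 3-token window
            # Pick the candidate whose cue words overlap most with the window.
            best = max(KNOWN_ENTITIES[tok], key=lambda cand: len(cand[1] & context))
            results.append((tok, best[0]))
    return results

matches = detect_and_disambiguate("Apple reported record earnings and rising shares")
```

A production system would replace the dictionary with learned models, but the overlap-scoring structure is the same.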
Procedia PDF Downloads 113
40559 Study on Pedestrian Street Reconstruction under Comfortable Continuous View: Take the Walking Streets of Zhengzhou City as an Example
Authors: Liu Mingxin
Abstract:
Streets act as the organizers of each image element on the urban spatial route, and the spatial continuity of urban streets is the basis for people to perceive the overall image of the city. This paper takes the walking space of Zhengzhou city as the research object, conducts investigation and analysis through questionnaire interviews, and selects typical walking spaces for in-depth study. Through the analysis of questionnaire data, investigation of the current situation of walking space, and analysis of pedestrian psychological behavior, the paper summarizes suggestions for the continuity of urban walking space from three aspects: the composition of the walking street, the bottom and side interfaces, and the service facilities of the walking space. Walking space is not only traffic space but also a space of comfortable experience and continuity.
Keywords: walking space, spatial continuity, walking psychology, space reconstruction
Procedia PDF Downloads 47
40558 De-Novo Structural Elucidation from Mass/NMR Spectra
Authors: Ismael Zamora, Elisabeth Ortega, Tatiana Radchenko, Guillem Plasencia
Abstract:
The structure elucidation of unknown substances based on Mass Spectrometry (MS) data is an unresolved problem that affects many different fields of application. A recent overview of software available for structure elucidation of small molecules has shown the demand for an efficient computational tool able to perform structure elucidation of unknown small molecules and peptides. We developed an algorithm for De-Novo fragment analysis based on MS data that proposes a set of scored and ranked structures compatible with the MS and MSMS spectra. Several different algorithms were developed depending on the initial set of fragments and the structure building process. In all cases, several scores for the final molecule ranking were computed. They were validated with small and middle-sized databases (DB) on an eleven-compound test set. Similar results were obtained from any of the databases that contained the fragments of the expected compound. In summary, we present an algorithm for De-Novo fragment analysis based on mass spectrometry data only, which proposes a set of scored and ranked structures; it was validated on different types of databases and showed good results as a proof of concept. Moreover, the structures proposed from mass spectrometry were submitted to NMR spectra prediction in order to elucidate which of them was compatible with the collected NMR spectra.
Keywords: De Novo, structure elucidation, mass spectrometry, NMR
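The scoring and ranking of candidate structures against observed MSMS peaks can be sketched as a tolerance-based matching score; the tolerance, fragment masses, and candidate names below are invented for illustration and are not the validated algorithm itself.

```python
# Hedged sketch: rank candidate structures by the fraction of observed
# MS/MS fragment masses each one explains within a mass tolerance.
def score_candidate(theoretical_fragments, observed_mz, tol=0.01):
    """Fraction of observed peaks explained by the candidate's fragments."""
    explained = sum(
        any(abs(obs - theo) <= tol for theo in theoretical_fragments)
        for obs in observed_mz
    )
    return explained / len(observed_mz)

# Invented observed spectrum and candidate fragment lists.
observed = [77.04, 105.03, 122.06]
candidates = {
    "structure_A": [77.039, 105.034, 122.061],
    "structure_B": [91.054, 119.049],
}
ranking = sorted(candidates,
                 key=lambda c: score_candidate(candidates[c], observed),
                 reverse=True)
```

Real scoring functions weigh peak intensities and fragmentation plausibility as well, but the match-within-tolerance core is the same idea.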
Procedia PDF Downloads 295
40557 Implementation of a Low-Cost Instrumentation for an Open Cycle Wind Tunnel to Evaluate Pressure Coefficient
Authors: Cristian P. Topa, Esteban A. Valencia, Victor H. Hidalgo, Marco A. Martinez
Abstract:
Wind tunnel experiments for aerodynamic profiles display numerous advantages, such as clean steady laminar flow, controlled environmental conditions, streamline visualization, and real data acquisition. However, the experimental instrumentation is usually expensive, and hence each test implies an increase in design cost. The aim of this work is to select and implement a low-cost static pressure data acquisition system for a NACA 2412 airfoil in an open cycle wind tunnel. This work compares the wind tunnel experiment with a Computational Fluid Dynamics (CFD) simulation and a parametric analysis. The experiment was evaluated at a Reynolds number of 1.65×10⁵, at angles of attack from -5° to 15°. The comparison between experiment and CFD shows good accuracy, while the parametric analysis results differ widely from the other methods, which is consistent with the lower accuracy expected of that approach due to its simplicity.
Keywords: wind tunnel, low cost instrumentation, experimental testing, CFD simulation
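The static pressure measurements reduce to the dimensionless pressure coefficient. A minimal sketch of that reduction, with illustrative free-stream values rather than the experiment's actual instrument readings:

```python
def pressure_coefficient(p, p_inf, rho, v_inf):
    """Cp = (p - p_inf) / (0.5 * rho * v_inf**2), the standard definition."""
    return (p - p_inf) / (0.5 * rho * v_inf ** 2)

# Illustrative values: 100 Pa gauge pressure at a tap, sea-level air density,
# 20 m/s free-stream velocity (all assumed, not measured data).
cp = pressure_coefficient(p=101425.0, p_inf=101325.0, rho=1.225, v_inf=20.0)
```

Each pressure tap along the airfoil chord yields one such Cp value, which is what the low-cost acquisition system must resolve.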
Procedia PDF Downloads 180
40556 Synthesis of La0.8Sr0.05Ca0.15Fe0.8Co0.2O3-δ-Ce0.9Gd0.1O1.95 Composite Cathode Material for Solid Oxide Fuel Cell with Lanthanum and Cerium Recycled from Wasted Glass Polishing Powder
Authors: Jun-Lun Jiang, Bing-Sheng Yu
Abstract:
Processing of flat-panel displays generates a huge amount of waste glass polishing powder, with a high concentration of cerium and other elements such as lanthanum. According to current statistics, world consumption of polishing powder is approximately ten thousand tons per year. Nevertheless, waste polishing powder is usually buried or burned. If the lanthanum and cerium compounds in the waste polishing powder could be recycled, this would greatly reduce enterprise costs and implement waste circulation. Cathodes of SOFCs consist principally of rare earth elements such as lanthanum and cerium. In this study, we recycled the lanthanum and cerium from waste glass polishing powder by an acid-solution method and synthesized La0.8Sr0.05Ca0.15Fe0.8Co0.2O3-δ and Gd0.1Ce0.9O2 (LSCCF-GDC) composite cathode material for SOFCs by the glycine-nitrate combustion (GNP) method. The results show that the recovery rates of lanthanum and cerium reached up to 80% and 100%, respectively, under 10 N nitric acid solution within one hour. Comparing the XRD data of the commercial LSCCF-GDC powder and the LSCCF-GDC product synthesized from chemicals, we find that the LSCCF-GDC was successfully synthesized with the recycled La & Ce solution by the GNP method. The effect of adding ammonia was also discussed: the grain size is finer and the recovery rate of the product higher without the addition of ammonia to the solution.
Keywords: glass polishing powder, acid solution, recycling, composite cathodes of solid oxide fuel cell (SOFC), perovskite, glycine-nitrate combustion (GNP) method
Procedia PDF Downloads 272
40555 Enhancing Teachers’ Professional Development Programmes by the Implementation of Flipped Learning Instruction: A Qualitative Study
Authors: Badriah Algarni
Abstract:
The pedagogy of ‘flipped learning’ is a form of blended instruction that is gaining widespread attention throughout the world. However, there is a lack of research concerning teachers’ professional development (TPD) delivered in a flipped format. The aim of this study was, therefore, to identify teachers’ perspectives on their experience of flipped PD. The study used a qualitative approach. Purposive sampling recruited nineteen teachers who participated in semi-structured, in-depth interviews. Thematic analysis was used to analyse the interview data. Overall, the teachers reported feeling more confident in their knowledge and skills after participating in flipped TPD. The analysis of the interview data revealed five overarching themes: 1) increased engagement with the content; 2) better use of resources; 3) a social, collaborative environment; 4) exchange of practices and experiences; and 5) valuable online activities. These findings can encourage educators, policymakers, and trainers to consider flipped TPD as a form of PD to promote the building of teachers’ knowledge and stimulate reflective practices to improve teaching and learning.
Keywords: engagement, flipped learning, teachers’ professional development, collaboration
Procedia PDF Downloads 96
40554 An Investigation on the Sandwich Panels with Flexible and Toughened Adhesives under Flexural Loading
Authors: Emre Kara, Şura Karakuzu, Ahmet Fatih Geylan, Metehan Demir, Kadir Koç, Halil Aykul
Abstract:
Material selection in the design of sandwich structures is a crucial aspect because of the positive or negative influence of the base materials on the mechanical properties of the entire panel. The literature shows that the selection of the skin and core materials plays a very important role in the behavior of the sandwich. Besides this, the use of the correct adhesive can make the whole structure show better mechanical results and behavior. Accordingly, the sandwich structures realized in this study were obtained by combining an aluminum foam core and three different glass fiber reinforced polymer (GFRP) skins using two different commercial adhesives, based on flexible polyurethane and toughened epoxy. Static and dynamic tests have previously been applied to sandwiches with different types of adhesives. In the present work, static three-point bending tests were performed on sandwiches having an aluminum foam core with a thickness of 15 mm, skins with three different types of fabric ([0°/90°] cross-ply E-Glass Biaxial stitched, [0°/90°] cross-ply E-Glass Woven and [0°/90°] cross-ply S-Glass Woven, all with the same thickness of 1.75 mm), and the two commercial adhesives (flexible polyurethane and toughened epoxy based), at different support span distances (L = 55, 70, 80, 125 mm), with the aim of analysing their flexural performance. The skins were produced via the Vacuum Assisted Resin Transfer Molding (VARTM) technique and were easily bonded onto the aluminum foam core with the flexible and toughened adhesives under very low pressure, using a press machine with alignment tabs matching the total thickness of the whole panel. The main results of the flexural loading are: force-displacement curves obtained after the bending tests, peak force values, absorbed energy, collapse mechanisms, adhesion quality, and the effect of support span length and adhesive type.
The experimental results showed that the sandwiches with the epoxy-based toughened adhesive and skins made of S-Glass Woven fabric exhibited the best adhesion quality and mechanical properties. The sandwiches with the toughened adhesive exhibited higher peak force and energy absorption values than the sandwiches with the flexible adhesive. The core shear mode occurred in the sandwiches with the flexible polyurethane-based adhesive through the thickness of the core, while the same mode took place in the sandwiches with the toughened epoxy-based adhesive along the length of the core. The use of these sandwich structures can lead to a weight reduction of transport vehicles while providing adequate structural strength under operating conditions.
Keywords: adhesive and adhesion, aluminum foam, bending, collapse mechanisms
Procedia PDF Downloads 328
40553 Film Therapy on Adolescent Body Image: A Pilot Study
Authors: Sonia David, Uma Warrier
Abstract:
Background: Film therapy is the use of commercial or non-commercial films to enhance healing for therapeutic purposes. Objectives: This mixed-method study aims to evaluate the effect of film-based counseling on body image dissatisfaction among adolescents and to ascertain the cause of any alteration in body image dissatisfaction due to the intervention. Method: The study used a one-group pre-test post-test design, with inferential statistics and thematic analysis, conducted on 44 school-going adolescents aged between 13 and 17. The Body Shape Questionnaire (BSQ-34) was used as the pre-test and post-test measure. The film-based counseling intervention model was delivered through individual counseling sessions. A paired-sample t-test was used to examine the data quantitatively, and thematic analysis was used to evaluate the qualitative data. Findings: The results indicated a significant difference between the pre-test and post-test means. Since t(44) = 9.042 is significant at a 99% confidence level, it is ascertained that the film-based counseling intervention reduces body image dissatisfaction. The five distinct themes from the thematic analysis are “acceptance, awareness, empowered to change, empathy, and reflective.” Novelty: The paper originally contributes to the repertoire of research on film therapy as a successful counseling intervention for addressing the challenges of body image dissatisfaction. This study also opens avenues for altering teaching pedagogy to include video-based learning in various subjects.
Keywords: body image dissatisfaction, adolescents, film-based counseling, film therapy, acceptance and commitment therapy
Procedia PDF Downloads 294
40552 Comparison of Extracellular miRNA from Different Lymphocyte Cell Lines and Isolation Methods
Authors: Christelle E. Chua, Alicia L. Ho
Abstract:
The development of a panel of differential gene expression signatures has been of interest in the field of biomarker discovery for radiation exposure. In the absence of the availability of exposed human subjects, lymphocyte cell lines have often been used as a surrogate to human whole blood, when performing ex vivo irradiation studies. The extent of variation between different lymphocyte cell lines is currently unclear, especially with regard to the expression of extracellular miRNA. This study compares the expression profile of extracellular miRNA isolated from different lymphocyte cell lines. It also compares the profile of miRNA obtained when different exosome isolation kits are used. Lymphocyte cell lines were created using lymphocytes isolated from healthy adult males of similar racial descent (Chinese American and Chinese Singaporean) and immortalised with Epstein-Barr virus. The cell lines were cultured in exosome-free cell culture media for 72h and the cell culture supernatant was removed for exosome isolation. Two exosome isolation kits were used. Total exosome isolation reagent (TEIR, ThermoFisher) is a polyethylene glycol (PEG)-based exosome precipitation kit, while ExoSpin (ES, Cell Guidance Systems) is a PEG-based exosome precipitation kit that includes an additional size exclusion chromatography step. miRNA from the isolated exosomes were isolated using miRNEASY minikit (Qiagen) and analysed using nCounter miRNA assay (Nanostring). Principal component analysis (PCA) results suggested that the overall extracellular miRNA expression profile differed between the lymphocyte cell line originating from the Chinese American donor and the cell line originating from the Chinese Singaporean donor. As the gender, age and racial origins of both donors are similar, this may suggest that there are other genetic or epigenetic differences that account for the variation in extracellular miRNA gene expression in lymphocyte cell lines. 
However, statistical analysis showed that only 3 miRNA genes had a fold difference > 2 at p < 0.05, suggesting that the differences may not be great enough to impact overall conclusions drawn from different cell lines. Subsequent analysis using cell lines from other donors will give further insight into the reproducibility of results when different cell lines are used. PCA results also suggested that the method of exosome isolation impacted the expression profile: 107 miRNA had a fold difference > 2 at p < 0.05. This suggests that the inclusion of the additional size exclusion chromatography step altered the subset of extracellular vesicles that were isolated. In conclusion, these results suggest that extracellular miRNA can be isolated and analysed from exosomes derived from lymphocyte cell lines. However, care must be taken in the choice of cell line and the method of exosome isolation used.
Keywords: biomarker, extracellular miRNA, isolation methods, lymphocyte cell line
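The differential-expression screen used above (fold difference > 2 at p < 0.05) can be sketched as a simple filter over per-gene statistics; the miRNA names, fold changes, and p-values below are illustrative, not the study's results.

```python
def differential(genes, fold_cut=2.0, p_cut=0.05):
    """Keep genes whose fold change exceeds the cutoff at significance p.
    `genes` maps gene name -> (fold_change, p_value)."""
    return [g for g, (fold, p) in genes.items()
            if fold > fold_cut and p < p_cut]

# Invented example: only miR-21 passes both the fold and significance cuts.
results = {"miR-21": (3.1, 0.01),
           "miR-155": (1.4, 0.02),
           "miR-16": (2.5, 0.20)}
hits = differential(results)
```

In practice the p-values would come from per-gene tests with multiple-testing correction; the filter itself is unchanged.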
Procedia PDF Downloads 199
40551 Anxiety and Depression in Caregivers of Autistic Children
Authors: Mou Juliet Rebeiro, S. M. Abul Kalam Azad
Abstract:
This study was carried out to examine anxiety and depression in caregivers of autistic children. The objectives of the research were to assess depression and anxiety among caregivers of autistic children and to explore the experiences of caregivers. For this purpose, the research was conducted on a sample of 39 caregivers of autistic children. Participants were recruited from a special school. To collect data, each caregiver was administered a questionnaire comprising scales to measure anxiety and depression, and some responses were collected through interviews based on a topic guide. The quantitative data were analyzed using statistical analysis, and the qualitative data were analyzed thematically. The mean anxiety score (55.85) and depression score (108.33) were above the cutoff points. Results showed that anxiety and depression are clinically present in caregivers of autistic children. Most of the caregivers experienced behavioral, emotional, cognitive, and social problems in their child that are linked with anxiety and depression.
Keywords: anxiety, autism, caregiver, depression
Procedia PDF Downloads 303
40550 Evaluating Models Through Feature Selection Methods Using Data Driven Approach
Authors: Shital Patil, Surendra Bhosale
Abstract:
Cardiac diseases, the leading cause of mortality and morbidity in the world, have over recent decades emerged as the most life-threatening disorder globally, accounting for a large number of deaths. Machine learning and artificial intelligence have been playing a key role in predicting heart diseases. A relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance with both raw (unbalanced) and sampled (balanced) datasets. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are used: data analysis, minimum Redundancy maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and Chi-squared. These methods are tested with 8 different classification models to get the best accuracy possible. Using balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics in accurately predicting heart disease. Experimental results obtained by the proposed method with the raw data reach a maximum AUC of 100%, maximum F1 score of 94%, maximum Recall of 98%, and maximum Precision of 93%, while with the balanced dataset the results are a maximum AUC of 100%, F1-score of 95%, maximum Recall of 95%, and maximum Precision of 97%.
Keywords: cardiovascular diseases, machine learning, feature selection, SMOTE
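Of the four feature selection methods, the Chi-squared criterion is the simplest to sketch: for a binary feature and a binary outcome it reduces to the chi-squared statistic of a 2x2 contingency table, and features are ranked by that score. The counts below are invented, not drawn from the Z-Alizadeh Sani dataset.

```python
def chi2_2x2(a, b, c, d):
    """Chi-squared statistic for a 2x2 table [[a, b], [c, d]]
    (e.g. feature present/absent vs. disease present/absent)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Invented counts: 30 diseased patients with the feature, 10 diseased
# without it, 10 healthy with it, 30 healthy without it.
score = chi2_2x2(30, 10, 10, 30)
```

A feature whose distribution is independent of the outcome scores near zero; strongly associated features score high and are retained.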
Procedia PDF Downloads 118
40549 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction
Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan
Abstract:
Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk, in order to reduce the effects of the disease, is very helpful. The present study aimed to collect data related to risk factors of myocardial infarction from patients’ medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, and positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential for facilitating the management of a patient with a specific disease. Health interventions or lifestyle changes can therefore be based on these models to improve the health of individuals at risk.
Keywords: decision trees, neural network, myocardial infarction, data mining
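The evaluation metrics named above (accuracy, sensitivity, specificity, and the positive and negative predictive values) all derive from the confusion matrix. A minimal sketch with invented counts, not the study's results:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Invented counts for a hypothetical classifier on 200 patients.
m = diagnostic_metrics(tp=80, fp=10, fn=20, tn=90)
```

Comparing models by sensitivity, as the study does, rewards the model that misses the fewest true infarction cases.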
Procedia PDF Downloads 429
40548 Selection of Suitable Reference Genes for Assessing Endurance Related Traits in a Native Pony Breed of Zanskar at High Altitude
Authors: Prince Vivek, Vijay K. Bharti, Manishi Mukesh, Ankita Sharma, Om Prakash Chaurasia, Bhuvnesh Kumar
Abstract:
High endurance performance in equids requires adaptive changes involving physio-biochemical and molecular responses in an attempt to regain homeostasis. We hypothesized that identifying suitable reference genes might support the assessment of endurance-related traits in ponies at high altitude. A total of 12 Zanskar pony mares were divided into three groups, group-A (without load), group-B (60 kg backpack load), and group-C (80 kg backpack load), and subjected to a load-carrying protocol on a steep 4 km uphill climb over a gravel, uneven rocky track at an altitude of 3292 m to 3500 m (endpoint). Blood was collected before and immediately after the load carry into sodium heparin anticoagulant, and peripheral blood mononuclear cells were separated for total RNA isolation and thereafter cDNA synthesis. Real-time PCR reactions were carried out to evaluate the mRNA expression profiles of a panel of putative internal control genes (ICGs) belonging to different functional classes, namely glyceraldehyde 3-phosphate dehydrogenase (GAPDH), β₂ microglobulin (β₂M), β-actin (ACTB), ribosomal protein S18 (RS18), hypoxanthine-guanine phosphoribosyltransferase (HPRT), ubiquitin B (UBB), ribosomal protein L32 (RPL32), transferrin receptor protein (TFRC), and succinate dehydrogenase complex subunit A (SDHA), for normalizing the real-time quantitative polymerase chain reaction (qPCR) data of native ponies. Three different algorithms, geNorm, NormFinder, and BestKeeper, were used to evaluate the stability of the reference genes. The results showed that GAPDH was the most stable gene, and the best combination of two genes was TFRC and β₂M.
In conclusion, the geometric mean of GAPDH, TFRC and β₂M might be used for accurate normalization of transcriptional data for assessing endurance-related traits in Zanskar ponies during load carrying.
Keywords: endurance exercise, ubiquitin B (UBB), β₂ microglobulin (β₂M), high altitude, Zanskar ponies, reference gene
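Normalization against the geometric mean of several reference genes, as recommended above, can be sketched as follows. Because Ct values are already on a log2 scale, the geometric mean of reference-gene expression corresponds to the arithmetic mean of their Ct values; all Ct numbers below are invented for illustration, not the study's measurements.

```python
def normalized_expression(target_ct, reference_cts):
    """Relative expression 2^-(Ct_target - mean reference Ct).
    The arithmetic mean of Cts equals the geometric mean of the
    underlying expression levels, since Ct is log2-scaled."""
    ref_ct = sum(reference_cts) / len(reference_cts)
    return 2.0 ** -(target_ct - ref_ct)

# Invented Cts: target gene at Ct 24, three reference genes
# (e.g. GAPDH, TFRC, B2M in the study) at Cts 20, 21, 22.
rel = normalized_expression(target_ct=24.0, reference_cts=[20.0, 21.0, 22.0])
```

Using several stable references, as geNorm recommends, dampens the error that any single drifting reference gene would introduce.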
Procedia PDF Downloads 131
40547 Classification of EEG Signals Based on Dynamic Connectivity Analysis
Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović
Abstract:
In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained on the results of dynamic connectivity analysis between different brain regions are used for classification. The dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods: the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window. It does so by dynamically adjusting the window size using the RICI rule, extracting information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, through this research we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients
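The connectivity measure at the core of RICI-imCPCC, the imaginary part of the complex Pearson correlation coefficient between two analytic signals, can be sketched as below. The adaptive RICI windowing itself is omitted, and the test signals are synthetic, so this is an illustration of the measure rather than the authors' pipeline.

```python
import cmath

def im_cpcc(x, y):
    """Imaginary part of the complex Pearson correlation coefficient
    between two analytic (complex-valued) signals x and y."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((a - mx) * (b - my).conjugate() for a, b in zip(x, y))
    den = (sum(abs(a - mx) ** 2 for a in x)
           * sum(abs(b - my) ** 2 for b in y)) ** 0.5
    return (num / den).imag

# One full cycle of a complex exponential and a 90-degree shifted copy:
# a pure quarter-cycle phase lag drives the imaginary part to its extreme,
# while zero-lag (volume-conduction-like) coupling would give zero.
x = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
y = [1j * a for a in x]
coupling = im_cpcc(x, y)
```

Like the imaginary part of coherency, this quantity suppresses zero-phase-lag coupling, which is why it is attractive for EEG, where volume conduction produces spurious zero-lag correlations.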
Procedia PDF Downloads 214
40546 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data they use is of high quality. This is where the concept of data mesh comes in. Data mesh is an organizational and architectural decentralized approach to data management that can help organizations improve the quality of data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper intends to discuss how a set of elements, including data mesh, are tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata, thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which can help provide clarity in roles and responsibilities and improve data quality. Additionally, data mesh can also contribute to a new form of organization that is more agile and adaptable. 
By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can help improve overall performance through better insights into the business, as an effect of better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, like data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by AEKIDEN's experience feedback. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 135
40545 Development of a Technology Assessment Model by Patents and Customers' Review Data
Authors: Kisik Song, Sungjoo Lee
Abstract:
Recent years have seen an increasing number of patent disputes due to excessive competition in the global market and a reduced technology life-cycle; this has increased the risk of investment in technology development. While many global companies have started developing methodologies to identify promising technologies and assess them for investment decisions, the existing methodology still has some limitations. Post hoc assessments of new technologies are not being performed, especially to determine whether the suggested technologies turned out to be promising. For example, in existing quantitative patent analysis, a patent’s citation information has served as an important metric for quality assessment, but this analysis cannot be applied to recently registered patents because such information accumulates over time. Therefore, we propose a new technology assessment model that can replace citation information and positively affect technological development, based on post hoc analysis of the patents for promising technologies. Additionally, we collect customer reviews on a target technology to extract keywords that reflect customers’ needs, and we determine how many of these keywords are covered by the new technology. Finally, we construct a portfolio (based on a technology assessment from patent information) and a customer-based marketability assessment (based on review data), and we use them to visualize the characteristics of the new technologies.
Keywords: technology assessment, patents, citation information, opinion mining
Procedia PDF Downloads 466
40544 Analysis of Factors Affecting Public Awareness in Paying Zakat
Authors: Roikhan Mochamad Aziz
Abstract:
This study aims to analyze the interdependence of several variables simultaneously in order to reduce the relationships among the variables studied to a smaller number of underlying factors, which also describes the data structure of the research. Based on 100 respondents from the public of South Tangerang, this study used factor analysis. The results indicate that the studied variables form nine factors, namely: faith, community, social care, confidence, income, education, self-satisfaction, work, and knowledge. The total variance explained by the nine factors is 67.30%, meaning that these nine factors account for 67.30% of muzakki's awareness of paying zakat, while the remaining 32.70% is explained by other factors outside the nine.
Keywords: zakat, analysis factor, faith, education, knowledge
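The headline figure in a factor analysis, the share of total variance carried by the retained factors, follows directly from the factor eigenvalues. A minimal sketch with invented eigenvalues, not the study's output (where the nine retained factors explain 67.30%):

```python
def variance_explained(eigenvalues, n_factors):
    """Percent of total variance carried by the n_factors largest factors."""
    retained = sorted(eigenvalues, reverse=True)[:n_factors]
    return 100.0 * sum(retained) / sum(eigenvalues)

# Invented eigenvalues for four candidate factors; retaining the top two
# here accounts for 70% of the total variance.
share = variance_explained([4.0, 3.0, 2.0, 1.0], n_factors=2)
```

The unexplained remainder (the study's 32.70%) is the variance attributed to factors outside the retained set.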
Procedia PDF Downloads 280
40543 The Relationship between Class Attendance and Performance of Industrial Engineering Students Enrolled for a Statistics Subject at the University of Technology
Authors: Tshaudi Motsima
Abstract:
Class attendance is key at all levels of education. At the tertiary level, many students develop a tendency of not attending all classes without being aware of the repercussions. It is important for students to attend all classes, as they receive first-hand information and benefit more. A student who attends classes is likely to perform better academically than one who does not. The aim of this paper is to assess the relationship between class attendance and the academic performance of industrial engineering students. The data for this study were collected through the attendance registers of students, and the other data were accessed from the Integrated Tertiary Software and the Higher Education Data Analyzer Portal. Data analysis was conducted on a sample of 93 students. The results revealed that students with medium predicate scores (OR = 3.8; p = 0.027) and students with low predicate scores (OR = 21.4; p < 0.001) were significantly more likely to attend less than 80% of classes than students with high predicate scores. Students with examination performance below 50% were more likely to attend less than 80% of classes than students with examination performance of 50% and above, but the difference was not statistically significant (OR = 1.3; p = 0.750).
Keywords: class attendance, examination performance, final outcome, logistic regression
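The odds ratios reported above come from a logistic regression; the same quantity can be illustrated for a single 2x2 cross-tabulation of predicate score against attendance. The counts below are hypothetical, chosen only to show how a large OR arises:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table laid out as:

                 attends <80%   attends >=80%
    low score         a              b
    high score        c              d
    """
    return (a * d) / (b * c)

# Hypothetical counts, for illustration only.
print(odds_ratio(30, 5, 10, 25))  # 15.0: low scorers far more likely to skip
```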
Procedia PDF Downloads 133
40542 Sustainable Water Supply: Rainwater Harvesting as Flood Reduction Measures in Ibadan, Nigeria
Authors: Omolara Lade, David Oloke
Abstract:
Ibadan City suffers serious water supply problems; cases of dry taps are common in virtually every part of the city. The scarcity of piped water has driven communities to find alternative water sources, with groundwater being a ready option. These wells are prone to pollution due to the proximity of septic tanks, pits for solid or liquid waste disposal, abandoned boreholes, stream channels, and landfills. Storms and floods in Ibadan have increased, with devastating effects claiming over 120 lives and displacing 600 people in August 2011 alone. In this study, an analysis of the water demand and sources of supply for the city was carried out through a questionnaire survey and the collection of data from the city's main water supplier, the Water Corporation of Oyo State (WCOS); groundwater sources were explored, and 30 years of rainfall data were collected from the meteorological station in Ibadan. A total of 1,067 questionnaires were administered at the household level, with a response rate of 86.7%. A descriptive analysis of the survey revealed that 77.1% of respondents did not receive water at all from WCOS, while 83.8% depend on groundwater sources. Analysis of data from WCOS revealed that the main water supply is inadequate, as less than 10% of the population's water demand was met. Rainfall intensity is highest in June, with a mean value of 188 mm, which can be harvested at the community level and used to complement the population's water demand. Rainwater harvesting, if planned and managed properly, can become a valuable alternative source for managing urban floods and alleviating water scarcity in the city.
Keywords: Ibadan, rainwater harvesting, sustainable water, urban flooding
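The harvestable volume implied by the June rainfall figure can be sketched with the standard depth times catchment area times runoff-coefficient formula. The roof area and runoff coefficient below are assumptions for illustration, not values from the study:

```python
def harvest_volume_m3(rainfall_mm, roof_area_m2, runoff_coeff=0.8):
    """Harvestable volume in cubic metres: rainfall depth (converted to
    metres) times catchment area times the runoff coefficient, i.e. the
    fraction of rain actually captured rather than lost."""
    return (rainfall_mm / 1000.0) * roof_area_m2 * runoff_coeff

# June mean rainfall of 188 mm (from the study) over a hypothetical
# 100 m^2 roof with an assumed 0.8 runoff coefficient.
print(round(harvest_volume_m3(188, 100), 2))  # 15.04 m^3 for that month
```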
Procedia PDF Downloads 182
40541 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data
Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu
Abstract:
Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias correction steps based on biological considerations, such as GC content, applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq
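The alternating estimation of X and β can be illustrated in miniature with a rank-1 bilinear model Y ≈ a bᵀ fitted by alternating least squares: fix one factor, solve the other in closed form, and repeat. This is a toy sketch of the alternating idea only, not the XAEM algorithm:

```python
def als_rank1(Y, iters=5):
    """Toy alternating least squares for a rank-1 bilinear model
    Y ~ a * b^T. Each half-step is a closed-form least-squares update
    of one factor with the other held fixed."""
    n, m = len(Y), len(Y[0])
    b = [1.0] * m
    a = [0.0] * n
    for _ in range(iters):
        sb = sum(x * x for x in b)
        a = [sum(Y[i][j] * b[j] for j in range(m)) / sb for i in range(n)]
        sa = sum(x * x for x in a)
        b = [sum(Y[i][j] * a[i] for i in range(n)) / sa for j in range(m)]
    return a, b

# Noiseless rank-1 data: Y[i][j] = u[i] * v[j]
u, v = [1.0, 2.0, 3.0], [2.0, 0.5, 1.0, 4.0]
Y = [[ui * vj for vj in v] for ui in u]
a, b = als_rank1(Y)
err = max(abs(a[i] * b[j] - Y[i][j]) for i in range(3) for j in range(4))
print(err < 1e-9)  # True: reconstruction is exact for rank-1 input
```

The factors are recovered only up to a shared scale, which is why the check compares the product a[i]*b[j] rather than a and b themselves.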
Procedia PDF Downloads 142
40540 Application of Two Stages Adaptive Neuro-Fuzzy Inference System to Improve Dissolved Gas Analysis Interpretation Techniques
Authors: Kharisma Utomo Mulyodinoto, Suwarno, A. Abu-Siada
Abstract:
Dissolved gas analysis (DGA) is an effective technique to detect and predict internal faults in transformers using the gases dissolved in transformer oil samples. A number of methods are used to interpret the dissolved gases from a transformer oil sample: the Doernenberg ratio method, the IEC (International Electrotechnical Commission) ratio method, and the Duval triangle method. While the assessment of dissolved gases within transformer oil samples has been standardized over the past two decades, analysis of the results is not always straightforward, as it depends on personnel expertise more than on mathematical formulas. To overcome this limitation, this paper aims at improving the interpretation of the Doernenberg ratio, IEC ratio, and Duval triangle methods using a two-stage adaptive neuro-fuzzy inference system (ANFIS). Dissolved gas analysis data from 520 faulty transformers were analyzed to establish the proposed ANFIS model. Results show that the developed ANFIS model is accurate and can standardize the dissolved gas interpretation process with an accuracy higher than 90%.
Keywords: ANFIS, dissolved gas analysis, Doernenberg ratio method, Duval triangle method, IEC ratio method, transformer
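The IEC ratio method mentioned above works from three gas ratios. The sketch below computes those ratios and applies a heavily simplified reading; the real IEC 60599 tables have more fault codes and boundary rules than this toy classifier, and the thresholds here are illustrative assumptions:

```python
def iec_ratios(h2, ch4, c2h6, c2h4, c2h2):
    """The three gas ratios used by the IEC method (ppm inputs):
    R1 = C2H2/C2H4, R2 = CH4/H2, R3 = C2H4/C2H6."""
    return c2h2 / c2h4, ch4 / h2, c2h4 / c2h6

def rough_diagnosis(r1, r2, r3):
    """A much-simplified reading of the ratio codes, for illustration
    only; not a substitute for the full IEC 60599 interpretation table."""
    if r1 > 1:
        return "arcing suspected"
    if r2 < 0.1:
        return "partial discharge suspected"
    if r3 > 1:
        return "thermal fault suspected"
    return "normal / indeterminate"

r1, r2, r3 = iec_ratios(h2=100, ch4=120, c2h6=65, c2h4=50, c2h2=5)
print(rough_diagnosis(r1, r2, r3))
```

Interpretation rules of this kind are exactly the expert judgment the paper's ANFIS model is meant to learn and standardize.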
Procedia PDF Downloads 147
40539 Effects of Wind Load on the Tank Structures with Various Shapes and Aspect Ratios
Authors: Doo Byong Bae, Jae Jun Yoo, Il Gyu Park, Choi Seowon, Oh Chang Kook
Abstract:
There are several wind load provisions for evaluating the wind response of tank structures, such as API and Eurocode. The assessment of wind action applying these provisions is made by performing finite element analysis using both linear bifurcation analysis and geometrically nonlinear analysis. By comparing the pressure patterns obtained from the analysis with the results of wind tunnel tests, the most appropriate wind load criteria will be recommended.
Keywords: wind load, finite element analysis, linear bifurcation analysis, geometrically nonlinear analysis
Procedia PDF Downloads 637
40538 Relating Symptoms with Protein Production Abnormality in Patients with Down Syndrome
Authors: Ruolan Zhou
Abstract:
Trisomy of human chromosome 21 is the primary cause of Down Syndrome (DS), a genetic disease that has significantly burdened families and countries. To address this problem, this research explores the relationship between the genetic abnormality and the disease's symptoms, adopting several techniques, including data analysis and enrichment analysis. It also draws on open-source databases such as NCBI, DAVID, SOURCE, STRING, and UCSC to complement its results. This research analyzed the variety of genes on human chromosome 21 and identified the protein-coding genes, their functions, and their locations. Using enrichment analysis, it found an abundance of keratin-related protein-coding genes on human chromosome 21. Drawing on past research, this study attempts to disclose the relationship between trisomy of human chromosome 21 and keratin production abnormalities, which might underlie common conditions in patients with Down Syndrome. Finally, by addressing the strengths and limitations of this research, the discussion provides specific directions for future research.
Keywords: Down Syndrome, protein production, genome, enrichment analysis
Procedia PDF Downloads 126
40537 Using ALOHA Code to Evaluate CO2 Concentration for Maanshan Nuclear Power Plant
Authors: W. S. Hsu, S. W. Chen, Y. T. Ku, Y. Chiang, J. R. Wang , J. H. Yang, C. Shih
Abstract:
The ALOHA code was used to calculate the CO2 concentration under a CO2 storage burst condition for the Maanshan nuclear power plant (NPP) in this study. Five main categories of data are input into the ALOHA code: location, building, chemical, atmospheric, and source data. Data from the Final Safety Analysis Report (FSAR) and other reports were used. The ALOHA results are compared with the failure criteria of R.G. 1.78 to confirm the habitability of the control room. The comparison shows that the ALOHA result is below the R.G. 1.78 criteria, implying that the habitability of the control room can be maintained in this case. A sensitivity study of the atmospheric parameters was also performed; the results show that wind speed has the largest effect on the concentration calculation.
Keywords: PWR, ALOHA, habitability, Maanshan
Procedia PDF Downloads 198
40536 Delisting Wave: Corporate Financial Distress, Institutional Investors Perception and Performance of South African Listed Firms
Authors: Adebiyi Sunday Adeyanju, Kola Benson Ajeigbe, Fortune Ganda
Abstract:
In the past three decades, there has been a notable increase in the number of firms delisting from the Johannesburg Stock Exchange (JSE) in South Africa. This recent wave of delistings motivated the present study, which aims to explore the influence of institutional investors' perceptions on the financial distress experienced by delisted firms within the South African market. The study further examined the impact of financial distress on the corporate performance of delisted firms. It used data on delisted firms spanning 2000 to 2023, applying FGLS (Feasible Generalized Least Squares) for the short-run effects and PCSE (Panel-Corrected Standard Errors) for the long-run effects of the relationship. The findings indicated that a decline in institutional investors' perceptions was associated with the corporate financial distress of the delisted firms, particularly during the delisting year and the few years preceding the delisting announcement. This study underscores the importance of investor recognition in corporate financial distress and the delisting wave among listed firms, a finding supporting stakeholder theory. It offers insights for company management, investors, governments, policymakers, stockbrokers, lending institutions, bankers, stock markets, and other stakeholders in their decision-making. Based on these findings, it is recommended that corporate management improve governance strategies that can support companies' financial performance. Accountability and transparency through governance must also be improved, with government support through policies and an enabling environment that helps companies perform better.
Keywords: delisting wave, institutional investors, financial distress, corporate performance, investors' perceptions
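The FGLS estimator named above generalizes OLS by re-weighting observations with estimated variances. The sketch below is a one-regressor, two-step textbook version for heteroskedastic errors, not the panel FGLS/PCSE estimators actually used in the study:

```python
def ols_slope(x, y):
    """Ordinary least squares slope for a no-intercept model y = b*x."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def fgls_slope(x, y):
    """Two-step feasible GLS: estimate by OLS first, then re-weight each
    observation by an estimated inverse variance and re-fit."""
    b0 = ols_slope(x, y)
    resid = [yi - b0 * xi for xi, yi in zip(x, y)]
    # Crude variance proxy: squared residual, floored to avoid division by ~0.
    w = [1.0 / max(r * r, 1e-6) for r in resid]
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    den = sum(wi * xi * xi for wi, xi in zip(w, x))
    return num / den

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x with noise
print(round(fgls_slope(x, y), 2))
```

In practice the variance proxy is itself modelled (e.g. regressed on the covariates) rather than taken raw per observation; the floor here is only to keep the toy example numerically safe.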
Procedia PDF Downloads 45
40535 Efficiency Analysis of Trader in Thailand and Laos Border Trade: Case Study of Textile and Garment Products
Authors: Varutorn Tulnawat, Padcharee Phasuk
Abstract:
This paper investigates the issue of China's dumping on border trade between Thailand and Laos. Historically, border trade goods were traditional textiles and garments serving mainly locals and tourists, and the majority of traders were small and medium sized. Today the competition is fierce, and the volume of trade has expanded far beyond its original scope. The major competitors in Thai-Laos border trade are China, Vietnam, and also South Korea. This research measures and compares the efficiency and ability of Thai and Lao firms to survive the onslaught along the Thailand (Nong Khai province) and Laos (Vientiane) border. Two attack strategies are observed: price cutting and incentives, such as full facilitation for large-volume orders. Data Envelopment Analysis (DEA) is applied to data surveyed from 90 Thai and Lao entrepreneurs. The expected results are the proportions of efficient and inefficient firms. Sources of inefficiency and suggested improvements are also discussed.
Keywords: border trade, DEA, textile, garment
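With a single input and a single output, the constant-returns DEA (CCR) score reduces to each firm's output/input ratio relative to the best observed ratio; the general multi-input, multi-output case requires solving a linear program per firm. A minimal sketch with hypothetical trader data:

```python
def ratio_efficiencies(inputs, outputs):
    """Single-input single-output efficiency scores: each unit's
    output/input ratio divided by the best ratio observed, so the
    best-practice unit scores 1.0 and all others score below it."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical traders: labor cost as the input, sales as the output.
inputs = [10.0, 20.0, 15.0]
outputs = [100.0, 160.0, 150.0]
print([round(e, 2) for e in ratio_efficiencies(inputs, outputs)])  # [1.0, 0.8, 1.0]
```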
Procedia PDF Downloads 245
40534 Estimation of Foliar Nitrogen in Selected Vegetation Communities of Uttrakhand Himalayas Using Hyperspectral Satellite Remote Sensing
Authors: Yogita Mishra, Arijit Roy, Dhruval Bhavsar
Abstract:
This study estimates the nitrogen concentration in selected vegetation communities, i.e., chir pine (Pinus roxburghii), using hyperspectral satellite data, and identifies the appropriate spectral bands and nitrogen indices. The shortwave infrared reflectance spectrum at 1790 nm and 1680 nm shows the maximum possible absorption by nitrogen in the selected species. Among the nitrogen indices, the log-normalized nitrogen index performed both positively and negatively. A strong positive correlation was obtained at 1510 nm and 760 nm for Pinus roxburghii for leaf nitrogen concentration and leaf nitrogen mass when using NDNI. The coefficient of determination (R²) from the linear regression reached a maximum of 0.7525 for the satellite image data and 0.547 for the ground truth data for Pinus roxburghii.
Keywords: hyperspectral, NDNI, nitrogen concentration, regression value
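NDNI, used above, is conventionally computed from reflectances at 1510 nm (a nitrogen absorption feature) and 1680 nm (a reference band). A sketch with hypothetical reflectance values, not the study's measurements:

```python
import math

def ndni(r1510, r1680):
    """Normalized Difference Nitrogen Index from reflectances at
    1510 nm and 1680 nm (its standard band pair):
    NDNI = (log10(1/R1510) - log10(1/R1680))
           / (log10(1/R1510) + log10(1/R1680))"""
    a = math.log10(1.0 / r1510)
    b = math.log10(1.0 / r1680)
    return (a - b) / (a + b)

# Hypothetical canopy reflectances, for illustration only.
print(round(ndni(0.18, 0.25), 3))
```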
Procedia PDF Downloads 295
40533 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore
Authors: Qiao-Yu Warren Cai
Abstract:
Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners' language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of the research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can summarize the numerical data for the subjects or samples that the research actually measured, but the collected data cannot be generalized to the population. Comte, a French positivist, argued in the 19th century that human knowledge, whether in the humanities and social sciences or the natural sciences, should be verified scientifically to construct a universal theory explaining truth and human behavior. Inferential statistics, which can judge the probability that a difference observed between groups is dependable rather than caused by chance (Free Geography Notes, 2015) and can infer from samples what the population might think or how it might behave, is the right method to support Comte's argument in the field of TCSOL. Inferential statistics is also a core of quantitative research, but little research has been conducted by combining corpora with inferential statistics. In particular, little research analyzes differences in Chinese L2 learners' corpus output errors using one-way ANOVA, so the findings of previous research are limited to inferring the population's Chinese errors from the given samples' corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study utilizes one-way ANOVA to analyze corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore.
The results show that no significant difference exists in 'shì (是) sentence' and word order errors, but compared with Americans and Singaporeans, learners from Myanmar are significantly more likely to produce 'sentence blends.' Based on these results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can use learning strategies to reduce errors.
Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans
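The one-way ANOVA behind comparisons like these reduces to an F statistic: the between-group mean square over the within-group mean square. A minimal sketch with hypothetical error counts per learner group, not the study's data:

```python
def one_way_anova_f(groups):
    """F statistic for one-way ANOVA: between-group mean square
    (df = k - 1) over within-group mean square (df = n - k)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical error counts per learner group, for illustration only.
america = [3, 4, 5, 4]
myanmar = [8, 9, 7, 8]
singapore = [4, 3, 4, 5]
print(one_way_anova_f([america, myanmar, singapore]))  # large F: groups differ
```

The F value is then compared against the F distribution with (k - 1, n - k) degrees of freedom to obtain the p-value that decides significance.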
Procedia PDF Downloads 106
40532 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
In order to solve memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively alleviate memorization overfitting in the meta-learning MAML algorithm.
Keywords: data augmentation, mutex task generation, meta-learning, text classification
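The core idea, one feature mapping to multiple labels across tasks, can be sketched by relabeling the same inputs with a different label permutation per task, so the label of a given input cannot be memorized across tasks. This is a minimal illustration of the principle, not the paper's exact procedure:

```python
import random

def make_mutex_task(examples, n_labels, seed=0):
    """Relabel the same (input, label) pairs with a random permutation
    of the label set, so the same feature can carry different labels in
    different tasks, making the task mutually exclusive with the
    original data distribution."""
    rng = random.Random(seed)
    perm = list(range(n_labels))
    rng.shuffle(perm)
    return [(x, perm[y]) for x, y in examples]

# Hypothetical text-classification examples, for illustration only.
data = [("cheap phone", 0), ("great camera", 1), ("slow battery", 0)]
task_a = make_mutex_task(data, n_labels=2, seed=1)
task_b = make_mutex_task(data, n_labels=2, seed=2)
print(task_a)
print(task_b)
```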
Procedia PDF Downloads 94
40531 Critically Analyzing the Application of Big Data for Smart Transportation: A Case Study of Mumbai
Authors: Tanuj Joshi
Abstract:
Smart transportation is fast emerging as a solution to modern cities' mobility issues, delayed emergency response rates, and high congestion on streets. The present-day scenario with Google Maps, Waze, Yelp, etc. demonstrates how information and communications technologies control the intelligent transportation system. This intangible and invisible infrastructure is largely guided by big data analytics. On the other side, the exponential increase in India's urban population has intensified the demand for better services and infrastructure to satisfy the transportation needs of its citizens. India's huge internet usage is undoubtedly seen as an important resource for achieving this. However, with a projected number of over 40 billion objects connected to the Internet by 2025, the need for systems to handle massive volumes of data (big data) also arises. This research paper attempts to identify ways of exploiting big data variables that will aid commuters on Indian transport networks. The study explores real-life inputs by conducting surveys and interviews to identify which gaps need to be targeted to better satisfy customers. Several experts at the Mumbai Metropolitan Region Development Authority (MMRDA), Mumbai Metro, and Brihanmumbai Electric Supply and Transport (BEST) were interviewed regarding the information technology (IT) systems currently in use. The interviews give relevant insights into the workings of public transportation systems, whereas the survey investigates the macro situation.
Keywords: smart transportation, mobility issue, Mumbai transportation, big data, data analysis
Procedia PDF Downloads 178