Search results for: biomedical data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25543

23653 SisGeo: Support System for the Research of Georeferenced Comparisons Applied to Professional and Academic Devices

Authors: Bruno D. Souza, Gerson G. Cunha, Michael O. Ferreira, Roberto Rosenhaim, Robson C. Santos, Sergio O. Santos

Abstract:

Devices and applications that use satellite-based positioning are becoming more popular every day, so continuous evolution and improvement of this technology are essential, and georeferenced satellite systems must keep pace. GPS (Global Positioning System) and its Russian counterpart GLONASS (Global Navigation Satellite System) are examples of systems that offer powerful tools for plotting coordinates on the Earth's surface. This research studies several aspects of GPS and GLONASS technologies, focusing on their application and on improving the data collected during geodetic acquisition, considering both theoretical and practical aspects. The theoretical part presents the main characteristics of the two systems, noting their similarities and differences. In the practical part, a series of experiments is performed and the resulting data packages are compared in order to demonstrate their equivalence or differences. The evaluation methodology targets both quantitative and qualitative analysis of the data provided by GPS and GPS/GLONASS receivers. A dedicated storage system for the collected data, SisGeo (Georeferenced Research Comparison Support System), was developed to support this comparison and analysis.

Keywords: satellites, systems, applications, experiments, receivers

Procedia PDF Downloads 255
23652 Redefining Solar Generation Estimation: A Comprehensive Analysis of Real Utility Advanced Metering Infrastructure (AMI) Data from Various Projects in New York

Authors: Haowei Lu, Anaya Aaron

Abstract:

Understanding historical solar generation and forecasting future solar generation from interconnected Distributed Energy Resources (DER) is crucial for utility planning and interconnection studies. The existing methodology, which relies on solar radiation, weather data, and common inverter models, is becoming less accurate. Rapid advancements in DER technologies have resulted in more diverse project sites, deviating from common patterns due to various factors such as DC/AC ratio, solar panel performance, tilt angle, and the presence of DC-coupled battery energy storage systems. In this paper, the authors review 10,000 DER projects within the system and analyze the Advanced Metering Infrastructure (AMI) data for various types to demonstrate the impact of different parameters. An updated methodology is proposed for redefining historical and future solar generation in distribution feeders.

Keywords: photovoltaic system, solar energy, fluctuations, energy storage, uncertainty

Procedia PDF Downloads 32
23651 Analysis of Digital Transformation in Banking: The Hungarian Case

Authors: Éva Pintér, Péter Bagó, Nikolett Deutsch, Miklós Hetényi

Abstract:

The process of digital transformation has a profound influence on all sectors of the worldwide economy and the business environment. The influence of blockchain technology can be observed in the digital economy and e-government, rendering it an essential element of a nation's growth strategy. The banking industry is experiencing the significant expansion and development of financial technology firms: utilizing emerging technologies such as artificial intelligence (AI), machine learning (ML), and big data (BD), these entrants offer more streamlined financial solutions, promptly address client demands, and present a challenge to incumbent institutions. The advantages of digital transformation are evident in the corporate realm, and firms that resist its adoption put their survival at risk. The advent of digital technologies has revolutionized the business environment, streamlining processes and creating opportunities for enhanced communication and collaboration. With the aid of digital technologies, businesses can now swiftly and effortlessly retrieve vast quantities of information while accelerating the creation of new and improved products and services. Big data analytics is widely recognized as a transformative force in business, considered the fourth paradigm of science and seen as the next frontier for innovation, competition, and productivity. Big data, an emerging technology that is shaping the future of the banking sector, offers numerous advantages to banks: it enables them to track consumer behavior effectively and make informed decisions, thereby enhancing their operational efficiency. Banks may embrace big data technologies to identify fraud promptly and efficiently, as well as to gain insights into client preferences, which can then be leveraged to create better-tailored products and services.
Moreover, big data technology empowers banks to develop smarter, more streamlined models for accurately recognizing and targeting the right clientele with pertinent offers. Research on big data analytics in the banking industry is scarce, and the majority of existing studies examine only the advantages and prospects associated with big data. Although big data technologies are crucial, there is a dearth of empirical evidence about the role of big data analytics (BDA) capabilities in bank performance. This research addresses that gap in the literature by introducing a model that combines the resource-based view (RBV), the technology-organization-environment (TOE) framework, and dynamic capability theory (DC), and investigates the influence of BDA utilization on market and risk-management performance, supported by a comparative examination of Hungarian mobile banking services.

Keywords: big data, digital transformation, dynamic capabilities, mobile banking

Procedia PDF Downloads 65
23650 Applying Spanning Tree Graph Theory for Automatic Database Normalization

Authors: Chetneti Srisa-an

Abstract:

In the field of knowledge and data engineering, the relational database is the standard repository for storing real-world data and has been in use around the world for decades. Normalization is the most important process in the analysis and design of relational databases: it aims to create a set of relational tables with minimal data redundancy that preserve consistency and facilitate correct insertion, deletion, and modification. Despite its importance, very few algorithms have been developed for use in commercial automatic normalization tools, and normalization is rarely performed automatically rather than manually. Moreover, for today's large and complex databases, manual normalization is increasingly impractical. This paper presents a new, fully automated relational database normalization method. It first produces a directed graph and its spanning tree, and then generates the 2NF, 3NF, and BCNF normal forms. The benefit of the new algorithm is that it can cope with large sets of complex functional dependencies.
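The core computation behind any automatic normalization tool is the closure of an attribute set under a set of functional dependencies, which also yields a BCNF check. The sketch below is a minimal illustration of that step only, not the authors' directed-graph/spanning-tree algorithm:

```python
def closure(attrs, fds):
    """Closure of an attribute set under functional dependencies (lhs -> rhs)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def bcnf_violations(relation, fds):
    """FDs X -> Y whose left side is not a superkey of the relation."""
    return [(lhs, rhs) for lhs, rhs in fds
            if closure(lhs, fds) != set(relation)]

# R(A, B, C) with A -> B and B -> C: B -> C violates BCNF, since B+ = {B, C}.
fds = [("A", "B"), ("B", "C")]
print(closure("A", fds))
print(bcnf_violations("ABC", fds))
```

A decomposition loop would then split the relation on each reported violation until none remain.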

Keywords: relational database, functional dependency, automatic normalization, primary key, spanning tree

Procedia PDF Downloads 353
23649 Integrating Optuna and Synthetic Data Generation for Optimized Medical Transcript Classification Using BioBERT

Authors: Sachi Nandan Mohanty, Shreya Sinha, Sweeti Sah, Shweta Sharma

Abstract:

The advancement of natural language processing has majorly influenced the field of medical transcript classification, providing a robust framework for enhancing the accuracy of clinical data processing, with enormous potential to transform healthcare and improve people's livelihoods. This research focuses on improving the accuracy of medical transcript categorization using Bidirectional Encoder Representations from Transformers (BERT) and its specialized variants, including BioBERT, ClinicalBERT, SciBERT, and BlueBERT. The experimental work employs Optuna, a hyperparameter optimization framework, to tune each variant and identify the most effective one, concluding that BioBERT yields the best performance. Furthermore, various optimizers, including Adam, RMSprop, and layer-wise adaptive large-batch optimization (LAMB), were evaluated alongside BERT's default AdamW optimizer; the findings show that the LAMB optimizer performs as well as AdamW. Synthetic data generation techniques from Gretel were utilized to augment the dataset, expanding it from 5,000 to 10,000 rows. Subsequent evaluations demonstrated that the model maintained its performance with synthetic data, with the LAMB optimizer showing marginally better results. The enhanced dataset and optimized model configurations improved classification accuracy, showcasing the efficacy of the BioBERT variant and the LAMB optimizer and resulting in accuracies of up to 98.2% and 90.8% for the original and combined datasets, respectively.
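Optuna-style tuning revolves around an objective function that receives a trial object and returns a validation score (in Optuna itself, `optuna.create_study(direction="maximize").optimize(objective, n_trials=...)`). Since fine-tuning BioBERT is out of scope here, the sketch below is a minimal random-search stand-in for that define-by-run loop over a toy objective; the parameter names and score surrogate are illustrative assumptions, not the study's search space:

```python
import math
import random

# Minimal random-search stand-in for Optuna's trial/objective pattern.
class Trial:
    def __init__(self):
        self.params = {}

    def suggest_float(self, name, low, high, log=False):
        if log:
            value = math.exp(random.uniform(math.log(low), math.log(high)))
        else:
            value = random.uniform(low, high)
        self.params[name] = value
        return value

    def suggest_categorical(self, name, choices):
        value = random.choice(choices)
        self.params[name] = value
        return value

def optimize(objective, n_trials):
    """Return (best_score, best_params) over n_trials random trials."""
    best = None
    for _ in range(n_trials):
        trial = Trial()
        score = objective(trial)
        if best is None or score > best[0]:
            best = (score, trial.params)
    return best

def objective(trial):
    # Toy surrogate for validation accuracy; a real study would
    # fine-tune BioBERT with these hyperparameters and return its score.
    lr = trial.suggest_float("lr", 1e-5, 1e-3, log=True)
    opt = trial.suggest_categorical("optimizer", ["AdamW", "LAMB"])
    return -abs(lr - 3e-5) + (0.01 if opt == "LAMB" else 0.0)

random.seed(0)
best_score, best_params = optimize(objective, n_trials=50)
print(best_params)
```

Optuna adds smarter samplers (e.g. TPE) and pruning on top of this same trial/objective interface.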

Keywords: BioBERT, clinical data, healthcare AI, transformer models

Procedia PDF Downloads 2
23648 Producing Outdoor Design Conditions Based on the Dependency between Meteorological Elements: Copula Approach

Authors: Zhichao Jiao, Craig Farnham, Jihui Yuan, Kazuo Emura

Abstract:

Outdoor design weather data are commonly used to select air-conditioning capacity at the building design stage. These data usually comprise multiple meteorological elements reported separately for each hour of a 24-hour period, but the dependency between the elements is not well considered, which can lead to overestimation when selecting air-conditioning capacity. Considering the dependency between air temperature and global solar radiation, we used the copula approach to model the joint distribution of these two weather elements and propose a new method of selecting more credible outdoor design conditions based on the probability of the simultaneous occurrence of given air temperature and global solar radiation values. Ten years of hourly weather data (2001-2010) for Osaka, Japan, were used to analyze the dependency structure and joint distribution; the results show that the Joe-Frank copula fits almost all hourly data. Calculating the simultaneous occurrence probability and the common exceedance probability of air temperature and global solar radiation shows that the maximum differences in design air temperature and global solar radiation over the day are about 2 °C and 30 W/m², respectively.
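The simultaneous-exceedance calculation the abstract relies on follows directly from the copula by inclusion-exclusion. The sketch below uses the one-parameter Frank copula, a simpler relative of the Joe-Frank family fitted in the paper; theta = 5 is an illustrative value, not a fitted one:

```python
import math

def frank_cdf(u, v, theta):
    """Frank copula C(u, v); theta > 0 gives positive dependence."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    return -math.log(1.0 + num / (math.exp(-theta) - 1.0)) / theta

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) by inclusion-exclusion on the copula."""
    return 1.0 - u - v + frank_cdf(u, v, theta)

# Probability that temperature and radiation both exceed their 95th percentiles:
p_indep = (1 - 0.95) ** 2                  # 0.0025 if the two were independent
p_dep = joint_exceedance(0.95, 0.95, 5.0)  # larger under positive dependence
print(p_indep, p_dep)
```

Ignoring the dependence (p_indep) understates how often hot and sunny hours coincide, which is exactly the overestimation/credibility issue the design-condition method addresses.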

Keywords: energy conservation, design weather database, HVAC, copula approach

Procedia PDF Downloads 268
23647 Environmental Impact Assessment in Mining Regions with Remote Sensing

Authors: Carla Palencia-Aguilar

Abstract:

Net carbon balance can be calculated by means of net biome productivity (NBP), net ecosystem productivity (NEP), and net primary production (NPP). The latter is an important component of the biosphere carbon cycle and is easily obtained from the MODIS product MOD17A3HGF; however, those results are only available yearly. To overcome this limit on data availability, bands 33 to 36 of MODIS MYD021KM (acquired daily) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites in the Colombian territory where surface mining of coal, gold, iron, and limestone takes place. Scales and units, as well as thermal anomalies, were considered for the net carbon balance at each location. The NPP time series from the satellite images were filtered using two MATLAB filters: first-order and discrete transfer function. After filtering the NPP time series, comparing the graphed results with the satellite image values, and running a linear regression, the results showed R² values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the EPA Greenhouse Gas Equivalencies Calculator was used. The comparison was made in two ways: by summing all the data per site per year, and by averaging 46 weeks and finding the percentage that this value represented with respect to NPP; the former underestimated the total CO2 emissions. The results also showed that, over the last 22 years, coal and gold mining produced fewer CO2 emissions than limestone mining, with yearly averages of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal, and 287 kton CO2 eq for iron, while limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold, 188 kton CO2 eq for coal, 310 kton CO2 eq for iron, and 231 to 490 kton CO2 eq for limestone.
If the most polluting limestone site improved its production technology, limestone emissions could be limited to a maximum of 318 kton CO2 eq per year, a value very similar to that of iron. The importance of gathering these data is to establish benchmarks in order to attain the 2050 zero-emissions goal.
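The smoothing step the abstract describes (MATLAB first-order and discrete-transfer-function filters) can be illustrated with a discrete first-order IIR filter; the series and smoothing factor below are toy values, not the study's NPP data:

```python
def first_order_lowpass(series, alpha=0.3):
    """Discrete first-order IIR smoother: y[t] = alpha*x[t] + (1-alpha)*y[t-1]."""
    out = [float(series[0])]
    for x in series[1:]:
        out.append(alpha * x + (1.0 - alpha) * out[-1])
    return out

# A noisy NPP-like series: the filter damps week-to-week noise
# before regression against the band-derived values.
raw = [10, 14, 9, 15, 8, 16, 10, 13, 9, 15]
smooth = first_order_lowpass(raw)
spread = lambda s: max(s) - min(s)
print(spread(raw), round(spread(smooth), 2))
```

Lower alpha smooths more aggressively at the cost of lagging the underlying trend.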

Keywords: carbon dioxide, NPP, MODIS, mining

Procedia PDF Downloads 105
23646 The Role of Mass Sport Guidance in the Health Service Industry of China

Authors: Qiu Jian-Rong, Li Qing-Hui, Zhan Dong, Zhang Lei

Abstract:

Facing the demands of economic restructuring and the risk of socio-economic stagnation due to an ageing population, the health service industry will play a very important role in China's future industrial structure. This process will involve the orientation of Chinese sports medicine, its joining with preventive medicine, and its integration with data banks and cloud computing.

Keywords: China, the health service industry, mass sport, data bank

Procedia PDF Downloads 628
23645 A Numerical Investigation of Lamb Wave Damage Diagnosis for Composite Delamination Using Instantaneous Phase

Authors: Haode Huo, Jingjing He, Rui Kang, Xuefei Guan

Abstract:

This paper presents a study of Lamb wave damage diagnosis of composite delamination using instantaneous phase data. Numerical experiments are performed using the finite element method: delamination damages of different sizes are modeled in the finite element package ABAQUS, and Lamb wave excitation and response data are obtained using a pitch-catch configuration. Empirical mode decomposition (EMD) is employed to extract the intrinsic mode functions (IMFs), and the Hilbert-Huang transform is applied to each of the resulting IMFs to obtain the instantaneous phase information. Baseline data for healthy plates are generated using the same procedure. The size of the delamination is correlated with the instantaneous phase change for damage diagnosis. It is observed that the unwrapped instantaneous phase of the response signal behaves consistently with increasing delamination size.
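The phase-extraction step can be sketched as follows. EMD itself needs a dedicated implementation (e.g. the PyEMD package, an assumption here), so this sketch applies the Hilbert-phase step directly to a single mono-component signal, building the analytic signal with the FFT construction equivalent to `scipy.signal.hilbert`; the tone frequency and sampling rate are illustrative:

```python
import numpy as np

def instantaneous_phase(x):
    """Unwrapped instantaneous phase of a real signal via its analytic signal
    (FFT construction equivalent to scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    return np.unwrap(np.angle(analytic))

# A 50 kHz tone sampled at 1 MHz over exactly 50 periods: the unwrapped
# phase grows linearly, with slope 2*pi*f radians per second.
fs, f = 1.0e6, 50.0e3
t = np.arange(1000) / fs
phase = instantaneous_phase(np.sin(2 * np.pi * f * t))
slope = np.polyfit(t, phase, 1)[0] / (2 * np.pi)
print(slope)  # recovers ~50000 Hz
```

In the diagnosis setting, the quantity of interest is the difference between the damaged and baseline phase curves rather than the absolute slope.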

Keywords: delamination, lamb wave, finite element method, EMD, instantaneous phase

Procedia PDF Downloads 320
23644 Relationship between Wave Velocities and Geo-Pressures in Shallow Libyan Carbonate Reservoir

Authors: Tarek Sabri Duzan

Abstract:

Knowledge of the magnitude of geo-pressures (pore, fracture, and overburden pressures) is vital, especially during drilling, completions, stimulations, and enhanced oil recovery. Many problems, such as lost circulation, could be avoided if techniques for calculating geo-pressures were employed in well planning, mud-weight planning, and casing design. This paper focuses on the relationships between geo-pressures and wave velocities (P-wave velocity Vp and S-wave velocity Vs) in a shallow Libyan carbonate reservoir in the western part of the Sirte Basin (Dahra F-Area). The data used in this work were collected from four recently drilled wells scattered throughout the reservoir of interest, as shown in figure-1, and comprise bulk density, Formation Multi-Tester (FMT) results, and acoustic wave velocities. Eaton's method, the most commonly used equation worldwide, was employed to calculate the fracture pressure for all wells using the dynamic Poisson ratio computed from the acoustic wave velocities; pore pressure was taken from the FMT results, and overburden pressure was estimated from bulk density. The data analysis shows a linear relationship between geo-pressures (pore, fracture, and overburden) and the wave-velocity ratio (Vp/Vs), although the relationship is not clear in the high-pressure area, as shown in figure-10. It is therefore recommended to use the resulting relationship together with new seismic data for the shallow carbonate reservoir to predict geo-pressures for future oil operations, and to collect more data from the high-pressure zone to investigate that area further.
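For reference, the two formulas the workflow combines, the dynamic Poisson's ratio from the Vp/Vs ratio and Eaton's fracture-pressure equation, can be sketched as below; the velocity and pressure values are hypothetical, not the Dahra F-Area data:

```python
def poisson_ratio(vp, vs):
    """Dynamic Poisson's ratio from compressional and shear velocities."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

def eaton_fracture_pressure(overburden, pore, nu):
    """Eaton's equation: fracture pressure from overburden and pore pressure."""
    return nu / (1.0 - nu) * (overburden - pore) + pore

# Hypothetical shallow-carbonate values (psi), for illustration only.
nu = poisson_ratio(vp=4500.0, vs=2500.0)       # Vp/Vs = 1.8 -> nu ~ 0.277
pf = eaton_fracture_pressure(overburden=5000.0, pore=2300.0, nu=nu)
print(round(nu, 3), round(pf, 1))
```

The fracture pressure always falls between the pore and overburden pressures, which is the physical sanity check used when building the mud-weight window.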

Keywords: bulk density, formation multi-tester (FMT) results, acoustic wave, shallow carbonate reservoir, d/jfield velocities

Procedia PDF Downloads 287
23643 Manufacturing Facility Location Selection: A Numerical Taxonomy Approach

Authors: Seifoddini Hamid, Mardikoraeem Mahsa, Ghorayshi Roya

Abstract:

Manufacturing facility location selection is an important strategic decision for many industrial corporations. In this paper, a new approach to the manufacturing location selection problem is proposed in which cluster analysis is employed to identify suitable manufacturing locations based on economic, social, environmental, and political factors, quantified using existing real-world data.

Keywords: manufacturing facility, manufacturing sites, real world data

Procedia PDF Downloads 563
23642 Unsupervised Domain Adaptive Text Retrieval with Query Generation

Authors: Rui Yin, Haojie Wang, Xun Li

Abstract:

Recently, mainstream dense retrieval methods have obtained state-of-the-art results on some datasets and tasks. However, they require large amounts of training data, which is not available in most domains. The severe performance degradation of dense retrievers on new data domains has limited the use of dense retrieval methods to the few domains with large training datasets. In this paper, we propose an unsupervised domain-adaptive approach based on query generation. First, a generative model is used to generate relevant queries for each passage in the target corpus, and the generated queries are then used for mining negative passages. Finally, the query-passage pairs are labeled with a cross-encoder and used to train a domain-adapted dense retriever. Experiments show that our approach is more robust than previous methods across target domains while requiring less unlabeled data.

Keywords: dense retrieval, query generation, unsupervised training, text retrieval

Procedia PDF Downloads 73
23641 Journals' Productivity in the Literature on Malaria in Africa

Authors: Yahya Ibrahim Harande

Abstract:

The purpose of this study was to identify the journals that published articles on malaria in Africa and to determine the core of productive journals among them. The data for the study were drawn from the African Index Medicus (AIM) database: a total of 529 articles published in 115 journal titles from 1979 to 2011. To obtain the core of productive journals, Bradford's law was applied to the collected data, and five journal titles were identified as the core journals; the malaria literature was found to conform to Bradford's law. Regarding the dispersion of the literature, English was the dominant language of the journals (80.9%), followed by French (16.5%), Portuguese (1.7%), and German (0.9%). It is recommended that medical libraries acquire the five journals that constitute the core of the malaria literature for the use of their clients; this could also help streamline their acquisition and selection exercises. More research on the subject using bibliometric approaches is recommended.
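Bradford's law partitions the ranked journals into zones of roughly equal article yield, with the small first zone forming the core. A minimal sketch of that zoning (the counts are toy values, not the AIM data):

```python
def bradford_zones(journal_counts, n_zones=3):
    """Split ranked journals into zones of roughly equal article yield."""
    ranked = sorted(journal_counts.items(), key=lambda kv: -kv[1])
    total = sum(count for _, count in ranked)
    per_zone = total / n_zones
    zones, current, acc = [], [], 0
    for name, count in ranked:
        current.append(name)
        acc += count
        if acc >= per_zone * (len(zones) + 1) and len(zones) < n_zones - 1:
            zones.append(current)
            current = []
    zones.append(current)
    return zones

# Toy distribution: a few productive journals carry most of the literature.
counts = {"J1": 90, "J2": 60, "J3": 30, "J4": 10, "J5": 5, "J6": 3, "J7": 2}
zones = bradford_zones(counts)
print([len(z) for z in zones])  # the core zone holds the fewest journals
```

Bradford's law predicts the zone sizes grow roughly geometrically (1 : n : n²), which is the pattern the study verified for the malaria literature.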

Keywords: productive journals, malaria disease literature, Bradford's law, core journals, African scholars

Procedia PDF Downloads 345
23640 Study of Natural Convection Heat Transfer of Plate-Fin Heat Sink

Authors: Han-Taw Chen, Tzu-Hsiang Lin, Chung-Hou Lai

Abstract:

This study applies the inverse method and three-dimensional commercial CFD software, in conjunction with experimental temperature data, to investigate the heat transfer and fluid flow characteristics of a plate-fin heat sink in a rectangular closed enclosure. The inverse method, combined with the finite difference method and the experimental temperature data, is applied to determine the approximate heat transfer coefficient. Based on the results obtained, the zero-equation turbulence model is then used to obtain the heat transfer and fluid flow characteristics between two fins. To validate the accuracy of the results, the heat transfer coefficients are compared, and the temperatures obtained at selected measurement locations on the fin are compared with experimental data. The effect of the height of the rectangular enclosure on the results is discussed.

Keywords: inverse method, Fluent, heat transfer characteristics, plate-fin heat sink

Procedia PDF Downloads 389
23639 Phonetics and Phonological Investigation of Geminates and Gemination in Some Indic Languages

Authors: Hifzur Ansary

Abstract:

The aim and scope of the present research are to examine the form of geminates and the process of gemination, with special reference to Indic languages. This work presents the results of a cross-linguistic investigation of word-medial geminate consonants. The study is both theoretical and experimental; that is, it is based not only on impressionistic data from Indic languages but also on an instrumental analysis of the data. The primary data were collected from native speakers; the secondary data were collected from printed materials such as journals, grammar books, and other published articles. The observations made in this study were checked with a number of educated native speakers of Bangla and Telugu. The study focuses exhaustively on geminates and gemination in Bangla (Indo-Aryan language family) and Telugu (Dravidian language family). It attempts to posit the valid geminates in Bangla and Telugu, provides an account of gemination in these languages, compares singleton and geminate consonants, and describes the distribution of geminate and non-geminate phonemes in Bangla and Telugu. The present research also investigates vowel lengthening in Bangla with respect to gemination and explains how the gemination processes present in Indian languages are transferred to Indian English.

Keywords: geminate consonant, singleton-geminate contrast, different types of assimilation, gemination derived from borrowed words

Procedia PDF Downloads 289
23638 Supply Chain Risk Management: A Meta-Study of Empirical Research

Authors: Shoufeng Cao, Kim Bryceson, Damian Hine

Abstract:

The existing supply chain risk management (SCRM) research is currently chaotic and somewhat disorganized, and the topic has been addressed conceptually more often than empirically. This paper, using both qualitative and quantitative data, employs a modified meta-study method to investigate the empirical SCRM research published in quality journals over a period of 12 years (2004-2015). The purpose is to outline the extant research trends and the employed research methodologies (i.e., research method, data collection, and data analysis) across the sub-field in order to guide future research. The synthesized findings indicate that empirical study of risk ripple effects along entire supply chains, of industry-specific supply chain risk management, and of global/export supply chain risk management has not yet received the attention it deserves in the SCRM field. It is also suggested that future empirical research should employ multiple and/or mixed methods and multi-source data collection techniques to reduce common-method bias and single-source bias, thus improving research validity and reliability. In conclusion, this paper helps to stimulate more quality empirical research in the SCRM field by identifying promising research directions and providing methodology guidelines.

Keywords: empirical research, meta-study, methodology guideline, research direction, supply chain risk management

Procedia PDF Downloads 317
23637 An Analysis into Global Suicide Trends and Their Relation to Current Events Through a Socio-Cultural Lens

Authors: Lyndsey Kim

Abstract:

We utilized country-level data on suicide rates from 1985 through 2015 provided by the WHO to explore global trends as well as country-specific trends. First, we find that up until 1995, there was an increase in suicide rates globally, followed by a steep decline in deaths. This observation is largely driven by the data from Europe, where suicides are prominent but steadily declining. Second, men are more likely to commit suicide than women across the world over the years. Third, the older generation is more likely to commit suicide than youth and adults. Finally, we turn to Durkheim’s theory and use it as a lens to understand trends in suicide across time and countries and attempt to identify social and economic events that might explain patterns that we observe. For example, we discovered a drastically different pattern in suicide rates in the US, with a steep increase in suicides in the early 2000s. We hypothesize this might be driven by both the 9/11 attacks and the recession of 2008.

Keywords: suicide trends, current events, data analysis, World Health Organization, Durkheim's theory

Procedia PDF Downloads 94
23636 Trading off Accuracy for Speed in PowerDrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries involving the expensive fields. We additionally evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings the 95th latency percentile down from 30 to 4 seconds.
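The second optimization's "accurate or not" annotation can be illustrated with a standard normal-approximation confidence interval on a scaled sample sum. This is a generic sketch of such a heuristic, not PowerDrill's actual rule; the z-score and 5% tolerance are assumptions:

```python
import math
import random

def estimate_sum(sample, n_total, z=1.96, rel_tol=0.05):
    """Scale a uniform sample's mean up to the full table, and flag the
    estimate 'accurate' when its CI half-width is within rel_tol of it."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    estimate = mean * n_total
    half_width = z * math.sqrt(var / n) * n_total
    accurate = bool(estimate and half_width / estimate <= rel_tol)
    return estimate, accurate

random.seed(1)
population = [random.expovariate(1.0) for _ in range(100_000)]
sample = random.sample(population, 2_000)
estimate, accurate = estimate_sum(sample, len(population))
print(round(estimate), accurate)
```

Flagging each intermediate value this way lets a user decide whether to wait for the exact result or act on the sampled one.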

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 259
23635 Data Quality on Regular Childhood Immunization Programme at Degehabur District: Somali Region, Ethiopia

Authors: Eyob Seife

Abstract:

Immunization is a life-saving intervention that prevents needless suffering through sickness, disability, and death. The emphasis on data quality and use will become even stronger with the development of the Immunization Agenda 2030 (IA2030). Data quality is a key factor in generating reliable health information that enables monitoring progress, financial planning, vaccine forecasting, and decision-making for continuous improvement of the national immunization program. However, ensuring data of sufficient quality and promoting a culture of information use at the point of collection remain critical challenges, especially in hard-to-reach and pastoralist areas; the Degehabur district was selected based on the hypothesis that there is no difference between reported and recounted immunization data. Data quality depends on several factors, among them organizational, behavioral, technical, and contextual ones. A cross-sectional quantitative study was conducted in September 2022 in the Degehabur district using the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers, and reporting documents were reviewed at 5 health facilities (2 health centers and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) was assessed, and accuracy ratios were computed for the first and third doses of pentavalent vaccine, full immunization (FI), and the first dose of measles-containing vaccine (MCV).
In this study, facility-level results showed both over-reporting and under-reporting at health posts when computing the accuracy ratio of the tally sheets against the health post reports found at health centers, for almost all antigens verified: pentavalent 1 was 88.3%, 60.4%, and 125.6% for health posts A, B, and C, respectively. For the first dose of measles-containing vaccine (MCV), the accuracy ratio was 126.6%, 42.6%, and 140.9% for health posts A, B, and C, respectively. The accuracy ratio for fully immunized children was 0% for health posts A and B and 100% for health post C. Relatively better accuracy ratios were seen at the health centers, where the first pentavalent dose was 97.4% and 103.3% for health centers A and B, and the first dose of measles-containing vaccine (MCV) was 89.2% and 100.9% for health centers A and B, respectively. The quality index (QI) of all facilities ranged between a maximum of 33.33% and a minimum of 0%. Most of the verified immunization data accuracy ratios were relatively better at the health center level; however, the quality of the monitoring system is poor at all levels, in addition to poor data accuracy at all health posts. Attention should therefore be given to improving staff capacity and the quality of the monitoring system components, namely recording, reporting, archiving, data analysis, and using information for decision-making at all levels, especially at the health posts and in pastoralist areas.
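The DQS accuracy ratio used throughout the abstract is simply recounted (tally) doses over reported doses, with values below 100% indicating over-reporting and values above 100% indicating under-reporting. A minimal sketch (the 10% tolerance band is an illustrative assumption, not part of the DQS tool):

```python
def accuracy_ratio(recounted, reported):
    """DQS verification factor: recounted (tally) doses over reported doses, in percent."""
    return 100.0 * recounted / reported

def classify(ratio, tolerance=10.0):
    """Flag reporting outside a tolerance band around 100%."""
    if ratio < 100.0 - tolerance:
        return "over-reporting"   # reports exceed what the tally sheets support
    if ratio > 100.0 + tolerance:
        return "under-reporting"  # tallies exceed what was reported
    return "consistent"

# Pentavalent-1 accuracy ratios from the abstract, health posts A-C:
for post, ratio in [("A", 88.3), ("B", 60.4), ("C", 125.6)]:
    print(post, classify(ratio))
```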

Keywords: accuracy ratio, Degehabur District, regular childhood immunization program, quality of monitoring system, Somali Region-Ethiopia

Procedia PDF Downloads 107
23634 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang

Abstract:

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. Sets of vertices in the MST with the same degree are regarded as wholes and used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points that considers both degree and Euclidean distance is presented. Finally, the MST-based initialization method for the k-means algorithm is presented, and its time complexity is analyzed. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository, and the experimental results illustrate its effectiveness compared to three existing initialization methods.
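A common simplified variant of MST-based seeding (cut the k-1 longest tree edges and take component centroids) is sketched below; the paper's method additionally uses vertex degrees and a degree-aware distance measure, which this illustration omits:

```python
import math

def mst_edges(points):
    """Prim's algorithm: edges (i, j, length) of the Euclidean MST."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(j)
        edges.append((i, j, dist(i, j)))
    return edges

def mst_init_centers(points, k):
    """Cut the k-1 longest MST edges; use component centroids as initial centers."""
    kept = sorted(mst_edges(points), key=lambda e: -e[2])[k - 1:]
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j, _ in kept:
        parent[find(i)] = find(j)
    comps = {}
    for idx, p in enumerate(points):
        comps.setdefault(find(idx), []).append(p)
    return [tuple(sum(c) / len(c) for c in zip(*grp)) for grp in comps.values()]

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(sorted(mst_init_centers(pts, k=2)))
```

These centroids can then be passed to any k-means implementation as its initial centers, replacing random seeding.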

Keywords: degree, initial cluster center, k-means, minimum spanning tree

Procedia PDF Downloads 411
23633 Novel Adaptive Radial Basis Function Neural Networks Based Approach for Short-Term Load Forecasting of Jordanian Power Grid

Authors: Eyad Almaita

Abstract:

In this paper, a novel adaptive Radial Basis Function Neural Network (RBFNN) algorithm is used to forecast the hour-by-hour electrical load demand in Jordan. A small and effective RBFNN model forecasts the hourly total load demand from a small number of features: the load in the previous day, the load on the same day in the previous week, the temperature at the same hour, the hour number, the day number, and the day type. The proposed adaptive RBFNN model enhances the reliability of the conventional RBFNN after the network is embedded in the system. This is achieved by introducing an adaptive algorithm that allows the RBFNN weights to change after the training process is completed, which eliminates the need to retrain the model. The data used in this paper are real measurements from the National Electric Power Company (Jordan). Data for the period Jan. 2012-April 2013 are used to train the RBFNN models, and data for the period May 2013-Sep. 2013 are used to validate their effectiveness.
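The online-adaptation idea can be sketched as follows. This is not the paper's algorithm, only a minimal illustration of the general pattern it describes: keep the RBF centers fixed and let the linear output weights keep adapting after deployment via a simple least-mean-squares (LMS) step per new observation, so no full retraining is needed. All names and the choice of LMS are assumptions:

```python
import math

def rbf_features(x, centers, width):
    # Gaussian radial basis activations for feature vector x.
    return [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                     / (2 * width ** 2))
            for c in centers]

class AdaptiveRBFNN:
    """Sketch of an RBF network whose output weights continue adapting
    online (LMS rule) after initial training, as the abstract describes."""
    def __init__(self, centers, width, lr=0.05):
        self.centers, self.width, self.lr = centers, width, lr
        self.w = [0.0] * len(centers)

    def predict(self, x):
        phi = rbf_features(x, self.centers, self.width)
        return sum(wi * pi for wi, pi in zip(self.w, phi))

    def adapt(self, x, target):
        # One LMS step: nudge the output weights toward the new
        # observation instead of retraining the whole network.
        phi = rbf_features(x, self.centers, self.width)
        err = target - self.predict(x)
        for i, pi in enumerate(phi):
            self.w[i] += self.lr * err * pi
        return err
```

In a load-forecasting setting, `adapt` would be called once per hour as the actual demand for the previous hour becomes known.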

Keywords: load forecasting, adaptive neural network, radial basis function, short-term, electricity consumption

Procedia PDF Downloads 345
23632 Speed Optimization Model for Reducing Fuel Consumption Based on Shipping Log Data

Authors: Ayudhia P. Gusti, Semin

Abstract:

It is known that the total operating cost of a vessel is dominated by the cost of fuel. The question that arises is how to reduce fuel consumption so that operating costs can be minimized. At the root of this kind of problem, sailing speed determination is an important factor for a shipping company. Optimal speed determination has a significant influence on the route and berth schedule of ships, which in turn affects vessel operating costs. The purpose of this paper is to clarify some important issues in ship speed optimization. Sailing speed, displacement, sailing time, and specific fuel consumption were obtained from shipping log data and further analyzed to model the speed optimization. The presented speed optimization model is expected to reduce fuel consumption and, consequently, the cost of fuel.
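The trade-off at the heart of such models can be illustrated with the classic cube-rule approximation (daily fuel consumption roughly proportional to the cube of speed), which is a standard textbook assumption and not necessarily the model fitted in this paper. Under it, voyage cost C(v) = fuel_price * fuel_coeff * D * v**2 + time_cost * D / v has a closed-form optimum; all parameter names are illustrative:

```python
def optimal_speed(fuel_coeff, fuel_price, time_cost, v_min, v_max):
    """Closed-form minimizer of C(v) = fuel_price*fuel_coeff*D*v**2
    + time_cost*D/v, using the cube-rule approximation
    (fuel/day ~ fuel_coeff * v**3). Setting dC/dv = 0 gives
    v* = (time_cost / (2 * fuel_price * fuel_coeff))**(1/3),
    independent of the voyage distance D."""
    v_star = (time_cost / (2 * fuel_price * fuel_coeff)) ** (1 / 3)
    # Clamp to the vessel's feasible speed range.
    return min(max(v_star, v_min), v_max)
```

In practice, `fuel_coeff` would be estimated by regressing logged specific fuel consumption against logged sailing speed.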

Keywords: maritime transportation, reducing fuel, shipping log data, speed optimization

Procedia PDF Downloads 568
23631 An Assessment of the Temperature Change Scenarios Using RS and GIS Techniques: A Case Study of Sindh

Authors: Jan Muhammad, Saad Malik, Fadia W. Al-Azawi, Ali Imran

Abstract:

In the era of climate variability, rising temperature is the most significant aspect. In this study, PRECIS model data and observed data are used to assess temperature change scenarios for Sindh province during the first half of the present century. Observed data from various meteorological stations of Sindh are the primary source for temperature change detection. The current scenario (1961-1990) and the future one (2010-2050) are simulated by the PRECIS Regional Climate Model at a spatial resolution of 25 km x 25 km. A Regional Climate Model (RCM) can yield reasonably suitable projections for climate scenarios. The main objective of the study is to map the simulated temperatures obtained from the PRECIS climate model and compare them with observed temperatures. The analysis covers all districts of Sindh in order to obtain a more precise picture of temperature change scenarios. According to the results, temperature is likely to increase by 1.5-2.1°C by 2050, compared to the baseline temperature of 1961-1990. The model yields more accurate values in the northern districts of Sindh than along its coastal belt. All districts of Sindh province exhibit an increasing trend in the mean temperature scenarios, and each decade appears warmer than the previous one. An understanding of the change in temperatures is vital for sectors such as weather forecasting, water, agriculture, and health.
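The keywords mention interpolation techniques for mapping station observations onto a grid. As a hedged illustration (the paper does not say which method was used in ArcGIS), inverse-distance weighting (IDW), one of the most common choices, can be sketched as:

```python
def idw(points, values, query, power=2):
    """Inverse-distance-weighted estimate at `query` from scattered
    station (x, y) locations and their observed values -- one common
    interpolation technique; names and the power parameter are
    illustrative."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v  # query coincides with a station
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den
```

Evaluating `idw` over a regular grid of query points produces the kind of continuous temperature surface used for district-level comparison maps.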

Keywords: PRECIS Model, real observed data, Arc GIS, interpolation techniques

Procedia PDF Downloads 249
23630 Difference Expansion Based Reversible Data Hiding Scheme Using Edge Directions

Authors: Toshanlal Meenpal, Ankita Meenpal

Abstract:

Difference expansion is a very important technique in the reversible data hiding field. Owing to the reversibility feature, both the secret message and the cover image can be completely recovered, without any distortion, after the data extraction process. In general, in any difference expansion scheme, embedding is performed by an integer transform on the difference image obtained by grouping two neighboring pixel values. This paper proposes an improved reversible difference expansion embedding scheme that mainly considers the edge direction when modifying the difference of two neighboring pixel values, since a larger difference tends to yield a more degraded stego image than a smaller one. The experimental results show that the proposed scheme improves image quality by 0.5 to 3.7 dB on average, while achieving almost the same payload capacity as the previous method.
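The underlying integer transform is the classic difference expansion of Tian, which the abstract builds on; a minimal sketch for one pixel pair (overflow/underflow checks omitted) is:

```python
def de_embed(x, y, bit):
    # Classic difference expansion on one pixel pair: the integer
    # average is preserved, the difference is doubled and the secret
    # bit placed in its new least significant bit.
    l = (x + y) // 2          # integer average
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    # Invert the transform: recover the bit from the LSB of the
    # expanded difference, then restore the original pair exactly.
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1               # floor division undoes the expansion
    return l + (h + 1) // 2, l - h // 2, bit
```

The proposed scheme differs in choosing which pair (and along which edge direction) to expand; the transform above is only the shared building block, not the authors' full method.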

Keywords: information hiding, edge direction, difference expansion, integer transform

Procedia PDF Downloads 484
23629 A Numerical Model for Simulation of Blood Flow in Vascular Networks

Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia

Abstract:

An accurate study of blood flow depends on an accurate vascular pattern and on the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern while simultaneously applying Murray's law at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers are assigned according to the diameter-defined Strahler system. During the simulation, the averaged flow rate for each order is used to predict the pressure drop; once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
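Murray's law, used above at every bifurcation, states that the cube of the parent vessel radius equals the sum of the cubes of the daughter radii. A small sketch of the symmetric case (function names are illustrative, not from the paper's algorithm):

```python
def murray_daughter_radius(parent_r, n_daughters=2):
    """Murray's law: parent_r**3 == sum of daughter radii cubed.
    For a symmetric bifurcation, each of the n daughters has radius
    parent_r / n**(1/3)."""
    return parent_r / n_daughters ** (1 / 3)

def radii_by_generation(root_r, generations):
    # Radii down a chain of symmetric bifurcations, e.g. to size the
    # sub-resolution vessels that imaging cannot resolve.
    radii = [root_r]
    for _ in range(generations):
        radii.append(murray_daughter_radius(radii[-1]))
    return radii
```

Real trees bifurcate asymmetrically, so in practice the measured morphometric ratios would replace the symmetric split while the cubic constraint is still enforced.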

Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system

Procedia PDF Downloads 272
23628 Investigation of the Litho-Structure of Ilesa Using High Resolution Aeromagnetic Data

Authors: Oladejo Olagoke Peter, Adagunodo T. A., Ogunkoya C. O.

Abstract:

The research investigated the arrangement of geological features beneath Ilesa using aeromagnetic data. The obtained data were subjected to various filtering and processing techniques, namely the Total Horizontal Derivative (THD), depth continuation, and analytical signal amplitude, using Geosoft Oasis Montaj 6.4.2 software. The Reduced-to-the-Equator Total Magnetic Intensity (RTE-TMI) results reveal significant magnetic anomalies, with high magnitudes (55.1 to 155 nT) predominantly in the northwestern half of the area. Intermediate magnetic susceptibility, ranging between 6.0 and 55.1 nT, dominates the eastern part, separated by depressions and uplifts. The southern part of the area exhibits a magnetic field of low intensity, ranging from -76.6 to 6.0 nT. The lineaments exhibit varying lengths ranging from 2.5 to 16.0 km. The rose diagram and the analytical signal amplitude indicate structural styles mainly of E-W and NE-SW orientations, particularly evident in the western, SW, and NE regions, with an amplitude of 0.0318 nT/m. The identified faults in the area have orientations of NNW-SSE, NNE-SSW, and WNW-ESE and are situated at depths ranging from 500 to 750 m. Considering the divergent magnetic susceptibilities, the structural styles and orientations of the lineaments, and the identified faults and their depths, these lithological features could serve as a valuable foundation for assessing ground motion, particularly in the presence of sufficient seismic energy.
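The Total Horizontal Derivative used above is a standard edge-enhancement filter: THD(x, y) = sqrt((dT/dx)^2 + (dT/dy)^2), where T is the gridded magnetic intensity. A minimal finite-difference sketch (the actual processing was done in Oasis Montaj; this is only an illustration of the operator):

```python
def total_horizontal_derivative(grid, dx=1.0, dy=1.0):
    """THD(x, y) = sqrt((dT/dx)**2 + (dT/dy)**2) via central
    differences on a 2-D list of field values; interior cells only,
    borders left at zero. Peaks of THD trace lineament edges."""
    ny, nx = len(grid), len(grid[0])
    out = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            tx = (grid[j][i + 1] - grid[j][i - 1]) / (2 * dx)
            ty = (grid[j + 1][i] - grid[j - 1][i]) / (2 * dy)
            out[j][i] = (tx * tx + ty * ty) ** 0.5
    return out
```

Ridges in the resulting THD map would then be digitized as lineaments and summarized in a rose diagram.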

Keywords: lineament, aeromagnetic, anomaly, fault, magnetic

Procedia PDF Downloads 76
23627 Biomedical Countermeasures to Category A Biological Agents

Authors: Laura Cochrane

Abstract:

The United States Centers for Disease Control and Prevention has established three categories of biological agents based on their ease of spread and the severity of the disease they cause. Category A biological agents are the highest priority because of their high degree of morbidity and mortality, ease of dissemination, potential to cause social disruption and panic, special requirements for public health preparedness, and past use as biological weapons. Despite the threat of Category A biological agents, opportunities for medical intervention exist. This work summarizes publicly available information, consolidated and reviewed for situational usefulness and disease awareness, to discuss three specific Category A agents: anthrax (Bacillus anthracis), botulism (Clostridium botulinum toxin), and smallpox (variola major). It provides an overview of the medical countermeasures available to treat these three types of pathogens, discussed in the settings of pre-exposure prophylaxis, post-exposure prophylaxis, and therapeutic treatment, to provide a framework for public health preparedness requirements.

Keywords: anthrax, botulism, smallpox, medical countermeasures

Procedia PDF Downloads 77
23626 Consumer Welfare in the Platform Economy

Authors: Prama Mukhopadhyay

Abstract:

From transport to food, the platform economy and digital markets have taken over almost every sphere of consumers' lives. Sellers and buyers are connected through platforms, which act as intermediaries. This has made consumers' lives easier in terms of time, price, choice, and other factors. Having said that, there are several concerns regarding platforms. There are competition law concerns, such as unfair pricing and deep discounting by platforms, which affect consumer welfare. Apart from that, the biggest problem is a lack of transparency with respect to business models: how a platform operates, how prices are calculated, and so on. In most cases, consumers are unaware of how their personal data are being used, and of how algorithms use their personal data to determine the price of a product or to surface relevant products based on their previous searches. Using personal or non-personal data without the consumer's consent is a serious legal concern. In addition, another major issue lies with the question of liability. If a dispute arises, who is responsible: the seller or the platform? For example, if someone orders food through a food-delivery app and the food is bad, who is liable: the restaurant or the delivery platform? In this paper, the researcher examines the legal concerns related to the platform economy from the perspectives of consumer protection and consumer welfare. The paper analyses cases from different jurisdictions and the approaches taken by their judiciaries. The author compares existing legislation in the EU, the US, and several Asian countries and highlights best practices.

Keywords: competition, consumer, data, platform

Procedia PDF Downloads 144
23625 Competing Risks Modeling Using within Node Homogeneity Classification Tree

Authors: Kazeem Adesina Dauda, Waheed Babatunde Yahya

Abstract:

To design a tree that maximizes within-node homogeneity, a homogeneity measure appropriate for event history data with multiple risks is needed. We consider the use of deviance and modified Cox-Snell residuals as impurity measures in Classification and Regression Trees (CART) and compare our results with those of Fiona (2008), in which the homogeneity measure was based on martingale residuals. A data-structure approach was used to validate the performance of the proposed techniques via simulation and real-life data. The univariate competing-risks results revealed that using deviance and Cox-Snell residuals as the response in a within-node homogeneity classification tree performs better than using other residuals, irrespective of the performance technique. Bone marrow transplant data and a double-blinded randomized clinical trial comparing two treatments for patients with prostate cancer were used to demonstrate the efficiency of the proposed method vis-à-vis the existing ones. Empirical results on the bone marrow transplant data showed that the proposed model with the Cox-Snell residual (deviance = 16.6498) performs better than both the martingale residual (deviance = 160.3592) and the deviance residual (deviance = 556.8822) for both the event of interest and the competing risks. The prostate cancer results likewise reveal the superiority of the proposed model over the existing one for both causes; interestingly, the Cox-Snell residual (MSE = 0.01783563) outperforms both the martingale residual (MSE = 0.1853148) and the deviance residual (MSE = 0.8043366). Moreover, these results validate those obtained from the Monte Carlo studies.
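The residuals being compared have standard textbook definitions. Writing H(t_i) for the estimated cumulative hazard at subject i's observed time and delta_i for the event indicator, the martingale residual is m_i = delta_i - H(t_i), and the deviance residual is a normalizing transform of it. A hedged sketch of these two (the modified Cox-Snell variant and the tree-building machinery are not reproduced):

```python
import math

def martingale_residual(event, cum_hazard):
    # m_i = delta_i - H(t_i): event indicator minus cumulative hazard.
    return event - cum_hazard

def deviance_residual(event, cum_hazard):
    """Standard deviance transform of the martingale residual:
    d_i = sign(m_i) * sqrt(-2 * [m_i + delta_i * log(delta_i - m_i)]).
    For delta_i = 1, delta_i - m_i reduces to H(t_i)."""
    m = event - cum_hazard
    inner = -2.0 * (m + (event * math.log(event - m) if event else 0.0))
    return math.copysign(math.sqrt(inner), m)
```

In the within-node homogeneity setting, each candidate split would be scored by the impurity of the chosen residual within the resulting child nodes.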

Keywords: within-node homogeneity, Martingale residual, modified Cox-Snell residual, classification and regression tree

Procedia PDF Downloads 272
23624 Design and Development of a Computerized Medical Record System for Hospitals in Remote Areas

Authors: Grace Omowunmi Soyebi

Abstract:

A computerized medical record system is a collection of medical information about a person stored on a computer. A principal problem of most hospitals in rural areas is that records are kept with a paper-based file management system. A lot of time is wasted when a patient visits the hospital, possibly in an emergency, and the nurse or attendant must search through voluminous files before the patient's file can be retrieved; this delay may cause something unexpected to happen to the patient. The proposed application is designed using a structured system analysis and design method, which supports a well-articulated analysis of the existing file management system, a feasibility study, and proper documentation of the design and implementation of a computerized medical record system. This computerized system will replace the file management system and help retrieve a patient's record quickly, with increased data security, access to clinical records for decision-making, and a reduced waiting time before a patient is attended to.

Keywords: programming, data, software development, innovation

Procedia PDF Downloads 87