Search results for: cardio data analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 42127

40327 Mean Reversion in Stock Prices: Evidence from Karachi Stock Exchange

Authors: Tabassum Riaz

Abstract:

This study provides a complete examination of stock price behavior on the Karachi Stock Exchange. It examines whether the Karachi Stock Exchange can be described as mean reverting or not. For this purpose, daily, weekly, and monthly index data from the Karachi Stock Exchange covering the period from July 1, 1997 to July 2, 2011 were used. After employing multiple variance ratio and unit root tests, it is concluded that the stock market follows mean-reverting behavior and hence has a reverting trend, which opens the door for active investment management. Thus, technical analysis may help to identify potential areas for value creation.
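The variance ratio statistic at the core of such a test can be sketched as follows. This is a minimal illustration on a simulated mean-reverting price series, not the study's data; under a random walk VR(q) is near 1, while mean reversion pushes it below 1:

```python
import numpy as np

def variance_ratio(prices, q):
    """Lo-MacKinlay variance ratio VR(q): variance of q-period log
    returns divided by q times the variance of 1-period log returns.
    VR(q) < 1 suggests mean reversion; VR(q) = 1 is a random walk."""
    log_p = np.log(np.asarray(prices, dtype=float))
    r1 = np.diff(log_p)                 # 1-period log returns
    rq = log_p[q:] - log_p[:-q]         # overlapping q-period log returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

# A strongly mean-reverting series (prices pulled back toward a level)
# should give VR(q) well below 1.
rng = np.random.default_rng(0)
level = 100.0
prices = [level]
for _ in range(2000):
    # AR(1) reversion toward the level, plus noise
    prices.append(prices[-1] + 0.5 * (level - prices[-1]) + rng.normal())

print(round(variance_ratio(prices, 5), 3))
```

A full test would also compute the heteroskedasticity-robust standard error of VR(q) before drawing conclusions.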

Keywords: mean reversion, random walk, technical analysis, Karachi stock exchange

Procedia PDF Downloads 432
40326 The Effect of Data Assimilation over the Qinghai-Tibet Plateau on Precipitation Forecasts

Authors: Ruixia Liu

Abstract:

The Qinghai-Tibet Plateau has an important influence on precipitation in its lower reaches. Remote sensing data have advantages of their own, and a numerical prediction model that assimilates RS data will perform better than one that does not. We obtained assimilation data for MHS, terrestrial, and sounding observations from GSI, introduced the result into WRF, and then obtained relative humidity (RH) and precipitation forecasts. By comparing the results at 1 h, 6 h, 12 h, and 24 h, we found that assimilating MHS, terrestrial, and sounding data made the forecast of precipitation amount, area, and center more accurate. Analyzing the differences in the initial field, we found that data assimilation over the Qinghai-Tibet Plateau influences the forecast for its lower reaches by affecting the initial temperature and RH.

Keywords: Qinghai-Tibet Plateau, precipitation, data assimilation, GSI

Procedia PDF Downloads 234
40325 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives have been used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules, but also historically patriarchal societies. The progression of society comes hand in hand not only with its language, but with how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.
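One common way such bias is quantified is by projecting word vectors onto a gender direction. The sketch below uses toy, hand-made 4-dimensional vectors (not a real embedding model) purely to illustrate the measurement:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "embeddings" with illustrative values, not taken from any model.
vectors = {
    "he":       np.array([ 1.0,  0.1,  0.0,  0.2]),
    "she":      np.array([-1.0,  0.1,  0.0,  0.2]),
    "engineer": np.array([ 0.6,  0.8,  0.1,  0.0]),
    "nurse":    np.array([-0.6,  0.8,  0.1,  0.0]),
}

# Project each occupation word onto the he-she direction: a positive
# score means the vector leans "male", a negative score "female".
gender_axis = vectors["he"] - vectors["she"]
for word in ("engineer", "nurse"):
    print(word, round(cosine(vectors[word], gender_axis), 2))
```

With real pretrained embeddings, the same projection applied to occupation words is one standard diagnostic for the bias the abstract describes.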

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 119
40324 Real World Evidence: A Tool to Overcome the Lack of a Comparative Arm in Drug Evaluation in the Context of Rare Diseases

Authors: Mohamed Wahba

Abstract:

Objective: To build a comparative arm for product (X) in a specific gene-mutated advanced gastrointestinal cancer using real world evidence to fulfill HTA requirements in drug evaluation. Methods: Data for product (X) were collected from a phase II clinical trial, while real world data for (Y) and (Z) were collected from a US database. Real-world (RW) cohorts were matched to clinical trial baseline characteristics using the weighting-by-odds method. Outcomes included progression-free survival (PFS) and overall survival (OS) rates. Study location and participants: international (product X, n=80) and from the USA (products Y and Z, n=73). Results: Two comparisons were made: trial cohort 1 (X) versus real-world cohort 1 (Z), and trial cohort 2 (X) versus real-world cohort 2 (Y). For first line, the median OS was 9.7 months (95% CI 8.6-11.5) and the median PFS was 5.2 months (95% CI 4.7-not reached) for real-world cohort 1. For second line, the median OS was 10.6 months (95% CI 4.7-27.3) for real-world cohort 2 and the median PFS was 5.0 months (95% CI 2.1-29.3). For the OS analysis, results were statistically significant, but not for the PFS analysis. Conclusion: This study provided the clinical comparative outcomes needed for HTA evaluation.
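The weighting-by-odds step can be sketched as follows: fit a propensity model for trial membership on baseline covariates, then weight each real-world patient by the odds of trial membership so the weighted cohort resembles the trial at baseline. The data, covariate, and hand-rolled logistic fit below are illustrative assumptions, not the study's model:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-ascent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - p) / len(y)
    return w

rng = np.random.default_rng(1)
# Column 0 is an intercept; column 1 stands in for a scaled baseline
# covariate (e.g. age) that differs between the cohorts.
trial = np.column_stack([np.ones(80), rng.normal(0.5, 1.0, 80)])
rwd   = np.column_stack([np.ones(73), rng.normal(0.0, 1.0, 73)])

X = np.vstack([trial, rwd])
y = np.concatenate([np.ones(80), np.zeros(73)])   # 1 = trial member

w = fit_logistic(X, y)
p_rwd = 1.0 / (1.0 + np.exp(-(rwd @ w)))
weights = p_rwd / (1.0 - p_rwd)    # "weighting by odds" for the RW cohort

# The weighted RW covariate mean should move toward the trial mean.
print(round(trial[:, 1].mean(), 2))
print(round(rwd[:, 1].mean(), 2))
print(round(np.average(rwd[:, 1], weights=weights), 2))
```

In practice the weighted cohort's survival estimates (PFS, OS) would then be compared against the trial arm.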

Keywords: real world evidence, pharmacoeconomics, HTA agencies, oncology

Procedia PDF Downloads 90
40323 Five-Year Analysis and Mitigation Plans on Adjustment Order Impacts on Projects in Kuwait's Oil and Gas Sector

Authors: Rawan K. Al-Duaij, Salem A. Al-Salem

Abstract:

Projects, the unique and temporary processes of achieving a set of requirements, have always been challenging; planning the schedule and budget and managing resources and risks are mostly driven by similar past experience or the technical consultation of experts in the matter. Given that complexity of projects in scope, time, and execution environment, Adjustment Orders are tools to reflect changes to the original project parameters after contract signature. Adjustment Orders are the official/legal amendments to the terms and conditions of a live contract. Reasons for issuing Adjustment Orders arise from changes in contract scope, technical requirements, and specifications, resulting in scope addition, deletion, or alteration; they can also be a combination of these parameters, resulting in an increase or decrease in time and/or cost. Most business leaders (handling projects in the interest of the owner) refrain from using Adjustment Orders, given their main objectives of staying within budget and on schedule. Success in managing the changes results in uninterrupted execution and agreed project costs as well as schedule; nevertheless, this is not always practically achievable. In this paper, a detailed study utilizing industrial engineering and systems management tools such as Six Sigma, data analysis, and quality control was carried out on the organization's five-year record of issued Adjustment Orders in order to investigate their prevalence and their time and cost impacts. The analysis outcome revealed and helped to identify and categorize the predominant causations with the highest impacts, which were weighted most heavily in recommending corrective measures to meet the objective of minimizing the impacts of Adjustment Orders. Data analysis demonstrated no specific trend in Adjustment Order frequency over the past five years; however, the time impact is greater than the cost impact. Although Adjustment Orders might never be avoidable, this analysis offers some insight into the procedural gaps and where they most impact the organization. Possible solutions are proposed, such as improving the project handling team's coordination and communication, utilizing a blanket service contract, and modifying the project gate system procedures to minimize the possibility of similar struggles in future. Projects in the oil and gas sector are always evolving and demand a certain amount of flexibility to sustain the goals of the field. As will be demonstrated, the uncertainty of project parameters, inadequate project definition, operational constraints, and stringent procedures are the main factors resulting in the need for Adjustment Orders, and accordingly the recommendations will address that challenge.
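Categorizing the predominant causations by impact is classically done with a Pareto analysis, one of the quality-control tools named above. The counts below are hypothetical, purely to show the 80/20 screening mechanics:

```python
from collections import Counter

# Hypothetical counts of Adjustment Order causations (illustrative only,
# not the organization's actual records).
causes = (["scope addition"] * 42 + ["specification change"] * 27 +
          ["scope deletion"] * 13 + ["site conditions"] * 9 +
          ["regulatory"] * 5 + ["other"] * 4)

counts = Counter(causes)
total = sum(counts.values())
cumulative = 0.0
vital_few = []
for cause, n in counts.most_common():
    cumulative += 100.0 * n / total
    vital_few.append(cause)
    if cumulative >= 80.0:          # classic 80/20 cut-off
        break

print(vital_few, round(cumulative, 1))
```

The "vital few" categories that cross the 80% line are the ones corrective measures would target first.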

Keywords: adjustment orders, data analysis, oil and gas sector, systems management

Procedia PDF Downloads 164
40322 Innovate, Educate, and Transform: Tailoring Sustainable Waste Handling Solutions for Nepal’s Small Populated Municipalities: Insights From Chandragiri Municipality

Authors: Anil Kumar Baral

Abstract:

The research introduces a ground-breaking approach to waste management, emphasizing innovation, education, and transformation. Using Chandragiri Municipality as a case study, the study advocates a shift from traditional to progressive waste management strategies, contributing an inventive waste framework, sustainability advocacy, and a transformative blueprint. The waste composition analysis highlights Chandragiri's representative profile, leading to a comprehensive plan addressing challenges and recommending a transition to a profitable waste treatment model, supported by relevant statistics. The data-driven approach incorporates official waste composition data from Chandragiri Municipality as secondary data and primary data from Chandragiri households, ensuring a nuanced perspective. Discussions on implementation, viability, and environmental preservation underscore the dual benefit of sustainability. The study includes a comparative analysis and a monitoring and evaluation framework, examines international relevance and collaboration, and conducts a social and environmental impact assessment. The results indicate the necessity for creative changes in Chandragiri's waste practices, recommending separate treatment centers at the ward level rather than the municipal level, composting machines, and a centralized waste treatment plant. Educational reforms involve revising school curricula and awareness campaigns. The transformation's success hinges on reducing waste size, efficient treatment center operation, and ongoing public literacy. The conclusion summarizes key findings, envisioning a future with sustainable waste management practices deeply embedded in the community fabric.

Keywords: innovate, educate, transform, municipality, method

Procedia PDF Downloads 46
40321 Security in Resource-Constrained Networks: Lightweight Encryption for Z-MAC

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

Wireless sensor networks are formed by a combination of nodes that systematically transmit data to their base stations. This transmitted data can easily be compromised when the limited processing power and the data consistency of these nodes are kept in mind; secure data transfer in real time remains an open discussion. This paper presents a mechanism to securely transmit data over a chain of sensor nodes, without compromising the throughput of the network, by utilizing the battery resources available in each sensor node. Our methodology takes advantage of the Z-MAC protocol for its efficiency, and it provides a unique key-sharing mechanism using neighbor-node MAC addresses. We present a lightweight data-integrity layer embedded in the Z-MAC protocol and show that our protocol performs better than Z-MAC when different attack scenarios are introduced.
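A neighbor-MAC-based key with a lightweight integrity tag can be sketched as below. This is an illustrative scheme under assumed details (a pre-shared network secret, SHA-256, a truncated HMAC tag), not the authors' actual protocol:

```python
import hashlib
import hmac

NETWORK_SECRET = b"pre-shared-network-secret"   # assumed pre-loaded on nodes

def pairwise_key(mac_a: str, mac_b: str) -> bytes:
    """Both neighbors derive the same key by hashing the sorted MAC pair
    together with the pre-shared network secret (illustrative only)."""
    material = "|".join(sorted((mac_a, mac_b))).encode()
    return hashlib.sha256(NETWORK_SECRET + material).digest()

def tag(key: bytes, payload: bytes) -> bytes:
    # A truncated HMAC keeps per-frame overhead small for sensor nodes.
    return hmac.new(key, payload, hashlib.sha256).digest()[:8]

k_ab = pairwise_key("00:11:22:33:44:55", "66:77:88:99:aa:bb")
k_ba = pairwise_key("66:77:88:99:aa:bb", "00:11:22:33:44:55")
assert k_ab == k_ba                  # derivation is order-independent

frame = b"temp=23.4"
t = tag(k_ab, frame)
print(hmac.compare_digest(t, tag(k_ba, frame)))           # True
print(hmac.compare_digest(t, tag(k_ab, b"temp=99.9")))    # tampered: False
```

A receiver recomputes the tag with its derived key and drops any frame whose tag does not match.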

Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 144
40320 Time Dependent Biodistribution Modeling of 177Lu-DOTATOC Using Compartmental Analysis

Authors: M. Mousavi-Daramoroudi, H. Yousefnia, F. Abbasi-Davani, S. Zolghadri

Abstract:

In this study, 177Lu-DOTATOC was prepared under optimized conditions (radiochemical purity: > 99%, radionuclidic purity: > 99%). The percentage of injected dose per gram (%ID/g) was calculated for organs up to 168 h post injection. A compartmental model was applied to mathematically describe the drug's behaviour in tissue at different times. The biodistribution data showed significant excretion of the radioactivity through the kidneys. The adrenals and pancreas, as major expression sites for the somatostatin receptor (SSTR), had significant uptake. A pharmacokinetic model of 177Lu-DOTATOC was presented by compartmental analysis, which demonstrates the behavior of the complex.
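A generic two-compartment kinetic model of this kind can be sketched as follows. The rate constants and structure (blood-to-tissue exchange plus renal elimination from blood) are illustrative assumptions, not the fitted parameters of the study:

```python
# Illustrative two-compartment model: activity exchanges between blood
# (central) and tissue, and is excreted from blood. Constants are made up.
k12, k21, k_el = 0.8, 0.3, 0.5      # per hour

def simulate(hours, dt=0.001):
    """Forward-Euler integration of the two coupled rate equations."""
    blood, tissue = 100.0, 0.0       # %ID at t = 0
    for _ in range(int(hours / dt)):
        flow = k12 * blood - k21 * tissue   # net transfer into tissue
        blood += (-flow - k_el * blood) * dt
        tissue += flow * dt
    return blood, tissue

for t in (1, 6, 24):
    b, ts = simulate(t)
    print(t, round(b, 2), round(ts, 2))
```

In a real analysis the rate constants would be fitted to the measured %ID/g time-activity curves rather than assumed.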

Keywords: biodistribution, compartmental modeling, ¹⁷⁷Lu, Octreotide

Procedia PDF Downloads 220
40319 Factors Influencing Site Overhead Cost of Construction Projects in Egypt: A Comparative Analysis

Authors: Aya Effat, Ossama A. Hosny, Elkhayam M. Dorra

Abstract:

Estimating costs is a crucial step in construction management and should be completed at the beginning of every project to establish the project's budget. The precision of the cost estimate plays a significant role in the success of construction projects as it allows project managers to effectively manage the project's costs. Site overhead costs constitute a significant portion of construction project budgets, necessitating accurate prediction and management. These costs are influenced by a multitude of factors, requiring a thorough examination and analysis to understand their relative importance and impact. Thus, the main aim of this research is to enhance the contractor’s ability to predict and manage site overheads by identifying and analyzing the main factors influencing the site overheads costs in the Egyptian construction industry. Through a comprehensive literature review, key factors were first identified and subsequently validated using a thorough comparative analysis of data from 55 real-life construction projects. Through this comparative analysis, the relationship between each factor and site overheads percentage as well as each site overheads subcategory and each project construction phase was identified and examined. Furthermore, correlation analysis was done to check for multicollinearity and identify factors with the highest impact. The findings of this research offer valuable insights into the key drivers of site overhead costs in the Egyptian construction industry. By understanding these factors, construction professionals can make informed decisions regarding the estimation and management of site overhead costs.
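The multicollinearity screening step described above can be sketched with a correlation matrix. The numbers below are synthetic stand-ins for the 55-project dataset, and the variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 55   # mirrors the 55 projects in the study; the data here is synthetic

duration = rng.uniform(6, 36, n)                         # months
contract_value = duration * 2.5 + rng.normal(0, 5, n)    # tracks duration
location_factor = rng.uniform(0.8, 1.2, n)               # roughly independent
overhead_pct = 0.04 * duration + 0.2 * location_factor + rng.normal(0, 0.2, n)

X = np.column_stack([duration, contract_value, location_factor, overhead_pct])
corr = np.corrcoef(X, rowvar=False)

# Flag variable pairs with |r| > 0.8 as potential multicollinearity.
names = ["duration", "contract_value", "location_factor", "overhead_pct"]
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.8:
            print(names[i], names[j], round(corr[i, j], 2))
```

Highly collinear predictors would then be dropped or combined before estimating the overheads model.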

Keywords: comparative analysis, cost estimation, construction management, site overheads

Procedia PDF Downloads 18
40318 Effect of Bank-Specific and Macroeconomic Factors on Credit Risk of Islamic Banks in Pakistan

Authors: Mati Ullah, Shams Ur Rahman

Abstract:

The purpose of this research study is to investigate the effect of macroeconomic and bank-specific factors on credit risk in Islamic banking in Pakistan. The future of financial institutions largely depends on how well they manage risks, and credit risk is an important type of risk affecting the banking sector. The current study uses quarterly data for a period of six years, from 1 July 2014 to 30 June 2020. The data set consisted of secondary data, extracted from the websites of the State Bank and the World Bank and from the financial statements of the concerned banks. In this study, an ordinary least squares model was used for the analysis of the data. The results supported the hypothesis that macroeconomic and bank-specific factors have a significant effect on credit risk. Among the macroeconomic variables, inflation and the exchange rate have positive, significant effects on credit risk, while gross domestic product has a negative, significant relationship with credit risk; the corporate rate has no significant relation with credit risk. Among the internal variables, size, management efficiency, net profit share income, and capital adequacy have been shown to positively and significantly influence credit risk, while the loan-to-deposit ratio has a negative, insignificant relationship with credit risk. The contribution of this article is that similar conclusions have been reached regarding the influence of bank-specific factors on credit risk.
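An OLS regression of this form can be sketched with a least-squares fit. The 24 quarterly observations below are simulated with assumed coefficient signs (inflation positive, GDP negative), purely to show the estimation mechanics, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 24   # quarterly observations over six years, as in the study

# Synthetic macro and bank-specific regressors (illustrative only).
inflation = rng.normal(8, 2, n)
gdp_growth = rng.normal(4, 1, n)
bank_size = rng.normal(12, 0.5, n)          # e.g. log of total assets
credit_risk = (0.4 * inflation - 0.6 * gdp_growth
               + 0.3 * bank_size + rng.normal(0, 0.5, n))

# OLS via least squares: beta = argmin ||X beta - y||^2
X = np.column_stack([np.ones(n), inflation, gdp_growth, bank_size])
beta, *_ = np.linalg.lstsq(X, credit_risk, rcond=None)
for name, b in zip(["const", "inflation", "gdp_growth", "bank_size"], beta):
    print(name, round(b, 2))
```

A full analysis would also report standard errors and p-values to judge which coefficients are significant.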

Keywords: credit risk, Islamic banks, macroeconomic variables, bank-specific variables

Procedia PDF Downloads 17
40317 A Qualitative Study Examining the Process of EFL Course Design from the Perspectives of Teachers

Authors: Iman Al Khalidi

Abstract:

Recently, English has become the language of globalization and technology. In turn, this has resulted in a seemingly bewildering array of influences and trends in the domain of the TESOL curriculum. In light of these changes, higher education has to provide a new and more powerful kind of education: it should prepare students to be more engaged citizens, more capable of solving complex problems at work, and well prepared to lead a meaningful life. In response to this, universities, colleges, schools, and departments have to operate in light of the requirements and challenges of the global and technological era. Consequently, they have to focus on the adoption of a contemporary curriculum that goes in line with the pedagogical shift from a teaching-centered approach to a learning-centered approach. There has also been noticeable emphasis on the crucial importance of developing and professionalizing teachers in order to engage them in the process of curriculum development and action research. This is a qualitative study that aims at understanding and exploring the process of designing EFL courses by teachers at the tertiary level from the perspectives of the participants in a professional context in TESOL, in the Department of English at a private college in Oman. It is a case study that stands on the philosophy of the qualitative approach. It employs multiple methods for collecting qualitative data: semi-structured interviews with teachers, focus group discussions with students, and document analysis. The collected data have been analyzed qualitatively by adopting Miles and Huberman's approach, using procedures of reduction, coding, displaying, and conclusion drawing and verification.

Keywords: course design, components of course design, case study, data analysis

Procedia PDF Downloads 545
40316 A Qualitative Study Examining the Process of Course Design from the Perspectives of Teachers

Authors: Iman Al Khalidi

Abstract:

Recently, English has become the language of globalization and technology. In turn, this has resulted in a seemingly bewildering array of influences and trends in the domain of the TESOL curriculum. In light of these changes, higher education has to provide a new and more powerful kind of education: it should prepare students to be more engaged citizens, more capable of solving complex problems at work, and well prepared to lead a meaningful life. In response to this, universities, colleges, schools, and departments have to operate in light of the requirements and challenges of the global and technological era. Consequently, they have to focus on the adoption of a contemporary curriculum that goes in line with the pedagogical shift from a teaching-centered approach to a learning-centered approach. There has also been noticeable emphasis on the crucial importance of developing and professionalizing teachers in order to engage them in the process of curriculum development and action research. This is a qualitative study that aims at understanding and exploring the process of designing EFL courses by teachers at the tertiary level from the perspectives of the participants in a professional context in TESOL, in the Department of English at a private college in Oman. It is a case study that stands on the philosophy of the qualitative approach. It employs multiple methods for collecting qualitative data: semi-structured interviews with teachers, focus group discussions with students, and document analysis. The collected data have been analyzed qualitatively by adopting Miles and Huberman's approach, using procedures of reduction, coding, displaying, and conclusion drawing and verification.

Keywords: course design, components of course design, case study, data analysis

Procedia PDF Downloads 442
40315 Accidental Compartment Fire Dynamics: Experiment, Computational Fluid Dynamics Weakness and Expert Interview Analysis

Authors: Timothy Onyenobi

Abstract:

Accidental fires and their dynamics, as they relate to building compartmentation and the impact of compartment morphology, are still an ongoing area of study, especially with the use of computational fluid dynamics (CFD) modeling methods. With better knowledge on this subject come better solution recommendations from fire engineers. Interviews were carried out for this study, in which it was identified that response perspectives on accidental fire differed: the fire engineers provided qualitative data based on "what is expected in real fires", while the firefighters provided information on "what actually obtains in real fires". This further led to a study and analysis of two real and comprehensively instrumented fire experiments: the Open Plan Office Project by the National Institute of Standards and Technology (NIST), USA (to study time to flashover), and the TF2000 project by the Building Research Establishment (BRE), UK (to test for conformity with Building Regulation requirements). The findings from the analysis of the experiments revealed the relative yet critical weakness of fire prediction using a CFD model (usually used by fire engineers), as well as explaining the differences in the response perspectives of the fire engineers and firefighters from the interview analysis.

Keywords: CFD, compartment fire, experiment, fire fighters, fire engineers

Procedia PDF Downloads 338
40314 Survival Data with Incomplete Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Censored survival data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The approach is applied to survival outcomes within the class of generalized linear models, and the method requires the estimation of the parameters of the distribution of the covariates. In this paper, we consider clinical trial data with five covariates, four of which have some missing values, in data that are also subject to censoring.
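The expansion at the heart of the method of weights can be sketched as follows: each record with a missing categorical covariate is replaced by one pseudo-record per level, weighted by the current estimate of that level's probability. The full method conditions on the outcome as well; this sketch, with made-up levels and probabilities, uses marginal probabilities for brevity:

```python
# One E-step expansion of the "method of weights" for a missing
# categorical covariate (illustrative data; the full method would use
# posterior probabilities given the observed outcome).
levels = ["A", "B", "C"]
probs = {"A": 0.5, "B": 0.3, "C": 0.2}       # current parameter estimates

records = [
    {"time": 12.0, "event": 1, "stage": "B"},
    {"time": 30.0, "event": 0, "stage": None},    # covariate missing
]

expanded = []
for rec in records:
    if rec["stage"] is not None:
        expanded.append({**rec, "weight": 1.0})   # complete case: weight 1
    else:
        for lv in levels:
            expanded.append({**rec, "stage": lv, "weight": probs[lv]})

for row in expanded:
    print(row["time"], row["stage"], row["weight"])
```

The M-step then fits the survival model to the expanded data with these weights, and the two steps iterate until the estimates converge.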

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull Distribution

Procedia PDF Downloads 406
40313 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band

Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant K. Srivastava

Abstract:

An approach was evaluated for the retrieval of soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. Linear regression analysis between the scattering coefficients and soil moisture content was used to select a suitable incidence angle for retrieval of soil moisture content; the 25° incidence angle was found to be the most suitable. Support vector regression was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE), and Nash-Sutcliffe Efficiency (NSE). At HH-polarization, the values of %Bias, RMSE, and NSE were found to be 2.9451, 1.0986, and 0.9214, respectively; at VV-polarization, they were 3.6186, 0.9373, and 0.9428, respectively.
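The three performance indices can be sketched directly from their definitions. The observed/estimated pairs below are made-up illustrative values, not the study's measurements:

```python
import math

def bias_pct(obs, est):
    """%Bias: total deviation of estimates relative to the observed total."""
    return 100.0 * sum(e - o for o, e in zip(obs, est)) / sum(obs)

def rmse(obs, est):
    return math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / len(obs))

def nse(obs, est):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the observed mean."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - e) ** 2 for o, e in zip(obs, est))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

observed  = [10.2, 14.8, 19.5, 25.1, 30.7]   # e.g. volumetric soil moisture (%)
estimated = [11.0, 14.1, 20.3, 24.6, 29.8]

print(round(bias_pct(observed, estimated), 3))
print(round(rmse(observed, estimated), 3))
print(round(nse(observed, estimated), 3))
```

In the study these indices are computed on the SVR estimates at each polarization; the sketch only shows the metric definitions.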

Keywords: bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE

Procedia PDF Downloads 428
40312 Image Recognition and Anomaly Detection Powered by GANs: A Systematic Review

Authors: Agastya Pratap Singh

Abstract:

Generative Adversarial Networks (GANs) have emerged as powerful tools in the fields of image recognition and anomaly detection due to their ability to model complex data distributions and generate realistic images. This systematic review explores recent advancements and applications of GANs in both image recognition and anomaly detection tasks. We discuss various GAN architectures, such as DCGAN, CycleGAN, and StyleGAN, which have been tailored to improve accuracy, robustness, and efficiency in visual data analysis. In image recognition, GANs have been used to enhance data augmentation, improve classification models, and generate high-quality synthetic images. In anomaly detection, GANs have proven effective in identifying rare and subtle abnormalities across various domains, including medical imaging, cybersecurity, and industrial inspection. The review also highlights the challenges and limitations associated with GAN-based methods, such as instability during training and mode collapse, and suggests future research directions to overcome these issues. Through this review, we aim to provide researchers with a comprehensive understanding of the capabilities and potential of GANs in transforming image recognition and anomaly detection practices.

Keywords: generative adversarial networks, image recognition, anomaly detection, DCGAN, CycleGAN, StyleGAN, data augmentation

Procedia PDF Downloads 20
40311 Choice of Landscape Elements for the Quality of Life of Residents Living in Apartment Housing: Case Study of Bhopal, India

Authors: Ankita Srivastava, Yogesh K. Garg

Abstract:

Housing provides comfort and well-being, leading towards quality of life. Earlier research has established that landscape elements enhance residents' quality of life through the significant experiences that occur due to their presence in housing. This paper tries to identify the preferred landscape elements that enhance the quality of life of residents living in apartments. Hence, landscape elements that can be planned in the open spaces of housing, as well as quality of life components, were identified from secondary data sources. Experts were asked to identify the quality of life components with respect to landscape elements. A questionnaire survey of residents living in apartment housing in Bhopal, India was conducted. The statistical analysis of the survey data facilitated exploring the preference of landscape elements for quality of life in apartment housing. The final ranking was compiled from the experts' opinions, residents' perceptions, and factor analysis results to gain insight into the preference of landscape elements for quality of life in apartments. The preferences of landscape elements presented in the paper may provide an overview of planning for apartment housing that may be used by architects, planners, and developers for enhancing residents' quality of life.

Keywords: landscape elements, quality of life, residents, housing

Procedia PDF Downloads 261
40310 A Study of Blockchain Oracles

Authors: Abdeljalil Beniiche

Abstract:

A limitation of smart contracts is that they cannot access external data that might be required to control the execution of business logic. Oracles can be used to provide external data to smart contracts. An oracle is an interface that delivers data from external sources outside the blockchain for a smart contract to consume. Oracles can deliver different types of data depending on the industry and requirements. In this paper, we study and describe the widely used blockchain oracles. Then, we elaborate on their potential role, technical architecture, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching a consensus about a certain inquiry and tasks.
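The inbound-oracle pattern can be sketched as below. This is an illustrative Python model, not tied to any specific chain or oracle service: the "contract" only reads values the oracle has published, and the oracle authenticates its data source:

```python
class PriceOracle:
    """Minimal inbound-oracle pattern: an off-chain service posts data
    into storage the contract can read (illustrative sketch only)."""
    def __init__(self, trusted_source):
        self.trusted_source = trusted_source
        self.store = {}                       # stands in for on-chain storage

    def push(self, key, value, source):
        if source != self.trusted_source:     # oracle authenticates its feed
            raise PermissionError("untrusted data source")
        self.store[key] = value

class SettlementContract:
    """A smart contract cannot fetch external data itself; it consumes
    whatever the oracle has published."""
    def __init__(self, oracle):
        self.oracle = oracle

    def payout_due(self, key, strike):
        price = self.oracle.store.get(key)
        if price is None:
            raise LookupError("oracle has not reported yet")
        return price >= strike

oracle = PriceOracle(trusted_source="exchange-feed")
contract = SettlementContract(oracle)
oracle.push("BTC/USD", 65000, source="exchange-feed")
print(contract.payout_due("BTC/USD", 60000))   # True
```

A human oracle fits the same interface: the pushed value is a consensus answer to an inquiry rather than an automated feed.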

Keywords: blockchain, oracles, oracles design, human oracles

Procedia PDF Downloads 136
40309 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study researches the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
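The report-preprocessing step can be sketched as keyword extraction with crude negation handling. The report snippets, finding vocabulary, and five-token negation window below are all illustrative assumptions (not the Indiana dataset or the study's pipeline); in the study the resulting features would feed a Random Forest rather than the toy rule used here:

```python
import re
from collections import Counter

# Hypothetical radiology report snippets (not from the Indiana dataset).
reports = [
    ("Heart size normal. Lungs are clear. No acute disease.", "normal"),
    ("Large right pleural effusion with adjacent atelectasis.", "abnormal"),
    ("No focal consolidation, pneumothorax or effusion.", "normal"),
    ("Patchy bibasilar opacities concerning for pneumonia.", "abnormal"),
]

FINDINGS = {"effusion", "atelectasis", "opacities", "pneumonia",
            "consolidation", "pneumothorax"}
NEGATIONS = {"no", "without", "negative"}

def extract_features(text):
    """Count finding terms, skipping any preceded by a negation word
    within the previous five tokens -- a crude stand-in for the NLP step."""
    tokens = re.findall(r"[a-z]+", text.lower())
    positive = Counter()
    for i, tok in enumerate(tokens):
        if tok in FINDINGS and not (set(tokens[max(0, i - 5):i]) & NEGATIONS):
            positive[tok] += 1
    return positive

for text, label in reports:
    feats = extract_features(text)
    predicted = "abnormal" if feats else "normal"
    print(label, predicted, dict(feats))
```

Real pipelines use dedicated negation detectors (and, per the abstract, LLM-derived features), but the expand-tokenize-filter shape is the same.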

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 44
40308 Identifying Environmental Adaptive Genetic Loci in Calotropis procera (Estabragh): Population Genetics and Landscape Genetics Analyses

Authors: Masoud Sheidaei, Mohammad-Reza Kordasti, Fahimeh Koohdar

Abstract:

Calotropis procera (Aiton) W.T.Aiton (Apocynaceae) is an economically and medicinally important plant species: an evergreen, perennial shrub growing in arid and semi-arid climates that can tolerate very low annual rainfall (150 mm) and a dry season. The plant can also tolerate a temperature range of 20 to 30°C but is not frost tolerant. This plant species prefers free-draining sandy soils but can also grow in alkaline and saline soils. It is found at a range of altitudes, from exposed coastal sites to medium elevations up to 1300 m. Due to the morpho-physiological adaptations of C. procera and its ability to tolerate various abiotic stresses, this taxon can compete with desirable pasture species and form dense thickets that interfere with stock management, particularly mustering activities. Calotropis procera grows only in the southern part of Iran, where it comprises a limited number of geographical populations. We used different population genetics and landscape genetics analyses to produce data on geographical populations of C. procera based on a molecular genetic study using SCoT molecular markers. First, we used spatial principal components analysis (sPCA), as it can analyze data in a reduced space and can be used for co-dominant markers as well as presence/absence data, as is the case with SCoT molecular markers. This method also carries out Moran's I and Mantel tests to reveal spatial autocorrelation and test for the occurrence of isolation by distance (IBD). We also performed random forest analysis to identify the importance of spatial and geographical variables on genetic diversity. Moreover, we used both RDA (redundancy analysis) and LFMM (latent factor mixed model) to identify the genetic loci significantly associated with geographical variables. A niche modelling analysis was carried out to predict the present potential distribution area of these plants and also the area predicted by the year 2050. The results obtained will be discussed in this paper.

Keywords: population genetics, landscape genetics, Calotropis procera, niche modeling, SCoT markers

Procedia PDF Downloads 93
40307 Second Order Cone Optimization Approach to Two-stage Network DEA

Authors: K. Asanimoghadam, M. Salahi, A. Jamalian

Abstract:

Data envelopment analysis (DEA) is an approach to measuring the efficiency of decision-making units with multiple inputs and outputs. Many decision-making units also contain decision-making subunits that are not considered in most data envelopment analysis models. Moreover, the inputs and outputs of decision-making units are usually considered desirable, while in some real-world problems the nature of some inputs or outputs is undesirable. In this study, we evaluate the efficiency of two-stage decision-making units, where some outputs are undesirable, using two non-radial models, the SBM and ASBM models. We formulate the nonlinear ASBM model as a second-order cone optimization problem. Finally, we compare the two models under both external and internal evaluation approaches on two real-world examples in the presence of undesirable outputs. The results show that, in both external and internal evaluations, the overall efficiency value of the ASBM model is greater than or equal to that of the SBM model, and that, in internal evaluation, the ASBM model is more flexible than the SBM model.
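
For readers unfamiliar with DEA, the classical input-oriented radial (CCR) model below illustrates the basic efficiency computation as a linear program; the paper's non-radial SBM/ASBM network models and their second-order cone reformulation are more involved. The data and the `ccr_efficiency` helper are purely illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR DEA efficiency of DMU `o`.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)                        # decision vector z = [theta, lam]
    c[0] = 1.0
    A_in = np.hstack([-X[:, [o]], X])          # X lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

X = np.array([[2.0, 4.0]])           # one input, two DMUs
Y = np.array([[2.0, 2.0]])           # one output
theta_A = ccr_efficiency(X, Y, 0)    # unit on the efficient frontier -> 1.0
theta_B = ccr_efficiency(X, Y, 1)    # uses twice the input for the same output -> 0.5
```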

Keywords: network DEA, conic optimization, undesirable output, SBM

Procedia PDF Downloads 194
40306 Decision Tree Analysis of Risk Factors for Intravenous Infiltration among Hospitalized Children: A Retrospective Study

Authors: Soon-Mi Park, Ihn Sook Jeong

Abstract:

This retrospective study aimed to identify risk factors for intravenous (IV) infiltration in hospitalized children. The participants were 1,174 children in the test sample and 424 children in the validation sample who were admitted to a general hospital, received peripheral intravenous injection therapy at least once, and had complete records. Frequencies and percentages or means and standard deviations were calculated, and decision tree analysis was used to screen for the most important risk factors for IV infiltration in hospitalized children. The decision tree analysis showed that the most important traditional risk factors for IV infiltration were the use of ampicillin/sulbactam, IV insertion site (lower extremities), and medical department (internal medicine), in both the test and validation samples. The rate of correct classification was 92.2% in the test sample and 90.1% in the validation sample. More careful attention should be paid to patients who are administered ampicillin/sulbactam, have an IV site in the lower extremities, or have internal medical problems, in order to prevent or detect infiltration.
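
Decision tree analysis of the kind described above repeatedly picks the split that most reduces class impurity. Below is a minimal, from-scratch sketch of the Gini split criterion on hypothetical records; the actual study worked with 1,174 clinical records in dedicated software, not this toy data:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label list: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Find the (feature, value) split minimizing weighted Gini impurity,
    the criterion CART-style decision trees apply at each node."""
    n = len(rows)
    best = (None, None, gini(labels))          # (feature index, value, score)
    for f in range(len(rows[0])):
        for v in {r[f] for r in rows}:
            left = [y for r, y in zip(rows, labels) if r[f] == v]
            right = [y for r, y in zip(rows, labels) if r[f] != v]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (f, v, score)
    return best

# toy records: (drug, iv_site, department) -> infiltration yes/no
rows = [("ampicillin", "lower", "internal"), ("ampicillin", "upper", "internal"),
        ("other", "lower", "surgery"), ("other", "upper", "surgery")]
labels = ["yes", "yes", "no", "no"]
feature, value, score = best_split(rows, labels)   # drug column splits perfectly
```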

Keywords: decision tree analysis, intravenous infiltration, child, validation

Procedia PDF Downloads 176
40305 Performance Analysis of Scalable Secure Multicasting in Social Networking

Authors: R. Venkatesan, A. Sabari

Abstract:

The growth of social networking on the Internet calls for a scalable, authenticated, and secure group communication model such as multicasting. Multicasting is an inter-network service that offers efficient delivery of data from a source to multiple destinations. Even though multicast has been very successful at providing an efficient, best-effort data delivery service for large groups, it has proved complex to extend other features to multicast in a scalable way. Separately, the requirement for securing electronic information has become gradually more apparent. Since multicast applications are deployed for mainstream purposes, the need to secure multicast communications will become significant.

Keywords: multicasting, scalability, security, social network

Procedia PDF Downloads 292
40304 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication

Authors: Vedant Janapaty

Abstract:

Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Also known as the “kidneys of our planet”, they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and the loss of native species are ailing our wetlands. There is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project attempts to use satellite data and correlate the derived metrics with in situ observations collected at five estuaries. Satellite images were processed to calculate 7 spectral indices (SIs) using Python, and average SI values were calculated per month over 23 years. Publicly available data from 6 sites at ELK was used to obtain 10 parameters (OPs), and average OP values were likewise calculated per month over 23 years. Linear correlations between the 7 SIs and 10 OPs were found to be inadequate (correlation = 1 to 64%). Fourier transform analysis was then performed on the 7 SIs; dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the 10 OPs. Better correlations were observed between SIs and OPs at certain time delays (0-, 3-, 4-, and 6-month delays), and ML was performed again. The OPs saw improved R² values in the range of 0.2 to 0.93. This approach can be used to obtain periodic analyses of overall wetland health from satellite indices. It demonstrates that remote sensing can be used to develop correlations with critical parameters that measure eutrophication in situ, and can be used by practitioners to easily monitor wetland health.
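
The Fourier step described above, extracting dominant frequencies and amplitudes from monthly spectral-index series, can be sketched as follows on a synthetic 23-year monthly signal with an annual cycle (the series and its amplitudes are illustrative, not the project's data):

```python
import numpy as np

# 23 years of monthly values with a dominant annual cycle plus a weaker
# semi-annual one, standing in for a spectral-index time series
months = np.arange(23 * 12)
series = (2.0 + 0.8 * np.sin(2 * np.pi * months / 12)
              + 0.1 * np.sin(2 * np.pi * months / 6))

spectrum = np.fft.rfft(series - series.mean())
freqs = np.fft.rfftfreq(series.size, d=1.0)        # cycles per month
dominant = np.argmax(np.abs(spectrum))
period_months = 1.0 / freqs[dominant]              # expect ~12 (annual cycle)
amplitude = 2.0 * np.abs(spectrum[dominant]) / series.size   # expect ~0.8
```

The extracted period and amplitude per index are exactly the kind of features the abstract feeds to the ML model.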

Keywords: estuary, remote sensing, machine learning, Fourier transform

Procedia PDF Downloads 104
40303 Evolution and Obstacles Encountered in the Realm of Sports Tourism in Pakistan

Authors: Muhammad Saleem

Abstract:

Tourism stands as one of the most swiftly expanding sectors globally, contributing 10% of overall worldwide GDP. It holds a vital role in generating income, fostering employment opportunities, alleviating poverty, facilitating foreign exchange earnings, and advancing intercultural understanding. The industry encompasses a spectrum of activities, including transportation, communication, hospitality, catering, entertainment, and advertising. The objective of this study is to assess the evolution of, and obstacles encountered by, sports tourism in Pakistan. In pursuit of this objective, the relevant literature was scrutinized, and data were acquired from 60 respondents selected through simple random sampling. The survey comprised close-ended questions directed to all participants. Analytical tools such as the mean, mode, median, graphs, and percentages were employed for data analysis. The findings indicate that the mean, mode, and median consistently yield results surpassing the 70% mark, underscoring that heightened development within sports tourism significantly augments its progress. Effective governance demonstrates a favorable influence on sports tourism, and increased government-provided safety and security could amplify its expansion, attracting a higher number of tourists and consequently propelling the growth of the sports tourism sector. This study holds substantial significance for both academic scholars and industry practitioners within Pakistan's tourism landscape, as previous explorations in this realm have been relatively limited.
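
A minimal sketch of the kind of descriptive summary the study reports (mean, mode, median, percentages), computed here on hypothetical 5-point Likert responses; the actual survey items and 60-respondent data are not reproduced:

```python
from statistics import mean, median, mode

# hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
# to an item such as "government-provided safety and security would expand sports tourism"
responses = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 4]
pct_agree = 100 * sum(r >= 4 for r in responses) / len(responses)
summary = {"mean": mean(responses), "median": median(responses),
           "mode": mode(responses), "percent_agree": round(pct_agree, 1)}
```

A "surpassing the 70% mark" finding, as in the abstract, would correspond to `percent_agree` values above 70 across items.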

Keywords: obstacles-spots, evolution-tourism, sports-pakistan, sports-obstacles-pakistan

Procedia PDF Downloads 56
40302 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time, and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between the two adverse forces of estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data-generating process changes over time. Hence, in this paper, we give a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; in the formal description of physical systems, the level of assumptions can therefore be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only dependence has so far been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness.
Subsequently, the data set is identified that on average minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
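
The convergence-versus-representativeness tradeoff can be illustrated with a toy bias-variance calculation: averaging over a longer trailing window shrinks the variance of the estimator, but when the true mean drifts, it admits older, less representative points whose bias grows. The linear-drift model below is only an illustration of the tradeoff, not the paper's semimartingale framework:

```python
def mse_of_window(n, drift=0.02, sigma=1.0):
    """Expected squared error of the trailing-n-point sample mean when the
    true mean drifts linearly: the variance term sigma^2 / n shrinks with
    more data, while the squared-bias term (drift * (n - 1) / 2)^2 grows as
    older, non-representative points enter the window."""
    return sigma**2 / n + (drift * (n - 1) / 2) ** 2

# the optimal window is interior: neither one point nor all available data
n_opt = min(range(1, 500), key=mse_of_window)
```

With these toy parameters the optimum sits at a moderate window length, mirroring the paper's claim that neither "use all data" nor "use only the latest point" is optimal in a non-ergodic system.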

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 148
40301 Tax Morale Dimensions Analysis in Portugal and Spain

Authors: Cristina Sá, Carlos Gomes, António Martins

Abstract:

The reasons that explain different behaviors towards tax obligations in similar countries are not yet completely understood. The main purpose of this paper is to identify and compare the factors that influence tax morale levels in Portugal and Spain, using data from the European Values Study (EVS). Based on a sample of 2,652 individuals, a factor analysis was used to extract the underlying dimensions of the tax morale of Portuguese and Spanish taxpayers. The results show that sociological and behavioral factors, psychological factors, and political factors are important for a good understanding of taxpayers' behavior in the Iberian Peninsula. This paper's added value lies in the analysis of a wide range of variables and in the comparison between Portugal and Spain. Our conclusions provide insights that tax authorities and politicians can use to better focus their strategies and actions in order to increase compliance, reduce tax evasion, fight the underground economy, and increase their country's competitiveness.
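
As an illustration of the kind of dimension extraction used here, the sketch below performs principal-component extraction from a correlation matrix, a common first step in exploratory factor analysis, on synthetic survey items driven by one latent attitude. The data are hypothetical, and the EVS analysis itself was presumably run in dedicated statistical software:

```python
import numpy as np

def extract_factors(data, n_factors):
    """Principal-component extraction from the correlation matrix: loadings
    are eigenvectors scaled by sqrt(eigenvalues), a common starting point
    (before rotation) in exploratory factor analysis."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    corr = np.corrcoef(z, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:n_factors]
    loadings = vecs[:, order] * np.sqrt(vals[order])
    explained = vals[order] / corr.shape[0]        # share of total variance
    return loadings, explained

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
# three survey items driven by one latent attitude plus item-specific noise
items = np.hstack([latent + 0.3 * rng.normal(size=(300, 1)) for _ in range(3)])
loadings, explained = extract_factors(items, 1)
```

All three items load strongly on the single extracted dimension, which is how coherent "tax morale" dimensions would surface from related questionnaire items.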

Keywords: compliance, tax morale, Portugal, Spain

Procedia PDF Downloads 308
40300 Power Integrity Analysis of Power Delivery System in High Speed Digital FPGA Board

Authors: Anil Kumar Pandey

Abstract:

Power plane noise is the most significant source of signal integrity (SI) issues in a high-speed digital design. In this paper, a power integrity (PI) analysis of multiple power planes in the power delivery system of a 12-layer high-speed FPGA board is presented. All 10 power planes of the high-speed digital (HSD) board are analyzed separately using a 3D electromagnetics-based PI solver; a transient simulation is then performed on the combined PI data of all planes, along with the voltage regulator modules (VRMs) and 70 current-drawing chips, to obtain the board-level power noise coupling onto different high-speed signals. De-coupling capacitors are placed between the power planes and ground to reduce power noise coupling into the signals.
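
The reasoning behind the de-coupling capacitor placement can be sketched with the usual back-of-envelope target-impedance calculation; all numbers below are hypothetical, not taken from the board in this paper:

```python
import math

vdd = 1.0                 # rail voltage, V
ripple = 0.05             # allowed ripple as a fraction of the rail (5%)
i_transient = 10.0        # worst-case transient current step, A
f_target = 100e6          # highest frequency the PDN must cover, Hz

# target PDN impedance: allowed voltage deviation / current step
z_target = vdd * ripple / i_transient          # 5 mOhm

# ideal capacitance whose impedance 1/(2*pi*f*C) meets z_target at f_target
c_required = 1 / (2 * math.pi * f_target * z_target)
```

In practice the required capacitance is spread across many capacitors of different values (plus the plane capacitance itself) so that the impedance stays below target across the whole frequency band, which is what the EM-based PI solver verifies.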

Keywords: power integrity, power-aware signal integrity analysis, electromagnetic simulation, channel simulation

Procedia PDF Downloads 436
40299 Multi Data Management Systems in a Cluster Randomized Trial in Poor Resource Setting: The Pneumococcal Vaccine Schedules Trial

Authors: Abdoullah Nyassi, Golam Sarwar, Sarra Baldeh, Mamadou S. K. Jallow, Bai Lamin Dondeh, Isaac Osei, Grant A. Mackenzie

Abstract:

A randomized controlled trial is the "gold standard" for evaluating the efficacy of an intervention. Large-scale, cluster-randomized trials, however, are expensive and difficult to conduct. To guarantee the validity and generalizability of findings, high-quality, dependable, and accurate data management systems are necessary: robust data management is crucial for optimizing and validating the quality, accuracy, and dependability of trial data. There is a scarcity of literature on the difficulties of data collection in clinical trials in low-resource settings, which may raise concerns. Effective data management systems and implementation goals should be part of trial procedures, and publicizing the creative clinical data management techniques used in clinical trials should boost public confidence in a study's conclusions and encourage replication. This report details the development and deployment of multiple data management systems and methodologies in the ongoing pneumococcal vaccine schedules trial in rural Gambia. We implemented six different data management, synchronization, and reporting systems using Microsoft Access, REDCap, SQL, Visual Basic, Ruby, and ASP.NET; data synchronization tools were additionally developed to integrate data from these systems into the central server for the reporting systems. Clinician, laboratory, and field data validation systems and methodologies are the main topics of this report. Our process development efforts across all domains were driven by the complexity of research project data collected in real time, online reporting, data synchronization, and methods for cleaning and verifying data. Consequently, we effectively used multiple data management systems, demonstrating the value of creative approaches in enhancing the consistency, accuracy, and reporting of trial data in a poor resource setting.
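
A minimal sketch of the kind of field-level validation rule such data management systems apply before records are synchronized to the central server; the field names and ranges below are hypothetical, not the trial's actual case report forms:

```python
# each rule maps a required field to a predicate the value must satisfy
RULES = {
    "age_months":  lambda v: 0 <= v <= 60,        # cohort age range check
    "visit_date":  lambda v: bool(v),             # required, non-empty field
    "temperature": lambda v: 30.0 <= v <= 43.0,   # plausible axillary temp, deg C
}

def validate(record):
    """Return the names of fields that are missing or fail their rule
    (an empty list means the record is clean and may be synchronized)."""
    return [f for f, ok in RULES.items() if f not in record or not ok(record[f])]

# a record with an implausible temperature entry is flagged for cleaning
errors = validate({"age_months": 7, "visit_date": "2023-04-02", "temperature": 52.1})
```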

Keywords: data management, data collection, data cleaning, cluster-randomized trial

Procedia PDF Downloads 27
40298 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches

Authors: Mariam Matiashvili

Abstract:

Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalizing opinions requires the use of particular syntactic-pragmatic structures - arguments - that add credibility to a statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative, and knowing which elements make up an argumentative text in a particular language helps users of that language improve their skills. Natural language processing (NLP) has also become especially relevant recently; in this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. This research deals with the linguistic analysis of the argumentative structures of Georgian political speeches - particularly the linguistic structure, characteristics, and functions of the parts of an argumentative text: claims, support statements, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help to identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates; consequently, the texts are of a dialogical nature, representing a discussion between two or more people (most often between a journalist and a politician).
The research uses the following approaches to identify and analyze the argumentative structures. Lexical classification and analysis: identifying the lexical items that are relevant in the process of creating argumentative texts, and building a lexicon of argumentation (groups of words gathered from a semantic point of view). Grammatical analysis and classification: grammatical analysis of the words and phrases identified on the basis of the arguing lexicon. Argumentation schemes: describing and identifying the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between the above-mentioned components. For example, if an identified argumentation scheme is "Argument from Analogy", the identified lexical items semantically express analogy too, and in Georgian they are most likely adverbs. As a result, we created a lexicon of the words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.
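
The lexicon-based identification step can be sketched as a simple cue-word matcher; the cue words below are hypothetical English stand-ins for entries in the Georgian argumentation lexicon described above:

```python
# a minimal lexicon mapping argumentative roles to cue words (illustrative only)
CUE_LEXICON = {
    "claim":   {"must", "should", "clearly", "undoubtedly"},
    "support": {"because", "since", "therefore", "consequently"},
    "attack":  {"however", "but", "although", "nevertheless"},
}

def label_cues(sentence):
    """Return the argumentative roles whose cue words appear in the sentence."""
    tokens = {t.strip(".,;!?").lower() for t in sentence.split()}
    return sorted(role for role, cues in CUE_LEXICON.items() if tokens & cues)

labels = label_cues("We must act now, because the evidence is clear; however, costs matter.")
```

A real pipeline would add the grammatical analysis layer on top of this, since (as the abstract notes) part of speech matters for mapping cues to argumentation schemes.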

Keywords: Georgian, argumentation schemes, argumentation structures, argumentation lexicon

Procedia PDF Downloads 74