Search results for: heterogeneous massive data
23545 Analyzing the Evolution of Adverse Events in Pharmacovigilance: A Data-Driven Approach
Authors: Kwaku Damoah
Abstract:
This study presents a comprehensive data-driven analysis to understand the evolution of adverse events (AEs) in pharmacovigilance. Utilizing data from the FDA Adverse Event Reporting System (FAERS), we employed three analytical methods: rank-based, frequency-based, and percentage change analyses. These methods assessed temporal trends and patterns in AE reporting, focusing on various drug-active ingredients and patient demographics. Our findings reveal significant trends in AE occurrences, with both increasing and decreasing patterns from 2000 to 2023. This research highlights the importance of continuous monitoring and advanced analysis in pharmacovigilance, offering valuable insights for healthcare professionals and policymakers to enhance drug safety. Keywords: event analysis, FDA adverse event reporting system, pharmacovigilance, temporal trend analysis
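The percentage-change analysis described in this abstract can be sketched as follows; the yearly counts are illustrative and not taken from FAERS.

```python
# Percentage-change analysis of yearly adverse-event (AE) counts.
# Illustrative counts only; a real analysis would aggregate FAERS
# reports per active ingredient and year.

def percent_change(counts):
    """Year-over-year percentage change for a list of yearly AE counts."""
    changes = []
    for prev, curr in zip(counts, counts[1:]):
        changes.append(100.0 * (curr - prev) / prev)
    return changes

yearly_counts = [120, 150, 180, 162]     # e.g. reports in 2000-2003
changes = percent_change(yearly_counts)  # [25.0, 20.0, -10.0]
```

Rising and falling trends then correspond to runs of positive and negative changes in the resulting series.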
Procedia PDF Downloads 48
23544 Agglomerative Hierarchical Clustering Using the Tθ Family of Similarity Measures
Authors: Salima Kouici, Abdelkader Khelladi
Abstract:
In this work, we begin with a presentation of the Tθ family of usual similarity measures for multidimensional binary data. Subsequently, some properties of these measures are proposed. Finally, the impact of using different inter-element measures on the results of agglomerative hierarchical clustering methods is studied. Keywords: binary data, similarity measure, Tθ measures, agglomerative hierarchical clustering
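A minimal sketch of agglomerative clustering on binary data is shown below; it uses the Jaccard similarity as a stand-in for one member of a binary similarity family, since the Tθ measures themselves are not defined in this abstract.

```python
# Single-linkage agglomerative clustering on binary vectors, with
# Jaccard similarity standing in for a binary similarity measure.
# (The Tθ family itself is not reproduced here.)

def jaccard(a, b):
    """Jaccard similarity between two equal-length binary vectors."""
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return both / either if either else 1.0

def single_linkage(items, n_clusters):
    """Repeatedly merge the two most similar clusters until
    n_clusters remain; clusters are lists of item indices."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = max(jaccard(items[a], items[b])
                          for a in clusters[i] for b in clusters[j])
                if best is None or sim > best[0]:
                    best = (sim, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

data = [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1)]
groups = single_linkage(data, 2)  # the first two vectors merge first
```

Swapping `jaccard` for a different inter-element measure changes which clusters merge, which is exactly the sensitivity the abstract studies.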
Procedia PDF Downloads 482
23543 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies
Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal
Abstract:
Advances in data capturing from outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (light detection and ranging) provides a new way to build outcrop-based reservoir models, which offer a crucial piece of information for understanding heterogeneities in sandstone facies through high-resolution images and data sets. This study presents the detailed application of an outcrop-based sandstone facies connectivity model, combining information gathered from traditional fieldwork with detailed digital point-cloud data from LiDAR to develop an intermediate small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiScan Pro (v1.8.0) was used for digital data collection and post-processing, with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point clouds of different sets of sandstone facies with well-marked top and bottom boundaries, in conjunction with field sedimentology. This yields a highly accurate qualitative sandstone facies connectivity model, which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model that can serve as an analogue for subsurface reservoir studies. Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model
Procedia PDF Downloads 226
23542 Spatial-Temporal Clustering Characteristics of Dengue in the Northern Region of Sri Lanka, 2010-2013
Authors: Sumiko Anno, Keiji Imaoka, Takeo Tadono, Tamotsu Igarashi, Subramaniam Sivaganesh, Selvam Kannathasan, Vaithehi Kumaran, Sinnathamby Noble Surendran
Abstract:
Dengue outbreaks are affected by biological, ecological, socio-economic and demographic factors that vary over time and space. These factors have been examined separately and still require systematic clarification. The present study aimed to investigate the spatial-temporal clustering relationships between these factors and dengue outbreaks in the northern region of Sri Lanka. Remote sensing (RS) data gathered from several satellites were used to develop an index comprising rainfall, humidity and temperature data. RS data gathered by ALOS/AVNIR-2 were used to detect urbanization, and a digital land cover map was used to extract land cover information. Other data on relevant factors and dengue outbreaks were collected from institutions and existing databases. The analyzed RS data and databases were integrated into geographic information systems, enabling temporal analysis, spatial statistical analysis and space-time clustering analysis. Our results showed that the greater the number of ecological, socio-economic and demographic factors that were above average or present in combination, the higher the rates of significant space-time dengue clusters. Keywords: ALOS/AVNIR-2, dengue, space-time clustering analysis, Sri Lanka
Procedia PDF Downloads 476
23541 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt realized range-based threshold estimation for high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information about the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for parametric estimation. The asymptotic theories are mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, we demonstrate how the proposed approaches can be practically applied to financial data. Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
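The realized range-based variance estimator mentioned above can be sketched as a scaled sum of squared log high-low ranges over intra-day intervals; the 1/(4 ln 2) Parkinson scaling and the prices below are assumptions for illustration, not the paper's exact estimator (which also applies thresholding for jumps).

```python
import math

# Realized range-based variance: sum of squared log high-low ranges
# over intra-day intervals, scaled by 1/(4 ln 2) (the Parkinson
# correction, which makes the estimator unbiased under Brownian
# motion). Prices are illustrative.

def realized_range(highs, lows):
    """Daily realized range-based variance estimate from per-interval
    high and low prices."""
    return sum(math.log(h / l) ** 2
               for h, l in zip(highs, lows)) / (4 * math.log(2))

highs = [101.0, 102.5, 102.0]
lows = [100.0, 101.2, 101.1]
rrv = realized_range(highs, lows)
```

Unlike return-based estimators, the high-low range uses the full intra-day price path of each interval, which is the intra-day information the abstract says return-based estimators lose.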
Procedia PDF Downloads 158
23540 Exchanging Radiology Reporting System with Electronic Health Record: Designing a Conceptual Model
Authors: Azadeh Bashiri
Abstract:
Introduction: To improve the design of the electronic health record system in Iran, health information systems must be integrated on the basis of a common language so that information can be interpreted and exchanged with that system. Background: This study provides a conceptual model of a radiology reporting system using the Unified Modeling Language. The proposed model can solve the problem of integrating this information system with the electronic health record system. By using this model and designing its services accordingly, radiology reporting systems can readily connect to the electronic health record in Iran, facilitating the transfer of radiology report data. Methods: This cross-sectional study was conducted in 2013. The study population comprised 22 experts working at the Imaging Center of Imam Khomeini Hospital in Tehran, and the sample coincided with the population. The research tool was a questionnaire prepared by the researcher to determine the information requirements. Content validity and the test-retest method were used to measure the validity and reliability of the questionnaire, respectively. Data were analyzed with an average index using SPSS, and Visual Paradigm software was used to design the conceptual model. Result: Based on the requirements assessment of the experts and the related literature, administrative, demographic and clinical data, radiological examination results and, where an anesthesia procedure was performed, anesthesia data were suggested as the minimum data set for the radiology report, and a class diagram was designed on that basis. A use case diagram was also drawn by identifying the radiology reporting system process.
Conclusion: Given the role of radiology reports in the electronic health record system for diagnosing and managing patients' clinical problems, the conceptual model proposed here supports the systematic design of a radiology reporting system and would eliminate the problem of data sharing between these systems and the electronic health record system. Keywords: structured radiology report, information needs, minimum data set, electronic health record system in Iran
Procedia PDF Downloads 254
23539 University Students’ Perception on Public Transit in Dhaka City
Authors: Mosabbir Pasha, Ijaj Mahmud Chowdhury, M. A. Afrahim Bhuiyann
Abstract:
With increasing populations and intensive land use, huge traffic demand is being generated worldwide in both developing and developed countries. As a developing country, Bangladesh has faced the same problem in recent years, producing huge numbers of daily trips, and traffic demand continues to grow day by day. The transport system in Dhaka is heterogeneous, reflecting the heterogeneity in socio-economic and land use patterns, and trips are produced for different purposes such as work, business and education. Due to the significant concentration of educational institutions, a large share of trips is generated for educational purposes, and a major portion of educational trips is made by university-going students, most of whom travel by car, bus, train, taxi or rickshaw. The aim of the study was to find out university students' perception of public transit ridership. A survey was conducted among 330 students from eight different universities. It was found that 26% of the trips produced by university-going students are made by public bus service and only 5% by train. The car share is 16%, and 12% of the trips are made by private taxi. The study found that more than 42 percent of students' families reside outside Dhaka, and these students tend to prefer the bus over other options. Among those who chose to walk most of the time, over 40 percent have families residing outside Dhaka, and of these, over 85 percent tend to live in a mess (shared student housing), generally choosing a location neighbouring their university so that they can reach their destination on foot. On the other hand, among those who travel by car, 80 percent have families residing inside Dhaka. The study also revealed that the most important reason that keeps students from using public transit is poor service.
Negative attitudes such as discomfort and uneasiness in using public transit also reduce its usage. Poor waiting areas are another major cause of not using public transit, and insufficient security also plays a significant role. On the contrary, the fare is not a problem for students who use public transit as a mode of transportation. Students also think stations are not far from their home or institution and that they do not need to wait long for buses or trains. Accessibility to public transit was also found to be moderate. Keywords: traffic demand, fare, poor service, public transit ridership
Procedia PDF Downloads 268
23538 An Application of Remote Sensing for Modeling Local Warming Trend
Authors: Khan R. Rahaman, Quazi K. Hassan
Abstract:
Global changes in climate, environment, economies, populations, governments, institutions, and cultures converge in localities. Changes at a local scale, in turn, contribute to global changes as well as being affected by them. Our hypothesis is built on the consideration that temperature does vary at the local level (i.e., local warming) in comparison to the predictions of models at the regional and/or global scale. To date, the bulk of the research relating local places to global climate change has been top-down, from the global toward the local, concentrating on methods of impact analysis that take as a starting point climate change scenarios derived from global models, even though these have little regional or local specificity. Thus, our focus is to understand such trends over southern Alberta, which will enable decision makers, scientists, the research community, and local people to adapt their policies based on local-level temperature variations and to act accordingly. The specific objectives of this study are: (i) to understand the local warming (temperature in particular) trend in the context of temperature normals during the period 1961-2010 at point locations, using meteorological data; (ii) to validate the data by using specific yearly data; and (iii) to delineate the spatial extent of the local warming trends and to understand the influential factors so that local governments can adapt. Existing data provide evidence of such changes, and future research will emphasize validating this hypothesis using remotely sensed data (i.e., the MODIS product by NASA). Keywords: local warming, climate change, urban area, Alberta, Canada
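A local warming trend of the kind described can be sketched as a least-squares slope over annual mean temperatures; the figures below are illustrative, not southern Alberta records.

```python
# Least-squares linear trend of annual mean temperature: a minimal
# sketch of quantifying a local warming trend at one point location.
# Temperatures are illustrative.

def linear_trend(years, temps):
    """Slope (degrees per year) of the least-squares line fit."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    num = sum((x - my) * (t - mt) for x, t in zip(years, temps))
    den = sum((x - my) ** 2 for x in years)
    return num / den

years = list(range(1961, 1966))
temps = [3.0, 3.1, 3.2, 3.3, 3.4]   # annual means, degrees C
warming = linear_trend(years, temps)  # ~0.1 degrees per year
```

Comparing such point-location slopes against regional model predictions is one simple way to test the "local warming differs from regional models" hypothesis.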
Procedia PDF Downloads 339
23537 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges
Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch
Abstract:
Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences between tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints that specifically measure a mechanism is relatively straightforward, the interpretation of big data is more complex and benefits from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow the benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted; they confirmed that aggregating predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest has no gold standard but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases and services in biology and medicine.
We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions. Keywords: big data interpretation, datathon, systems toxicology, verification
Procedia PDF Downloads 278
23536 Scalable Learning of Tree-Based Models on Sparsely Representable Data
Authors: Fares Hedayatit, Arnauld Joly, Panagiotis Papadimitriou
Abstract:
Many machine learning tasks, such as text annotation, usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than of the number of non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated in the current version of scikit-learn (scikit-learn.org), the most popular open-source Python machine learning library. Keywords: big data, sparsely representable data, tree-based models, scalable learning
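A simplified sketch of the idea behind sparsity-aware split finding follows; it only illustrates that the per-feature scan cost can depend on the number of non-zero entries rather than the number of rows, and it is not scikit-learn's actual implementation.

```python
# Sparsity-aware candidate-split enumeration for one feature column.
# The column is stored CSC-style as (row indices, values) for its
# non-zero entries only, so the work below scales with nnz, not with
# the total number of rows. This is an illustrative sketch, not the
# algorithm from the paper or from scikit-learn.

def candidate_thresholds(col_indices, col_values, n_rows):
    """Sorted midpoints between distinct observed values of a sparse
    feature column, treating all unlisted rows as implicit zeros."""
    values = sorted(set(col_values))
    if len(col_indices) < n_rows:  # implicit zeros are present
        values = sorted(set(values) | {0.0})
    return [(a + b) / 2 for a, b in zip(values, values[1:])]

# A feature with only 3 non-zeros among 1000 rows:
thresholds = candidate_thresholds([4, 17, 801], [2.0, 5.0, 2.0], 1000)
# midpoints of {0.0, 2.0, 5.0} -> [1.0, 3.5]
```

A dense scan would have touched all 1000 rows; here only the three stored entries (plus the implicit zero) are examined.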
Procedia PDF Downloads 263
23535 On Estimating the Low Income Proportion with Several Auxiliary Variables
Authors: Juan F. Muñoz-Rosas, Rosa M. García-Fernández, Encarnación Álvarez-Verdejo, Pablo J. Moya-Fernández
Abstract:
Poverty measurement is a very important topic in many studies in the social sciences. One of the most important indicators when measuring poverty is the low income proportion, which gives the proportion of a population classified as poor. This indicator is generally unknown, and for this reason it is estimated using survey data obtained from official surveys carried out by statistical agencies such as Eurostat. The main feature of such survey data is that they contain several variables. The variable used to estimate the low income proportion is called the variable of interest. The survey data may contain several additional variables, known as auxiliary variables, related to the variable of interest; when this is the case, they can be used to improve the estimation of the low income proportion. In this paper, we use Monte Carlo simulation studies to analyze numerically the performance of estimators based on several auxiliary variables. In this simulation study, we considered real data sets obtained from the 2011 European Union Survey on Income and Living Conditions. Results derived from this study indicate that the estimators based on auxiliary variables are more accurate than the naive estimator. Keywords: inclusion probability, poverty, poverty line, survey sampling
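A naive, design-unweighted version of the low income proportion can be sketched as follows; the 60%-of-median poverty line is a common Eurostat convention assumed here, and the incomes are illustrative. The auxiliary-variable estimators the paper compares are not reproduced.

```python
import statistics

# Naive estimator of the low income proportion: the share of people
# whose income falls below a poverty line set at 60% of the median
# income. Incomes are illustrative; survey weights and auxiliary
# variables are ignored in this sketch.

def low_income_proportion(incomes, threshold_ratio=0.6):
    line = threshold_ratio * statistics.median(incomes)
    return sum(1 for y in incomes if y < line) / len(incomes)

incomes = [500, 800, 1200, 1500, 1800, 2000, 2500, 9000]
p = low_income_proportion(incomes)  # 2 of 8 fall below the line
```

Auxiliary-variable estimators improve on this by exploiting variables correlated with income that are known for the whole population.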
Procedia PDF Downloads 458
23534 TessPy – Spatial Tessellation Made Easy
Authors: Jonas Hamann, Siavash Saki, Tobias Hagen
Abstract:
Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps or gaps is called tessellation. It aids the understanding of spatial structure and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular and irregular tessellations. While regular tessellation methods, like square grids or hexagon grids, are suitable for addressing pure geometry problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, however, allow the borders between subareas to be defined more realistically based on urban features like a road network or Points of Interest (POI). Even though Python is one of the most-used programming languages for spatial analysis, there is currently no library that combines different tessellation methods to enable users and researchers to compare techniques. To close this gap, we propose TessPy, an open-source Python package that combines all the above-mentioned tessellation methods and makes them easily accessible. The core functions of TessPy implement five tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. With the regular methods, users can set the resolution of the tessellation, which defines the fineness of the discretization and the desired number of tiles. The irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used are open-source and provided by OpenStreetMap, and can be easily extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, will be discussed.
All dependencies can be installed using conda or pip; the former is recommended. Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies
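A minimal square-grid tessellation can be sketched in plain Python as below; this does not reproduce TessPy's own API, only the idea of assigning points to regular tiles of a chosen resolution.

```python
# Minimal square-grid tessellation: cut the bounding box into equal
# square tiles and assign each point a (col, row) tile index. The
# coordinates are illustrative; TessPy's actual API is not shown.

def square_tessellation(points, tile_size):
    """Map each (x, y) point to the (col, row) index of its tile."""
    min_x = min(x for x, _ in points)
    min_y = min(y for _, y in points)
    return [(int((x - min_x) // tile_size), int((y - min_y) // tile_size))
            for x, y in points]

points = [(0.2, 0.3), (0.9, 0.1), (1.4, 1.8)]
tiles = square_tessellation(points, 1.0)  # [(0, 0), (0, 0), (1, 1)]
```

Shrinking `tile_size` increases the fineness of the discretization and the number of tiles, which is exactly the resolution parameter the abstract describes for regular methods.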
Procedia PDF Downloads 128
23533 A CFD Analysis of Hydraulic Characteristics of the Rod Bundles in the BREST-OD-300 Wire-Spaced Fuel Assemblies
Authors: Dmitry V. Fomichev, Vladimir V. Solonin
Abstract:
This paper presents findings from a numerical simulation of the flow in 37-rod fuel assembly models spaced by a double-wire trapezoidal wrapping, as applied to the BREST-OD-300 experimental nuclear reactor. Data on the static pressure distribution within the models, and equations for determining the fuel bundle flow friction factors, have been obtained. Recommendations are provided on using the turbulence closure models available in ANSYS Fluent. A comparative analysis has been performed against the existing empirical equations for determining the flow friction factors, and the fit between the calculated and experimental data is shown. An analysis of the experimental data and of the results of the numerical simulation of the BREST-OD-300 fuel rod assembly hydrodynamic performance is presented. Keywords: BREST-OD-300, wire-spaced, fuel assembly, computational fluid dynamics
Procedia PDF Downloads 382
23532 Analysis of Lead Time Delays in Supply Chain: A Case Study
Authors: Abdel-Aziz M. Mohamed, Nermeen Coutry
Abstract:
Lead time is an important measure of supply chain performance: it affects both customer satisfaction and the total cost of inventory. This paper presents the results of a study on the analysis of the customer order lead time for a multinational company. In the study, the lead time was divided into three stages: order entry, order fulfillment, and order delivery. A sample of 2,425 order lines from the company records was considered for this study. The sample data include information regarding customer orders from the time of order entry until order delivery, as well as the lead time of each stage for different orders. Summary statistics on the lead time data reveal that about 30% of the orders were delivered after the scheduled due date. Multiple linear regression analysis revealed that component type, logistics parameters, order size and customer type have a significant impact on lead time. Analysis of the stages of lead time indicates that stage 2 consumes over 50% of the lead time. A Pareto analysis was made to study the reasons for customer order delay in each of the three stages, and recommendations were given to resolve the problem. Keywords: lead time reduction, customer satisfaction, service quality, statistical analysis
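The Pareto analysis step can be sketched as follows; the delay reasons and counts are illustrative, not the study's data.

```python
# Pareto analysis sketch: rank delay reasons by frequency and find the
# smallest set of reasons covering at least 80% of delayed orders.
# Reason names and counts are illustrative.

def pareto_cutoff(reason_counts, coverage=0.8):
    """Return reasons, most frequent first, until `coverage` of all
    occurrences is accounted for."""
    total = sum(reason_counts.values())
    ranked = sorted(reason_counts, key=reason_counts.get, reverse=True)
    chosen, covered = [], 0
    for reason in ranked:
        chosen.append(reason)
        covered += reason_counts[reason]
        if covered / total >= coverage:
            break
    return chosen

delays = {"stock-out": 50, "credit hold": 25, "paperwork": 15, "carrier": 10}
top = pareto_cutoff(delays)  # ["stock-out", "credit hold", "paperwork"]
```

Focusing corrective actions on the few reasons returned is the usual "vital few" reading of a Pareto chart.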
Procedia PDF Downloads 731
23531 Analyzing Medical Workflows Using Market Basket Analysis
Authors: Mohit Kumar, Mayur Betharia
Abstract:
The healthcare domain, with the emergence of the Electronic Medical Record (EMR), collects a lot of data that have been attracting data mining experts' interest. In the past, doctors relied on their intuition while making critical clinical decisions. This paper presents the means to analyze medical workflows to get business insights out of huge medical databases. Market Basket Analysis (MBA), a special data mining technique, has been widely used in the marketing and e-commerce fields to discover associations between products bought together by customers. It helps businesses increase their sales by analyzing the purchasing behavior of customers and pitching the right customer with the right product. This paper is an attempt to demonstrate Market Basket Analysis applications in healthcare. In particular, it discusses applications of the Market Basket Analysis algorithm 'Apriori' within healthcare in major areas such as analyzing the workflow of diagnostic procedures, up-selling and cross-selling of healthcare systems, and making healthcare systems more user-friendly. In the paper, we have demonstrated the MBA applications using angiography systems, but they can be extrapolated to other modalities as well. Keywords: data mining, market basket analysis, healthcare applications, knowledge discovery in healthcare databases, customer relationship management, healthcare systems
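A minimal Apriori-style support count over "baskets" of co-occurring procedures can be sketched as follows; the baskets and procedure names are illustrative, not clinical data, and only the pair-support step of Apriori is shown.

```python
from itertools import combinations

# Minimal Apriori-style support counting: find procedure pairs that
# co-occur in at least `min_support` of the workflow "baskets".
# Baskets are illustrative, not clinical data.

def frequent_pairs(baskets, min_support):
    """Pairs of items whose co-occurrence frequency >= min_support."""
    counts = {}
    for basket in baskets:
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    n = len(baskets)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

baskets = [
    ["ecg", "angiography", "stent"],
    ["ecg", "angiography"],
    ["ecg", "ultrasound"],
]
pairs = frequent_pairs(baskets, min_support=0.5)
# {("angiography", "ecg"): 2/3}
```

Full Apriori extends this by growing frequent itemsets level by level and pruning candidates whose subsets are infrequent.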
Procedia PDF Downloads 172
23530 Infrastructural Investment and Economic Growth in Indian States: A Panel Data Analysis
Authors: Jonardan Koner, Basabi Bhattacharya, Avinash Purandare
Abstract:
The study is focused on finding out the impact of infrastructural investment on economic development in Indian states. It uses panel data analysis to measure the impact of infrastructural investment on Real Gross Domestic Product in Indian states, incorporating the unit root test, the cointegration test, pooled ordinary least squares, the fixed effect approach, the random effect approach, and the Hausman test. The study analyzes panel data (annual in frequency) ranging from 1991 to 2012 and concludes that infrastructural investment has a desirable impact on economic development in Indian states. Finally, the study reveals that infrastructural investment significantly explains the variation in the economic indicator. Keywords: infrastructural investment, real GDP, unit root test, cointegration test, pooled ordinary least squares, fixed effect approach, random effect approach, Hausman test
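The fixed effect (within) estimator used in such panel analyses can be sketched for a one-regressor panel as follows; the state labels and numbers are illustrative, not the study's dataset.

```python
# Within (fixed-effects) estimator for a one-regressor panel: demean
# x and y within each state to absorb state fixed effects, then run
# OLS on the demeaned observations. Data are illustrative.

def within_estimator(panel):
    """panel: dict mapping state -> list of (x, y) observations.
    Returns the fixed-effects slope estimate."""
    num = den = 0.0
    for obs in panel.values():
        xs = [x for x, _ in obs]
        ys = [y for _, y in obs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

panel = {
    "A": [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)],    # within-state slope 2
    "B": [(1.0, 12.0), (2.0, 14.0), (3.0, 16.0)],  # same slope, higher level
}
beta = within_estimator(panel)  # 2.0, unaffected by the level shift
```

The Hausman test mentioned in the abstract then compares this fixed-effects estimate against the random-effects one to choose between the two specifications.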
Procedia PDF Downloads 402
23529 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models
Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana
Abstract:
The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to the accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires the development of a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of load scheduling and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one utilizing the original dataset and the other using the adjusted data. A comparative analysis is conducted to evaluate the forecast accuracy improvements resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system. Keywords: electricity demand forecasting, load shedding, demand side management, data science
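The adjustment step, adding estimated curtailed load back onto metered demand hour by hour, can be sketched as follows; the megawatt figures are illustrative, not Eskom data.

```python
# Reconstructing underlying demand: add the estimated curtailed load
# back onto the metered demand for each hour, so the adjusted series
# reflects true demand rather than the post-loadshedding reading.
# Figures are illustrative.

def adjust_demand(metered, curtailed):
    """metered, curtailed: hourly MW values aligned by index."""
    if len(metered) != len(curtailed):
        raise ValueError("series must be aligned hour-for-hour")
    return [m + c for m, c in zip(metered, curtailed)]

metered = [900.0, 850.0, 700.0]   # recorded demand (MW)
curtailed = [0.0, 50.0, 200.0]    # estimated load shed (MW)
adjusted = adjust_demand(metered, curtailed)  # [900.0, 900.0, 900.0]
```

A forecasting model trained on the adjusted series then learns the true demand pattern rather than the distortions introduced by the loadshedding schedule.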
Procedia PDF Downloads 61
23528 Corporate Governance and Share Prices: Firm Level Review in Turkey
Authors: Raif Parlakkaya, Ahmet Diken, Erkan Kara
Abstract:
This paper examines the relationship between corporate governance rating and the stock prices of 26 Turkish firms listed on the Turkish stock exchange (Borsa Istanbul) by using panel data analysis over a five-year period. The paper also investigates the stock performance of firms with governance ratings relative to the market portfolio (i.e., the BIST 100 Index) both before and after governance scoring began. The empirical results show that there is no relation between corporate governance rating and stock prices when using panel data for annual variation in both rating score and stock prices. Further analysis indicates the surprising result that while the selected firms outperform the market significantly prior to rating, the same performance does not continue afterwards. Keywords: corporate governance, stock price, performance, panel data analysis
Procedia PDF Downloads 393
23527 Special Education Teachers’ Knowledge and Application of the Concept of Curriculum Adaptation for Learners with Special Education Needs in Zambia
Authors: Kenneth Kapalu Muzata, Dikeledi Mahlo, Pinkie Mabunda Mabunda
Abstract:
This paper presents results of a study conducted to establish special education teachers’ knowledge and application of curriculum adaptation of the 2013 revised curriculum in Zambia. From a sample of 134 respondents (120 special education teachers, 12 education officers, and 2 curriculum specialists), the study collected both quantitative and qualitative data to establish whether teachers understood and applied the concept of curriculum adaptation in teaching learners with special education needs. To obtain data validity and reliability, the researchers collected data by use of mixed methods. Semi-structured questionnaires and interviews were administered. Lesson observations and post-lesson discussions were conducted on 12 teachers selected from the 120 who answered the questionnaires. Frequencies, percentages, and significant differences were derived through the Statistical Package for the Social Sciences. Qualitative data were analyzed with the help of NVIVO qualitative software to create themes and obtain coding density to help with conclusions. Both quantitative and qualitative data were concurrently compared and related. The results revealed that special education teachers lacked a thorough understanding of the concept of curriculum adaptation, thus denying learners with special education needs the opportunity to benefit from the revised curriculum. The teachers were not oriented on the revised curriculum and hence faced numerous challenges trying to adapt the curriculum. The study recommended training of special education teachers in curriculum adaptation. Keywords: curriculum adaptation, special education, learners with special education needs, special education teachers
Procedia PDF Downloads 176
23526 Simultaneous Determination of Methotrexate and Aspirin Using Fourier Transform Convolution Emission Data under Non-Parametric Linear Regression Method
Authors: Marwa A. A. Ragab, Hadir M. Maher, Eman I. El-Kimary
Abstract:
Co-administration of methotrexate (MTX) and aspirin (ASP) can cause a pharmacokinetic interaction and a subsequent increase in blood MTX concentrations, which may increase the risk of MTX toxicity. Therefore, it is important to develop a sensitive, selective, accurate and precise method for their simultaneous determination in urine. A new hybrid chemometric method has been applied to the emission response data of the two drugs. A spectrofluorimetric method for the determination of MTX through measurement of its acid-degradation product, 4-amino-4-deoxy-10-methylpteroic acid (4-AMP), was developed. Moreover, the acid-catalyzed degradation reaction enables the spectrofluorimetric determination of ASP through the formation of its active metabolite, salicylic acid (SA). The proposed chemometric method deals with convolution of the emission data using 8-point sin xi polynomials (discrete Fourier functions) after derivative treatment of these emission data. The first- and second-derivative curves (D1 and D2) were obtained first; these curves were then convolved to obtain the first and second derivatives under Fourier functions (D1/FF and D2/FF). This new application was used to resolve the overlapped emission bands of the degradation products of both drugs, allowing their simultaneous indirect determination in human urine. Not only was this chemometric approach applied to the emission data, but the obtained data were also subjected to non-parametric linear regression analysis (Theil's method). The proposed method was fully validated according to the ICH guidelines, and it yielded linearity ranges of 0.05-0.75 and 0.5-2.5 µg mL-1 for MTX and ASP, respectively. It was found that the non-parametric method was superior to the parametric one in the simultaneous determination of MTX and ASP after the chemometric treatment of the emission spectra of their degradation products.
The work combines the advantages of derivative and convolution using discrete Fourier functions with the reliability and efficacy of non-parametric data analysis. The achieved sensitivity, along with the low LOD (0.01 and 0.06 µg mL-1) and LOQ (0.04 and 0.2 µg mL-1) values for MTX and ASP respectively obtained by the second derivative under Fourier functions (D2/FF), was promising and guarantees the method's applicability for monitoring the two drugs in patients' urine samples.
Keywords: chemometrics, emission curves, derivative, convolution, Fourier transform, human urine, non-parametric regression, Theil's method
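The non-parametric regression step named above can be illustrated with a minimal sketch of Theil's estimator, in which the calibration slope is the median of all pairwise slopes and the intercept is the median residual. The concentration-response values below are hypothetical illustrations, not data from the study.

```python
# Theil's non-parametric regression (median-of-pairwise-slopes estimator).
# The calibration points below are hypothetical, not the study's data.
from itertools import combinations
from statistics import median

def theil_sen(x, y):
    """Slope = median of all pairwise slopes; intercept = median residual."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = median(slopes)
    intercept = median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

# Hypothetical calibration data: concentration (ug/mL) vs. emission response
conc = [0.05, 0.2, 0.35, 0.5, 0.75]
resp = [1.1, 4.0, 7.2, 10.1, 15.0]
slope, intercept = theil_sen(conc, resp)
```

Because medians replace least-squares averages, a single outlying calibration point has little effect on the fitted line, which is the robustness property the abstract credits to the non-parametric approach.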
Procedia PDF Downloads 430
23525 Adopting Structured Mini Writing Retreats as a Tool for Undergraduate Researchers
Authors: Clare Cunningham
Abstract:
Whilst there is a strong global research base on the benefits of structured writing retreats and similar provisions, such as Shut Up and Write events, for academic staff and postgraduate researchers, very little has been published about the worth of such events for undergraduate students. This is despite the fact that, internationally, undergraduate student researchers experience pressures, distractions and feelings towards writing similar to those of more senior members of the academy. This paper reports on a mixed-methods study with cohorts of third-year undergraduate students over the course of four academic years. A range of research instruments was adopted across the study: four questionnaires administered over three academic years, ethnographic recordings collected in the second year, and reflective journal entries and evaluations collated from all four years. The final two years of data collection took place during the period of Covid-19 restrictions, when writing retreats moved to the virtual space, which adds a further dimension of interest to the analysis. The analysis involved collating the quantitative questionnaire data to observe patterns in expressed attitudes towards writing. Qualitative data were analysed thematically and used to corroborate and support the quantitative data where appropriate. The resulting data confirmed that one of the biggest challenges for undergraduate students mirrors those reported in studies of more experienced researchers. This is not surprising, especially given the number of undergraduate students who now work alongside their studies and the increasing number who have caring responsibilities, but it has so far been under-reported.
The data showed that the writing retreat participants all had very positive experiences, with accountability, a sense of community and the avoidance of procrastination among the key aspects. The analysis revealed the sometimes transformative power of these events for a number of students in terms of changing how they viewed writing and themselves as writers. The data presented in this talk support the proposal that retreats should be offered much more widely to undergraduate students across the world.
Keywords: academic writing, students, undergraduates, writing retreat
Procedia PDF Downloads 199
23524 Detecting Overdispersion for Mortality AIDS in Zero-Inflated Negative Binomial Death Rate (ZINBDR) Co-infection Patients in Kelantan
Authors: Mohd Asrul Affedi, Nyi Nyi Naing
Abstract:
Overdispersion is often present in count data; when it occurs, a Negative Binomial (NB) model is commonly used in place of a standard Poisson model. For count-data events such as mortality cases, a Poisson regression model is ordinarily appropriate; however, it becomes unsuitable when the data contain an excess of zero values, in which case the zero-inflated negative binomial model is appropriate. In this article, we modelled mortality cases as the dependent variable with age as a categorical covariate. The objective of this study is to determine whether overdispersion exists in the mortality data of AIDS co-infection patients in Kelantan.
Keywords: negative binomial death rate, overdispersion, zero-inflated negative binomial death rate, AIDS
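A quick screen for the two problems the abstract describes can be sketched as follows: under a Poisson model the variance should roughly equal the mean, so a variance-to-mean ratio well above 1 signals overdispersion, while a zero fraction far above the Poisson expectation signals zero inflation. The monthly counts below are hypothetical, not the Kelantan mortality data, and this screen is a diagnostic sketch, not the study's formal test.

```python
# Simple overdispersion and excess-zero screen for count data.
# The counts below are hypothetical, not the Kelantan mortality data.
from statistics import mean, pvariance

def dispersion_index(counts):
    """Variance-to-mean ratio: ~1 under Poisson, >1 indicates overdispersion."""
    return pvariance(counts) / mean(counts)

def zero_fraction(counts):
    """Share of zero counts; compare with exp(-mean) expected under Poisson."""
    return sum(1 for c in counts if c == 0) / len(counts)

deaths = [0, 0, 0, 1, 0, 2, 0, 9, 0, 0, 3, 12]  # hypothetical monthly counts
print(dispersion_index(deaths))  # well above 1 -> overdispersed
print(zero_fraction(deaths))     # well above exp(-2.25) -> excess zeros
```

When both indicators are elevated, a zero-inflated negative binomial specification is the natural candidate, which is the modelling choice the abstract motivates.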
Procedia PDF Downloads 463
23523 Using Geospatial Analysis to Reconstruct the Thunderstorm Climatology for the Washington DC Metropolitan Region
Authors: Mace Bentley, Zhuojun Duan, Tobias Gerken, Dudley Bonsal, Henry Way, Endre Szakal, Mia Pham, Hunter Donaldson, Chelsea Lang, Hayden Abbott, Leah Wilcynzski
Abstract:
Air pollution has the potential to modify the lifespan and intensity of thunderstorms and the properties of lightning. Using data mining and geovisualization, we investigate how background climate and weather conditions shape variability in urban air pollution and how this, in turn, shapes thunderstorms as measured by the intensity, distribution and frequency of cloud-to-ground lightning. A spatiotemporal analysis was conducted to identify thunderstorms using high-resolution lightning detection network data. Over seven million lightning flashes were used to identify more than 196,000 thunderstorms that occurred between 2006 and 2020 in the Washington, DC Metropolitan Region. Each lightning flash in the dataset was grouped into thunderstorm events by means of a temporal and spatial clustering algorithm. Once the thunderstorm event database was constructed, hourly wind direction, wind speed and atmospheric thermodynamic data were added for the initiation and dissipation times and locations of the 196,000 identified thunderstorms. Hourly aerosol and air quality data for the thunderstorm initiation times and locations were also incorporated into the dataset. Developing thunderstorm climatologies with a lightning tracking algorithm and lightning detection network data proved useful for visualizing the spatial and temporal distribution of urban-augmented thunderstorms in the region.
Keywords: lightning, urbanization, thunderstorms, climatology
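The temporal and spatial clustering idea can be sketched with a greedy pass over time-sorted flashes: a flash joins an existing storm if it is close enough in both time and distance to that storm's most recent flash, otherwise it starts a new storm. The thresholds, coordinates and flashes below are hypothetical; the study's actual algorithm and parameters are not specified in the abstract.

```python
# Greedy temporal-spatial grouping of lightning flashes into storm events.
# Thresholds and flash records are hypothetical illustrations.
from math import hypot

def cluster_flashes(flashes, max_gap_min=30.0, max_dist_km=15.0):
    """Pass over time-sorted (t_min, x_km, y_km) flashes: a flash joins a
    storm if close enough in time and space to that storm's latest flash;
    otherwise it starts a new storm."""
    storms = []
    for t, x, y in sorted(flashes):
        for storm in storms:
            lt, lx, ly = storm[-1]
            if t - lt <= max_gap_min and hypot(x - lx, y - ly) <= max_dist_km:
                storm.append((t, x, y))
                break
        else:
            storms.append([(t, x, y)])
    return storms

flashes = [(0, 0, 0), (5, 2, 1), (12, 4, 3),   # storm A
           (10, 60, 60), (18, 62, 61),          # storm B, far away
           (120, 5, 5)]                         # much later: a new storm
events = cluster_flashes(flashes)
print(len(events))  # 3
```

Once flashes are grouped this way, each event's first and last flashes give the initiation and dissipation times and locations to which the hourly wind, thermodynamic and air quality data are joined.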
Procedia PDF Downloads 76
23522 Real-Time Network Anomaly Detection Systems Based on Machine-Learning Algorithms
Authors: Zahra Ramezanpanah, Joachim Carvallo, Aurelien Rodriguez
Abstract:
This paper aims to detect anomalies in streaming data using machine learning algorithms. To this end, we designed two separate pipelines and evaluated the effectiveness of each separately. The first pipeline, based on supervised machine learning methods, consists of two phases. In the first phase, we trained several supervised models using the UNSW-NB15 dataset, measured the efficiency of each using different performance metrics, and selected the best model for the second phase. At the beginning of the second phase, we first sniffed a local area network using Argus Server. Several types of attacks were simulated, and the sniffed data were then sent to the running algorithm at short intervals. This algorithm can display the result for each packet of received data in real time using the trained model. The second pipeline presented in this paper is based on unsupervised algorithms, in which a Temporal Graph Network (TGN) is used to monitor a local network. The TGN is trained to predict the probability of future states of the network based on its past behavior. Our contribution in this section is the introduction of an indicator to identify anomalies from these predicted probabilities.
Keywords: temporal graph network, anomaly detection, cyber security, IDS
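The two-phase supervised pipeline has a simple shape: train offline on labelled flow records, then score newly sniffed records one at a time. The sketch below uses a tiny nearest-centroid classifier and made-up two-feature flows as stand-ins for the models and UNSW-NB15 features in the paper; it illustrates only the train-then-stream structure, not the actual models used.

```python
# Two-phase pipeline sketch: offline training, then per-record scoring.
# Nearest-centroid model and (duration, bytes/s) features are hypothetical.
from math import dist

def train_centroids(records, labels):
    """Compute the mean feature vector of each class."""
    sums, counts = {}, {}
    for rec, lab in zip(records, labels):
        acc = sums.setdefault(lab, [0.0] * len(rec))
        for i, v in enumerate(rec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(model, rec):
    """Return the label of the nearest class centroid."""
    return min(model, key=lambda lab: dist(model[lab], rec))

# Phase 1: train on labelled flows -- hypothetical values
train_x = [(0.1, 5.0), (0.2, 6.0), (9.0, 900.0), (8.5, 950.0)]
train_y = ["normal", "normal", "attack", "attack"]
model = train_centroids(train_x, train_y)

# Phase 2: score sniffed records as they arrive
for packet in [(0.15, 5.5), (9.2, 870.0)]:
    print(classify(model, packet))
```

In the paper's setting, phase 2 would consume records emitted by Argus at short intervals rather than a fixed list, but the scoring loop is structurally the same.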
Procedia PDF Downloads 103
23521 Diabetes Diagnosis Model Using Rough Set and K-Nearest Neighbor Classifier
Authors: Usiobaifo Agharese Rosemary, Osaseri Roseline Oghogho
Abstract:
Diabetes is a complex group of diseases with a variety of causes; it is a disorder of the body's metabolism of carbohydrates. The application of machine learning to medical diagnosis has been the focus of many researchers, and the use of recognition and classification models as decision support tools has helped medical experts in the diagnosis of diseases. Considering the large volume of medical data, which requires special techniques, experience and high diagnostic skill, an artificial intelligence system that assists medical personnel and enhances their efficiency and accuracy in diagnosis would be an invaluable tool. This study proposes a diabetes diagnosis model using rough sets and the K-nearest neighbor classifier algorithm. The system consists of two modules: a feature extraction module and a predictor module; rough set theory is used to preprocess the attributes, while the K-nearest neighbor classifier is used to classify the given data. The dataset used for this model was taken from the University of Benin Teaching Hospital (UBTH) database. Half of the data was used for training and the other half for testing the system. The proposed model was able to achieve over 80% accuracy.
Keywords: classifier algorithm, diabetes, diagnostic model, machine learning
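The K-nearest neighbor step of the predictor module can be sketched as a majority vote among the k training records closest to a query. The feature pairs below (e.g., a glucose reading and BMI) are hypothetical and not drawn from the UBTH dataset, and the rough-set preprocessing stage is omitted.

```python
# K-nearest-neighbour classification sketch; features are hypothetical
# (glucose level, BMI), not records from the UBTH dataset.
from collections import Counter
from math import dist

def knn_predict(train_x, train_y, query, k=3):
    """Majority label among the k training points closest to the query."""
    neighbours = sorted(zip(train_x, train_y), key=lambda p: dist(p[0], query))
    votes = Counter(label for _, label in neighbours[:k])
    return votes.most_common(1)[0][0]

train_x = [(85, 22.0), (90, 24.5), (160, 31.0), (155, 33.5), (148, 29.0)]
train_y = ["healthy", "healthy", "diabetic", "diabetic", "diabetic"]
print(knn_predict(train_x, train_y, (150, 30.0)))  # "diabetic"
```

In practice, features on different scales (glucose in mg/dL versus BMI) should be normalized before computing distances, otherwise the larger-scaled attribute dominates the vote.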
Procedia PDF Downloads 336
23520 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities respectively, are a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to identify in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which makes broad coverage difficult and the process time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project developed a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16 and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other inputs. The hyperparameters of the MLP model were fine-tuned using grid search with cross-validation, which allowed the optimal values to be identified. This approach proved effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
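The grid-search tuning step has a simple core: evaluate every combination in the hyperparameter grid on held-out data and keep the best-scoring setting. In the sketch below, `evaluate()` is a hypothetical stand-in for training and validating the MLP at each setting, and the grid values are illustrative, not those used in the project.

```python
# Exhaustive grid search over hyperparameter combinations.
# evaluate() is a hypothetical stand-in for an MLP train/validate run.
from itertools import product

def grid_search(grid, evaluate):
    """Try the cartesian product of grid values; return the best setting."""
    names = sorted(grid)
    best_score, best_params = float("-inf"), None
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

grid = {"hidden_units": [32, 64, 128], "learning_rate": [0.01, 0.001]}

def evaluate(params):  # hypothetical validation accuracy per setting
    table = {(64, 0.001): 0.992, (128, 0.001): 0.989, (64, 0.01): 0.95}
    return table.get((params["hidden_units"], params["learning_rate"]), 0.90)

best, score = grid_search(grid, evaluate)
print(best, score)
```

The cost grows multiplicatively with each added hyperparameter, which is why grids are usually kept small or replaced with random search when the search space is large.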
Procedia PDF Downloads 115
23519 MARISTEM: A COST Action Focused on Stem Cells of Aquatic Invertebrates
Authors: Arzu Karahan, Loriano Ballarin, Baruch Rinkevich
Abstract:
Marine invertebrates, a highly diverse group of multicellular organisms, exhibit phenomena that are either absent or highly restricted in vertebrates. These include budding, fission, fusion of ramets and high regenerative power, such as the ability to create a whole new organism from a tiny parental fragment, many of which are controlled by totipotent, pluripotent and multipotent stem cells. Much can therefore be learned from these organisms on both the practical and evolutionary levels, recalling the words often attributed to Darwin: "It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change". The notion of a 'stem cell' denotes a cell with the ability to divide continuously and differentiate into various progenitors and daughter cells. In vertebrates, adult stem cells are rare cells, defined as lineage-restricted (multipotent at best), with tissue- or organ-specific activities; they are located in defined niches and regulate the machinery of homeostasis, repair and regeneration. They are usually categorized by their morphology, tissue of origin, plasticity and potency. This description does not always hold when vertebrate stem cells are compared with marine invertebrate stem cells, which display a far wider range of plasticity and diversity at both the taxonomic and cellular levels. While marine/aquatic invertebrate stem cells (MISC) have recently attracted growing scientific interest, the state of knowledge still lags behind the attention the field deserves. MISC are not only highly potent but, in many cases, abundant (e.g., up to one-third of the animal's cells); they do not reside in permanent niches, and they participate in delayed-aging and whole-body regeneration phenomena, knowledge of which can be clinically relevant. Moreover, they hold massive untapped potential for the discovery of new bioactive molecules for human health (antitumor, antimicrobial) and biotechnology.
The MARISTEM COST Action (Stem Cells of Marine/Aquatic Invertebrates: From Basic Research to Innovative Applications) aims to connect the fragmented European MISC community. Under this scientific umbrella, the Action develops the concept of adult stem cells that share few properties with vertebrate stem cells; organizes meetings, summer schools and workshops; stimulates young researchers; supplies technical and advisory support via short-term scientific missions; and builds new bridges between the MISC community and the biomedical disciplines.
Keywords: aquatic/marine invertebrates, adult stem cell, regeneration, cell cultures, bioactive molecules
Procedia PDF Downloads 169
23518 A Critical Analysis on Gaps Associated with Culture Policy Milieu Governing Traditional Male Circumcision in the Eastern Cape, South Africa
Authors: Thanduxolo Nomngcoyiya, Simon M. Kang’ethe
Abstract:
The paper critically analyses gaps in the cultural policy environments governing traditional male circumcision in the Eastern Cape, as exemplified by an empirical case study. The original study from which this paper is derived utilized a qualitative paradigm and encompassed 28 participants. It used in-depth one-on-one interviews, complemented by focus group discussions and key informant interviews, as methods of data collection, and adopted an interview guide as the data collection instrument. The original study was cross-sectional in nature; the data were audio-recorded and later transcribed during the analysis and coding process. Data were analysed using thematic content analysis, which identified the following major findings on male circumcision policy: lack of clarity on how the policy operates; myths surrounding the prescribed procedures; divergent views on cultural policies between government and the custodians of male circumcision; unclear policies on the criteria for selecting practitioners; and lack of enforcement and implementation of the policy against transgressors. The paper recommends stringent criteria for the selection of practitioners, measures to ensure death-free male circumcision, and collaboration between male circumcision stakeholders and other culture- and tradition-friendly stakeholders.
Keywords: human rights, policy enforcement, traditional male circumcision, traditional surgeons and nurses
Procedia PDF Downloads 297
23517 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and the outputs were then combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, as well as a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. Both raster-based and vector-based accuracy assessments found the Sentinel-based river network products to be superior to the comparator datasets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
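The step of combining multi-temporal classification outputs into a single persistent water bodies product can be sketched as a per-pixel vote: a pixel is kept as persistent water if it was classified as water on at least a given fraction of acquisition dates. The 75% threshold, the combination rule, and the tiny 3x4 masks below are all hypothetical; the abstract does not specify the rule actually used.

```python
# Combine per-date binary water masks into one persistent-water product.
# The 75% persistence threshold and the masks are hypothetical.
def persistent_water(masks, min_fraction=0.75):
    """masks: list of equal-sized 2D binary grids (1 = water, 0 = other)."""
    rows, cols = len(masks[0]), len(masks[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            hits = sum(m[r][c] for m in masks)
            out[r][c] = 1 if hits >= min_fraction * len(masks) else 0
    return out

date1 = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1]]
date2 = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 1, 1, 1]]
date3 = [[1, 0, 0, 1], [1, 1, 1, 0], [0, 1, 1, 0]]
date4 = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1]]
mask = persistent_water([date1, date2, date3, date4])
print(mask)
```

Pixels that appear as water only occasionally (for example, flooded fields on one date) are dropped, leaving the stable channels that the subsequent thinning step reduces to centre lines.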
Procedia PDF Downloads 139
23516 Modeling Local Warming Trend: An Application of Remote Sensing Technique
Authors: Khan R. Rahaman, Quazi K. Hassan
Abstract:
Global changes in climate, environment, economies, populations, governments, institutions and cultures converge in localities. Changes at the local scale, in turn, contribute to global changes as well as being affected by them. Our hypothesis is that temperature varies at the local level (termed 'local warming') relative to the predictions of regional- and global-scale models. To date, the bulk of research relating local places to global climate change has been top-down, from the global toward the local, concentrating on impact-analysis methods that take as their starting point climate change scenarios derived from global models, even though these have little regional or local specificity. Our focus is therefore to understand such trends over southern Alberta, which will enable decision-makers, scientists, the research community and local people to adapt their policies to local temperature variations and act accordingly. The specific objectives of this study are: (i) to understand the local warming trend (temperature in particular) in the context of temperature normals at point locations during the period 1961-2010, using meteorological data; (ii) to validate the data using specific yearly data; and (iii) to delineate the spatial extent of the local warming trends and to understand the influential factors, so that local governments can adapt accordingly. Existing data have provided evidence of such changes, and future research will emphasize validating this hypothesis with remotely sensed data (i.e., the MODIS product by NASA).
Keywords: local warming, climate change, urban area, Alberta, Canada
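A local warming trend at a point location is commonly summarized as the least-squares slope of annual mean temperature against year, which can be sketched as follows. The five-year series below is hypothetical, not station data from southern Alberta, and the abstract does not state which trend estimator the study uses.

```python
# Least-squares trend of annual mean temperature vs. year.
# The temperature series is hypothetical, not southern Alberta data.
def linear_trend(years, temps):
    """Ordinary least-squares slope, in degrees per year."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1961, 1966))
temps = [3.1, 3.0, 3.3, 3.4, 3.5]   # hypothetical annual means (deg C)
slope = linear_trend(years, temps)
print(round(slope * 10, 2), "deg C per decade")
```

Comparing such station-level slopes against the trends implied by regional or global model output is one way to quantify the "local warming" departure the hypothesis describes.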
Procedia PDF Downloads 346