Search results for: approximate graph matching
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1264

364 Quality Parameters of Offset Printing Wastewater

Authors: Kiurski S. Jelena, Kecić S. Vesna, Aksentijević M. Snežana

Abstract:

Samples of tap water and wastewater were collected in three offset printing facilities in Novi Sad, Serbia. Ten physicochemical parameters were analyzed within all collected samples: pH, conductivity, m-alkalinity, p-alkalinity, acidity, carbonate concentration, hydrogen carbonate concentration, active oxygen content, chloride concentration, and total alkali content. All measurements were conducted using standard analytical and instrumental methods. Comparing the results for tap water and wastewater, a clear quality difference was noticeable, since all physicochemical parameters were significantly higher within the wastewater samples. The study also involves the application of simple linear regression analysis to the obtained dataset. Using the software package ORIGIN 5, the pH value was correlated with each of the other physicochemical parameters. Based on the obtained values of the Pearson correlation coefficient, a strong negative correlation between chloride concentration and pH (r = -0.943), as well as between acidity and pH (r = -0.855), was determined. In addition, a statistically significant relationship with pH was obtained only for acidity and chloride concentration, since the F values (247.634 and 182.536) were higher than F-critical (5.59). In this way, the results of the statistical analysis highlighted the most influential parameters of water contamination in offset printing: acidity and chloride concentration. The results showed that the variable dependence could be represented by the general regression model y = a₀ + a₁x + k, which produced matching graphical regressions.
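
As an illustration of the regression step, the minimal sketch below fits the model y = a₀ + a₁x and reports Pearson's r for one parameter pair; it is not the authors' code, and the data values are hypothetical placeholders, not the study's measurements.

```python
# A minimal sketch of simple linear regression with Pearson's r;
# pH and chloride values below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

ph = np.array([9.1, 9.4, 9.8, 10.2, 10.6])                # hypothetical pH values
chloride = np.array([310.0, 280.0, 255.0, 230.0, 200.0])  # hypothetical mg/L

slope, intercept, r, p_value, std_err = stats.linregress(ph, chloride)
print(f"model: y = {intercept:.2f} + {slope:.2f}*x")
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")  # negative r -> negative correlation
```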

Keywords: pollution, printing industry, simple linear regression analysis, wastewater

Procedia PDF Downloads 228
363 A Case Study of Clinicians’ Perceptions of Enterprise Content Management at Tygerberg Hospital

Authors: Temitope O. Tokosi

Abstract:

Healthcare is a human right. The sensitivity of health issues has necessitated the introduction of Enterprise Content Management (ECM) at district hospitals in the Western Cape Province of South Africa. The objective is to understand clinicians' perception of ECM at their workplace. The study is a descriptive case study within a constructivist paradigm. It employed a phenomenological data analysis method using a pattern-matching, deduction-based analytical procedure. Purposive and snowball sampling techniques were applied in selecting participants. Clinicians expressed concerns and frustrations with ECM: it is not integrated with other hospital systems; access points to ECM are inadequate; incorrect labelling of notes and bar-coding wastes time in finding information; desired system features and/or functions (such as search and edit) are not available; hospital management and clinicians are not constantly interacting and discussing; and information turnaround time is unacceptably lengthy. Resolving these problems would involve a positive working relationship between hospital management and clinicians. In addition, prioritising the problems faced by clinicians by relevance can ensure problem-solving that meets clinicians' expectations and the hospital's objectives. Clinicians' perception should invoke attention from hospital management with regard to technology use. The study's results can be generalised across clinician groupings exposed to ECM at various district hospitals because of professional and hospital homogeneity.

Keywords: clinician, electronic content management, hospital, perception, technology

Procedia PDF Downloads 229
362 Views from Shores Past: Palaeogeographic Reconstructions as an Aid for Interpreting the Movement of Early Modern Humans on and between the Islands of Wallacea

Authors: S. Kealy, J. Louys, S. O’Connor

Abstract:

The island archipelago that stretches between the continents of Sunda (Southeast Asia) and Sahul (Australia - New Guinea), comprising much of modern-day Indonesia as well as Timor-Leste, represents the biogeographic region of Wallacea. The islands of Wallacea are significant archaeologically as they have never been connected to the mainlands of either Sunda or Sahul, and thus the colonization by early modern humans of these islands, and subsequently Australia and New Guinea, would have necessitated some form of water crossings. Accurate palaeogeographic reconstructions of the Wallacean Archipelago for this time are important not only for modeling likely routes of colonization but also for reconstructing likely landscapes and hence the resources available to the first colonists. Here we present five digital reconstructions of the coastal outlines of Wallacea and Sahul (Australia and New Guinea) for 65,000, 60,000, 55,000, 50,000, and 45,000 years ago, using the latest bathymetric chart and a sea-level model adjusted to account for the average uplift rate known from Wallacea. These data were also used to reconstruct island areal extent as well as topography for each time period. These reconstructions allowed us to determine the distance from the coast and the relative elevation of the earliest archaeological sites for each island where such records exist. This enabled us to approximate how much effort the exploitation of coastal resources would have taken for early colonists, and how important such resources were. The reconstructions also allowed us to estimate visibility for each island in the archipelago, and to model how intervisible each island was during the period of likely human colonisation. We demonstrate how these models provide archaeologists with an important basis for visualising this ancient landscape and interpreting how it was originally viewed, traversed and exploited by its earliest modern human inhabitants.
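
The intervisibility modelling described above can be approximated with a simple geometric horizon test; the sketch below assumes a spherical Earth and ignores atmospheric refraction, and the elevations and separations used are hypothetical, not the study's data.

```python
# A minimal sketch of an island-intervisibility test: two islands are
# intervisible when the sum of their horizon distances (from observer or
# peak elevations) reaches the sea-level separation between them.
import math

EARTH_RADIUS_M = 6_371_000.0

def horizon_distance_km(elevation_m: float) -> float:
    """Geometric distance to the sea-level horizon for a given elevation."""
    return math.sqrt(2.0 * EARTH_RADIUS_M * elevation_m) / 1000.0

def intervisible(elev_a_m: float, elev_b_m: float, separation_km: float) -> bool:
    return horizon_distance_km(elev_a_m) + horizon_distance_km(elev_b_m) >= separation_km

# e.g., a 500 m peak seen from a 10 m coastal vantage point 90 km away
print(horizon_distance_km(500.0))        # ~79.8 km
print(intervisible(500.0, 10.0, 90.0))   # True
```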

Keywords: Wallacea, palaeogeographic reconstructions, islands, intervisibility

Procedia PDF Downloads 196
361 Organic Matter Removal in Urban and Agroindustry Wastewater by Chemical Precipitation Process

Authors: Karina Santos Silvério, Fátima Carvalho, Maria Adelaide Almeida

Abstract:

The impacts caused by anthropogenic actions on the water environment have been one of the main challenges of modern society. Population growth, added to water scarcity and climate change, points to a need to increase the resilience of production systems and to improve the management of the wastewater generated in the different processes. In this context, the study, developed under the NETA project (New Strategies in Wastewater Treatment), aimed to evaluate the efficiency of the Chemical Precipitation Process (CPP), using hydrated lime (Ca(OH)₂) as a reagent, in wastewater from the agroindustry sector, namely swine, slaughterhouse, and urban wastewater, in order to make the production processes 100% circular, with a direct positive impact on the environment. The purpose of CPP is to innovate in the field of effluent treatment technologies, as it allows rapid application and is economically profitable. In summary, the study was divided into four main stages: 1) application of the reagent in a single step, raising the pH to 12.5; 2) obtaining sludge and treated effluent; 3) natural neutralization of the effluent through carbonation using atmospheric CO₂; and 4) characterization and evaluation of the feasibility of the chemical precipitation technique in the treatment of the different wastewaters through determination of the chemical oxygen demand (COD) and other supporting physicochemical parameters. The results showed average removal efficiencies above approximately 80% for all effluents, with the swine effluent highest at 90% removal, followed by the urban effluent at 88% and the slaughterhouse effluent at 81% on average. Significant improvement was also obtained with regard to colour and odour removal after carbonation to pH 8.00.
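
The removal percentages quoted above follow from the standard COD removal-efficiency calculation; the sketch below shows it with hypothetical influent/effluent values, not the study's measurements.

```python
# A minimal sketch of the COD removal-efficiency arithmetic behind the
# reported percentages; the COD values here are hypothetical.
def removal_efficiency(cod_in: float, cod_out: float) -> float:
    """Percent COD removed by the treatment step."""
    return 100.0 * (cod_in - cod_out) / cod_in

print(removal_efficiency(5000.0, 500.0))  # 90.0%, e.g. a swine-effluent-like figure
```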

Keywords: agroindustry wastewater, urban wastewater, natural carbonation, chemical precipitation technique

Procedia PDF Downloads 71
360 Incidence of Cancer in Patients with Alzheimer's Disease: An 11-Year Nationwide Population-Based Study

Authors: Jun Hong Lee

Abstract:

Background: Alzheimer's disease (AD) increases with age and is characterized by premature, progressive loss of neuronal cells. In contrast, cancer cells show inappropriate cell proliferation and resistance to cell death. Objective: We evaluated the association between cancer and AD and also examined the specific types of cancer. Patients and Methods/Material and Methods: This retrospective, nationwide, longitudinal study used the National Health Insurance Service-Senior cohort (NHIS-Senior) 2002-2013, released by the KNHIS in 2016, comprising 550,000 randomly selected subjects aged over 60. The study included a cohort of 4,408 patients who were first diagnosed with AD between 2003 and 2005. To match each dementia patient, 19,150 subjects were selected from the database by propensity score matching. Results: We enrolled 4,790 patients for analysis in this cohort, and the prevalence of AD was higher in females (19.29%) than in males (17.71%). A higher prevalence of AD was observed in the 70-84 year age group and in the higher income status group. A total of 540 cancers occurred within the observation interval. Overall cancer was less frequent in those with AD (12.25%) than in the controls (18.46%), with a hazard ratio (HR) of 0.704 (95% confidence interval (CI) = 0.64-0.775, p < 0.0001). Conclusion: Our data showed a decreased incidence of overall cancer in patients with AD, similar to previous studies. Patients with AD had a significantly decreased risk of colorectal, lung, and stomach cancer. These incidence rates are lower than, but consistent with, those reported in Western countries. We need further investigation of genetic evidence linking AD to cancer.
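
The matching step above relies on propensity score matching; the minimal sketch below shows one common workflow (logistic model of group membership, then greedy nearest-neighbour matching on the score). It is an assumed illustration, not the study's code, and all covariates and labels are synthetic.

```python
# A minimal sketch of propensity score matching (assumed workflow):
# fit a logistic model of AD status on covariates, then greedily match
# each patient to the nearest-score control without replacement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # hypothetical covariates (age, sex, income)
treated = rng.integers(0, 2, size=1000)  # 1 = AD patient, 0 = candidate control

scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

controls = np.flatnonzero(treated == 0)
matches = {}
for i in np.flatnonzero(treated == 1):
    if controls.size == 0:
        break                             # no unmatched controls left
    j = controls[np.argmin(np.abs(scores[controls] - scores[i]))]
    matches[i] = j                        # nearest-neighbour match on the score
    controls = controls[controls != j]    # match without replacement
```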

Keywords: Alzheimer, cancer, nationwide, longitudinal study

Procedia PDF Downloads 163
359 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide. Hence, there is a need for an efficient prediction model that supports a correct diagnosis of epileptic seizures and an accurate prediction of their type. In this study, we consider how the Effective Connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, as this will enable patients (and caregivers) to take appropriate precautions. We use this feature because effective connectivity begins to change near seizure onset, so seizures can be predicted from it. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six channels of EEG from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained with the proposed scheme in predicting seizures is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method is more suitable for predicting epileptic seizures, and the best results are generally observed in the gamma and beta sub-bands. This work is significantly helpful for clinical applications, especially for the exploitation of online portable devices.
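
For the Granger causality step, a minimal pairwise test between two channels can be run with statsmodels; the sketch below uses synthetic signals as stand-ins for the Freiburg EEG data, and the lag choice is an assumption for illustration.

```python
# A minimal sketch of pairwise Granger-causality estimation between two
# EEG channels; the signals are synthetic stand-ins, not the Freiburg data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 2000
ch1 = rng.normal(size=n)
ch2 = np.roll(ch1, 5) + 0.5 * rng.normal(size=n)  # ch2 lags ch1 -> ch1 "causes" ch2

# Tests whether the second column Granger-causes the first, up to 10 lags
results = grangercausalitytests(np.column_stack([ch2, ch1]), maxlag=10, verbose=False)
p_value = results[5][0]["ssr_ftest"][1]           # F-test p-value at lag 5
print(f"lag-5 F-test p-value: {p_value:.4g}")
```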

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 452
358 Traffic Prediction with Raw Data Utilization and Context Building

Authors: Zhou Yang, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao

Abstract:

Traffic prediction is essential in a multitude of ways in modern urban life. Earlier work in this domain focuses chiefly on two aspects: (1) the accurate forecast of future values in multiple time series and (2) knowledge extraction from spatial-temporal correlations. However, two key considerations for traffic prediction are often missed: the completeness of raw data and the full context of the prediction timestamp. Concentrating on these two drawbacks of earlier work, we devise an approach that addresses both issues in a two-phase framework. First, we utilize the raw trajectories to a greater extent through building a VLA table and data compression. We obtain the intra-trajectory features with graph-based encoding and the inter-trajectory ones with a grid-based model and the technique of back projection, which restores their surrounding high-resolution spatial-temporal environment. To the best of our knowledge, we are the first to study direct feature extraction from raw trajectories for traffic prediction and to attempt the use of raw data with the least degree of reduction. In the prediction phase, we provide a broader context for the prediction timestamp by taking into account the information that is around it in the training dataset. Extensive experiments on several well-known datasets have verified the effectiveness of our solution, which combines the strength of raw trajectory data and prediction context. In terms of performance, our approach surpasses several state-of-the-art methods for traffic prediction.

Keywords: traffic prediction, raw data utilization, context building, data reduction

Procedia PDF Downloads 116
357 Torque Loss Prediction Test Method of Bolted Joints in Heavy Commercial Vehicles

Authors: Volkan Ayik

Abstract:

Loosening as a result of torque loss in bolted joints is one of the most frequently encountered problems resulting in loss of connection between parts. The main reason for this is the dynamic loads to which the joints are subjected while the vehicle is moving. In particular, vibration-induced loads can loosen joints of any size and geometry. The aim of this study is to develop an improved method for estimating the performance under road-induced vibration of the bolted joints of components connected to the chassis in heavy commercial vehicles, before conducting prototype-level vehicle structural strength tests on a proving ground. The frequencies and displacements caused by road-induced vibration loads were determined for the parts connected to the chassis, and various experimental design scenarios were formed by matching specific components and vibration behaviors. In the studies, the influence of the torque, washer, test displacement, and test frequency parameters was observed while maintaining the connection characteristics used on the vehicle, and the sensitivity ratios for these variables were calculated. Based on these experimental design findings, tests were performed on a device developed from Junker's vibration test rig, and the correlation levels between proving ground conditions and the rig tests were determined.

Keywords: bolted joints, Junker's test, loosening failure, torque loss

Procedia PDF Downloads 121
356 Numerical Simulation of a Point Absorber Wave Energy Converter Using OpenFOAM in Indian Scenario

Authors: Pooja Verma, Sumana Ghosh

Abstract:

There is a growing need for alternative ways of power generation worldwide. The reasons can be attributed to the limited resources of fossil fuels, environmental pollution, the increasing cost of conventional fuels, and the low conversion efficiency of existing systems. In this context, one of the potential alternatives for power generation is wave energy. However, it is difficult to estimate the amount of electrical energy generated in an irregular sea condition by experimental and/or analytical methods. Therefore, in this work, a numerical wave tank is developed using the computational fluid dynamics software OpenFOAM, in which the waves2Foam utility is used to carry out the simulation work. The computational domain is a tank of dimensions 5 m × 1.5 m × 1 m with a floating object of dimensions 0.5 m × 0.2 m × 0.2 m. Regular waves are generated at the inlet of the wave tank according to Stokes second-order theory. The main objective of the present study is to validate the numerical model against existing experimental data. The model shows good agreement with the existing experimental data for floater displacement. Later, the model is exploited to estimate the energy extraction due to the movement of such a point absorber in real sea conditions. Scaled-down wave properties such as wave height and wavelength are used as input parameters. Seasonal variations are also considered.
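
The inlet wave generation follows Stokes second-order theory; the sketch below evaluates the standard second-order free-surface elevation, solving the dispersion relation by fixed-point iteration. The wave height, depth, and period are illustrative assumptions, not the paper's inputs.

```python
# A minimal sketch of the Stokes second-order free-surface elevation
# eta = (H/2)cos(theta) + (k H^2/16)(cosh kh (2 + cosh 2kh)/sinh^3 kh)cos(2 theta);
# wave parameters below are assumed values, not the paper's inputs.
import numpy as np

g = 9.81
H, h, T = 0.1, 0.7, 1.5  # wave height (m), water depth (m), period (s) - assumed

# Solve the dispersion relation w^2 = g*k*tanh(k*h) for k by fixed-point iteration
w = 2 * np.pi / T
k = w**2 / g
for _ in range(100):
    k = w**2 / (g * np.tanh(k * h))

def eta(x, t):
    theta = k * x - w * t
    first = (H / 2) * np.cos(theta)
    second = (k * H**2 / 16) * (np.cosh(k * h) / np.sinh(k * h) ** 3) \
             * (2 + np.cosh(2 * k * h)) * np.cos(2 * theta)
    return first + second

print(eta(0.0, 0.0))  # crest elevation at x = 0, t = 0
```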

Keywords: OpenFOAM, numerical wave tank, regular waves, floating object, point absorber

Procedia PDF Downloads 346
355 Hot Corrosion and Oxidation Degradation Mechanism of Turbine Materials in a Water Vapor Environment at a Higher Temperature

Authors: Mairaj Ahmad, L. Paglia, F. Marra, V. Genova, G. Pulci

Abstract:

This study employed the Rene N4 and FSX 414 superalloys, which are used in numerous turbine engine components because of their high strength and outstanding fatigue, creep, thermal, and corrosion-resistance properties. An in-depth examination of corrosion mechanisms in the presence of water vapor at high temperature is necessary, given the industrial trend toward introducing increasing amounts of hydrogen into combustion chambers in order to boost power generation and minimize pollution compared with conventional fuels. These superalloys were oxidized in recent tests for 500, 1000, 2000, 3000, and 4000 hours at 982±5°C with a steady airflow at a flow rate of 10 L/min and 1.5 bar pressure. The superalloys were also examined for wet corrosion for 500, 1000, 2000, 3000, and 4000 hours in a mixture of air and water vapor flowing at a 10 L/min rate. Weight gain, X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDS) were used to assess the oxidation and hot corrosion resistance of these alloys before and after 500, 1000, and 2000 hours. The kinetics of the oxidation/corrosion processes that accompany the formation of the oxide scales are shown in the graph of mass gain versus time. In both dry and wet oxidation, oxides such as Al₂O₃, TiO₂, NiCo₂O₄, Ni₃Al, Ni₃Ti, Cr₂O₃, MnCr₂O₄, and CoCr₂O₄, and certain volatile compounds, notably CrO₂(OH)₂, Cr(OH)₃, Fe(OH)₂, and Si(OH)₄, are formed.

Keywords: hot corrosion, oxidation, turbine materials, high temperature corrosion, super alloys

Procedia PDF Downloads 78
354 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability, and innovation. The study of process capability indices (PCIs) is usually conducted assuming that the process under study is in statistical control and that independent observations are generated over time. However, in practice, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts. Even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts: when observations are autocorrelated, the classical control charts exhibit nonrandom patterns and apparent lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data. Approximate lower confidence limits for various Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are considered to demonstrate the results.
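
The sketch below simulates an AR(1) process and computes the usual Cp and Cpk point estimators, Cp = (USL - LSL)/(6σ) and Cpk = min((USL - µ)/(3σ), (µ - LSL)/(3σ)); the specification limits and AR(1) coefficient are assumed values for illustration.

```python
# A minimal sketch: simulate an AR(1) process and compute Cp and Cpk with
# the usual point estimators; specification limits are assumed values.
import numpy as np

rng = np.random.default_rng(42)
phi, n = 0.5, 500                  # assumed AR(1) coefficient and sample size
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()  # x_t = phi * x_{t-1} + e_t

usl, lsl = 4.0, -4.0               # assumed specification limits
mu, sigma = x.mean(), x.std(ddof=1)

cp = (usl - lsl) / (6 * sigma)
cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")
# Note: under autocorrelation this sigma estimate is biased, which is
# precisely why the paper adjusts the confidence intervals for AR(1) data.
```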

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 377
353 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape

Authors: Chen Bo, Wen Zengping

Abstract:

Building on the results of previous studies, we focus on real ground motion selection and scaling methods for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately to make them compatible with the site-specific hazard level considered. Thus, an optimized selection and scaling method is established that uses not only the Monte Carlo simulation method to create stochastic simulated spectra from the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its application to structural fragility analysis is demonstrated through case studies. Compared to the previous scheme, which did not consider the uncertainty of the target spectrum, the method shown here ensures that the selected records are in good agreement with the median value, standard deviation, and spectral correlation of the target spectrum, and fully reveals the uncertainty inherent in the site-specific hazard level. Meanwhile, it helps improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work can provide a reasonable and reliable basis for structural seismic evaluation under a scenario earthquake environment.
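
The Monte Carlo step above amounts to drawing response spectra from a multivariate lognormal model; the minimal sketch below shows that sampling, where the periods, median spectrum, dispersions, and correlation decay are all assumed placeholder values.

```python
# A minimal sketch of the Monte Carlo step: draw stochastic response spectra
# from a multivariate lognormal model of the target spectrum. All numbers
# below (periods, medians, dispersions, correlation) are assumptions.
import numpy as np

rng = np.random.default_rng(7)
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])    # s
median_sa = np.array([0.8, 1.0, 0.7, 0.4, 0.2])  # g, assumed target medians
beta = np.full(5, 0.6)                           # assumed log-std per period

# Assumed inter-period correlation decaying with log-period distance
lnT = np.log(periods)
corr = np.exp(-np.abs(lnT[:, None] - lnT[None, :]))
cov = np.outer(beta, beta) * corr

# Each row is one simulated spectrum, lognormal about the target median
samples = np.exp(rng.multivariate_normal(np.log(median_sa), cov, size=1000))
print(samples.mean(axis=0))  # compare against median_sa * exp(diag(cov)/2)
```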

Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape

Procedia PDF Downloads 284
352 Autonomous Ground Vehicle Navigation Based on a Single Camera and Image Processing Methods

Authors: Auday Al-Mayyahi, Phil Birch, William Wang

Abstract:

A vision-system-based navigation method for an autonomous ground vehicle (AGV) equipped with a single camera in an indoor environment is presented. The proposed navigation algorithm detects obstacles represented by coloured mini-cones placed in different positions inside a corridor. For the recognition of the relative position and orientation of the AGV with respect to the coloured mini-cones, the features of the corridor structure are extracted using the single-camera vision system. The relative position, offset distance, and steering angle of the AGV from the coloured mini-cones are derived from the simple corridor geometry to obtain a mapped environment in real-world coordinates. The corridor is first captured as an image using the single camera. Image processing functions are then performed to identify the existence of the cones within the environment. A bounding box surrounding each cone allows its location to be identified in a pixel coordinate system. Thus, by matching the mapped and pixel coordinates using a projection transformation matrix, the real offset distances between the camera and the obstacles are obtained. Real-time experiments in an indoor environment are carried out with a wheeled AGV in order to demonstrate the validity and effectiveness of the proposed algorithm.
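
The pixel-to-world mapping via a projection transformation matrix can be illustrated with OpenCV's perspective-transform routines; the four point correspondences below are hypothetical calibration values, not the paper's data.

```python
# A minimal sketch of the pixel-to-world mapping via a projection
# (homography) matrix; the point correspondences are hypothetical.
import numpy as np
import cv2

# Pixel corners of the corridor floor and their known world coordinates (m)
pixel_pts = np.float32([[102, 480], [538, 480], [420, 260], [220, 260]])
world_pts = np.float32([[-0.5, 0.0], [0.5, 0.0], [0.5, 3.0], [-0.5, 3.0]])

H = cv2.getPerspectiveTransform(pixel_pts, world_pts)

# Map a detected cone's bounding-box base point into world coordinates
cone_px = np.float32([[[330, 400]]])
cone_world = cv2.perspectiveTransform(cone_px, H)
print(cone_world)  # lateral offset (x) and distance (y) from the camera, in metres
```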

Keywords: autonomous ground vehicle, navigation, obstacle avoidance, vision system, single camera, image processing, ultrasonic sensor

Procedia PDF Downloads 297
351 Frontier Dynamic Tracking in the Field of Urban Plant and Habitat Research: Data Visualization and Analysis Based on Journal Literature

Authors: Shao Qi

Abstract:

The article uses the CiteSpace knowledge graph analysis tool to sort and visualize the journal literature on urban plants and habitats in the Web of Science and China National Knowledge Infrastructure databases. Based on a comprehensive interpretation of the visualization results from the various data sources and a description of the intrinsic relationships between high-frequency keywords using knowledge mapping, the research hotspots, processes, and evolution trends in this field are analyzed. Relevant case studies are also conducted on the hotspot topics to explore means of landscape intervention and to synthesize the theoretical understanding. The results show that (1) from 1999 to 2022, the research direction of urban plants and habitats gradually changed from focusing on plant and animal extinction and biological invasion to the fields of human urban habitat creation, ecological restoration, and ecosystem services; (2) the results of keyword emergence and keyword growth trend analysis show that habitat creation research has shown rapid and stable growth since 2017, and ecological restoration has received long-term sustained attention since 2004. Future research on urban plants and habitats in China may focus on habitat creation and ecological restoration.

Keywords: research trends, visual analysis, habitat creation, ecological restoration

Procedia PDF Downloads 56
350 Anomaly Detection in Financial Markets Using Tucker Decomposition

Authors: Salma Krafessi

Abstract:

The financial markets are a multifaceted, intricate environment, and enormous volumes of data are produced every day. To find investment opportunities, possible fraudulent activity, and market oddities, accurate anomaly identification in this data is essential. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker Decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across a number of decades. The information is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding window structure. The tensor is then broken down using Tucker Decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker Decomposition is a possible sign of abnormalities: by setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough comparison of the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker Decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
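
A minimal sketch of this pipeline, using the tensorly library, is shown below: windowed price features are stacked into a 3-way tensor, Tucker-decomposed, and scored by reconstruction error against a statistical threshold. The prices are synthetic and the ranks, window size, and threshold rule are assumptions, not the paper's settings.

```python
# A minimal sketch of Tucker-decomposition anomaly scoring: windowed price
# features -> 3-way tensor -> Tucker decomposition -> reconstruction error
# -> statistical threshold. Prices are synthetic; ranks are assumed.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1200)) + 100  # synthetic closing prices

window, step = 30, 30
windows = np.array([prices[i:i + window] for i in range(0, len(prices) - window, step)])
# features per window: price level, return, absolute return -> (n_windows, window-1, 3)
rets = np.diff(windows, axis=1)
tensor = tl.tensor(np.stack([windows[:, 1:], rets, np.abs(rets)], axis=-1))

core, factors = tucker(tensor, rank=[5, 5, 2])  # assumed Tucker ranks
recon = tl.tucker_to_tensor((core, factors))

errors = np.linalg.norm((tensor - recon).reshape(len(tensor), -1), axis=1)
threshold = errors.mean() + 3 * errors.std()    # assumed 3-sigma threshold
print(np.flatnonzero(errors > threshold))       # candidate anomalous windows
```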

Keywords: tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models

Procedia PDF Downloads 50
349 Contemporary Army Prints for Women’s Wear Kurti

Authors: Shaleni Bajpai, Nancy Stephan

Abstract:

Various designs of women's kurtis with different styles, motifs, and prints were available in the market, but none was found in army print. Army prints are mostly used for men's wear such as jackets, trousers, caps, and bags. The main colours available in military prints were beige, parrot green, red, dark blue, light blue, orange, bottle green, pink, and the original military green. As original camouflage is banned in civilian wear, different varieties and colours were used in this study to popularize army prints in women's wear. The aim of this project was to construct different styles of women's kurtis in various colours of different military prints. Mood, inspiration, and colour boards were prepared to design the kurtis. The fabrics used for construction were army-printed poplin and crepe. The design and construction of the kurtis were divided into two categories: casual wear and party wear. Casual wear had simple silhouettes such as A-line, high-low, and waistcoat styles, whereas party wear included princess-line, panelled, and bandhani styles. A structured questionnaire was prepared to assess the acceptance of the newly designed kurtis with respect to colour combination, overall appearance, and cost. A purposive sampling method was adopted for the selection of respondents, and opinions were collected from 100 women of various age groups. The results and analysis were presented through graphs and percentages. Kurtis in army print from both categories were appreciated by the respondents.

Keywords: army, kurti, casual wear, party wear

Procedia PDF Downloads 294
348 Observation of the Flow Behavior for a Rising Droplet in a Mini-Slot

Authors: H. Soltani, J. Hadfield, M. Redmond, D. S. Nobes

Abstract:

The passage of oil droplets through a vertical mini-slot was investigated in this study. An oil-in-water emulsion can undergo coalescence of finer oil droplets, forming droplets of a size that needs to be considered individually. This occurs in a number of industrial processes and has important consequences at a scale where both body and surface forces are relevant. In the study, two droplet sizes were generated: diameters smaller than the slot width, and a relatively larger diameter at which the oil droplet can interact directly with the slot wall. To monitor fluid motion, a particle shadow velocimetry (PSV) imaging technique was used to study the fluid flow inside and around a single oil droplet rising in a net co-flow. The droplet was transparent canola oil and the surrounding working fluid was glycerol, adjusted to match the refractive indices of the two fluids. Particles seeded in both fluids were observed with the PSV system, allowing the capture of the velocity field both within the droplet and in its surroundings. The effect of droplet size on the droplet's internal circulation was observed. Part of the study related to the potential generation of flow structures, such as the von Kármán vortex shedding already observed in droplets rising in infinite reservoirs, and their interaction with the mini-channel. Results show that two counter-rotating vortices exist inside the droplets as they pass through the slot. The vorticity map analysis shows that the droplet of relatively larger size has a stronger internal circulation.

Keywords: rising droplet, rectangular orifice, particle shadow velocimetry, match refractive index

Procedia PDF Downloads 166
347 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors makes blood supply-chain networks a complex yet vital problem for the regional blood bank. These factors are rapidly increasing demand; the criticality of the product; strict storage and handling requirements; and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs are the allocation, transportation, and inventory costs. In order to address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
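
The 1-median step can be illustrated by exhaustive evaluation: pick the candidate site minimizing the total demand-weighted Euclidean distance to all demand nodes. The coordinates and demands below are placeholder values, not the Ontario data.

```python
# A minimal sketch of the 1-median step: choose the candidate site that
# minimizes total demand-weighted Euclidean distance to all demand nodes.
# Coordinates and demands are placeholders, not the Ontario data.
import numpy as np

coords = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 1.0], [2.0, 7.0]])  # demand nodes
demand = np.array([120.0, 80.0, 60.0, 40.0])                         # e.g. units/week

# Pairwise Euclidean distances between candidate sites (= nodes) and nodes
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

total_cost = dist @ demand  # weighted cost of serving all nodes from each site
median_idx = int(np.argmin(total_cost))
print(median_idx, total_cost[median_idx])
```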

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 497
346 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts on Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold-standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best-performing model.
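
The final classification stage can be sketched as follows: deep-feature and structured-account vectors are concatenated and fed to an XGBoost classifier. The features and labels below are random stand-ins; the paper's embeddings, hyperparameters, and gold-standard labels are not reproduced.

```python
# A minimal sketch of the XGBoost classification stage with concatenated
# deep and structured features; all data below are synthetic stand-ins.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(600, 64))    # e.g. image/text embeddings
account_feats = rng.normal(size=(600, 8))  # e.g. follower counts, post rate
X = np.hstack([deep_feats, account_feats])
y = rng.integers(0, 2, size=600)           # 1 = micro-influencer (gold standard)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(accuracy_score(y_te, pred), f1_score(y_te, pred))
```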

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 170
345 Computational Identification of Signalling Pathways in Protein Interaction Networks

Authors: Angela U. Makolo, Temitayo A. Olagunju

Abstract:

Knowledge of signaling pathways is central to understanding the biological mechanisms of organisms, since in eukaryotic organisms the number of signaling pathways determines the number of ways the organism can react to external stimuli. Signaling pathways are studied using protein interaction networks constructed from protein-protein interaction data obtained by high-throughput experimental procedures. However, these high-throughput methods are known to produce very high rates of false-positive and false-negative interactions. In order to construct a useful protein interaction network from this noisy data, computational methods are applied to validate the protein-protein interactions. In this study, a computational technique was designed to identify signaling pathways from a protein interaction network constructed using validated protein-protein interaction data. A weighted interaction graph of Saccharomyces cerevisiae (baker's yeast) was constructed using the proteins as nodes and the interactions between them as edges. The weights were obtained using a Bayesian probabilistic network to estimate the posterior probability of interaction between two proteins, given gene expression measurements as biological evidence. Only interactions above a threshold were accepted for the network model. A pathway was formalized as a simple path in the interaction network from a starting protein to an ending protein of interest. We were able to identify some pathway segments, one of which is a segment of the pathway that signals the start of meiosis in S. cerevisiae.
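
The pathway formalism above (thresholded weighted graph, simple paths between two proteins of interest) can be sketched with networkx; the edge weights below are invented illustrative probabilities, not the study's Bayesian posteriors, although the protein names are from the well-known yeast mating pathway.

```python
# A minimal sketch of the pathway formalism: threshold a weighted protein
# interaction graph, then enumerate simple paths between a start and an
# end protein. Weights are invented, not the study's posteriors.
import networkx as nx

edges = [  # (protein_a, protein_b, posterior interaction probability)
    ("STE2", "STE4", 0.92), ("STE4", "STE5", 0.85),
    ("STE5", "FUS3", 0.88), ("STE4", "FUS3", 0.41), ("FUS3", "STE12", 0.90),
]

G = nx.Graph()
G.add_weighted_edges_from((a, b, w) for a, b, w in edges if w >= 0.6)  # threshold

# Candidate pathway segments from a starting to an ending protein of interest
for path in nx.all_simple_paths(G, source="STE2", target="STE12", cutoff=5):
    print(path)
```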

Keywords: Bayesian networks, protein interaction networks, Saccharomyces cerevisiae, signalling pathways

Procedia PDF Downloads 530
344 The Difference in Learning Outcomes in Reading Comprehension between Text and Film as Media in Indonesian Language for Foreign Speakers at the Intermediate Level

Authors: Siti Ayu Ningsih

Abstract:

This study aims to find the differences in learning outcomes for reading comprehension with text and film as media in Indonesian Language for Foreign Speakers (BIPA) learning at the intermediate level. Using quantitative and qualitative research methods, the study has a single respondent, a ninth-grade student at the secondary level from D'Royal Morocco Integrative Islamic School. The quantitative method was used to calculate the learning outcomes after the appropriate action cycle, whereas the qualitative method was used to describe the findings derived from the quantitative method. The techniques used in this study were observation and testing. Based on the research, text media is more effective than film for the intermediate-level BIPA learner. This is because, when using film, the learner does not have enough time to note down difficult vocabulary or to look up its meaning in the dictionary. The use of text media shows better effectiveness because it does not require additional time to note down difficult words: for words that are difficult or unfamiliar, the learner can immediately find their meaning in the dictionary. The presence of the text also helps the BIPA learner find the answers to questions more easily, by matching the vocabulary of the question to the text.

Keywords: Indonesian language for foreign speaker, learning outcome, media, reading comprehension

Procedia PDF Downloads 189
343 The Determination of Phosphorus Solubility in Iron as a Function of the Other Components

Authors: Andras Dezső, Peter Baumli, George Kaptay

Abstract:

Phosphorus is an important component in steels because it changes the mechanical properties and may modify the structure. Phosphorus can form the Fe₃P compound, which segregates at ferrite grain boundaries at the nano- to microscale. This intermetallic compound degrades the mechanical properties; for example, it causes blue brittleness, i.e., embrittlement produced by the segregated particles at 200-300 °C. This work describes phosphide solubility as affected by the other components. We performed calculations for the Ni, Mo, Cu, S, V, C, Si, Mn, and Cr elements using the Thermo-Calc software and fitted approximate functions to the predicted effects. The binary Fe-P system has a solubility line described by the equation ln w₀ = -3.439 - 1.903/T, where w₀ is the maximum soluble phosphorus concentration in weight percent and T is the temperature in kelvin. The equation shows that phosphorus becomes more soluble as the temperature increases. Nickel, molybdenum, vanadium, silicon, manganese, and chromium affect the maximum soluble concentration: the solubility is lower when these elements are added to the steel, with the magnitude of the effect depending on their concentrations. Copper, sulphur, and carbon have no effect on phosphorus solubility. In all cases, the maximum soluble concentration is predicted to increase with temperature. Between 473 K and 673 K, the phase diagrams of these systems contain mostly two- or three-phase eutectoid regions and single-phase ferritic intervals. In the eutectoid regions, the ferrite, the iron phosphide, and the metal(III) phosphide are in equilibrium. From this modelling we predicted which elements help avoid phosphide segregation. These data are important when producing or selecting steels for which phosphide segregation limits performance.
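
A minimal sketch evaluating the binary Fe-P solubility line quoted above is shown below; the constants are taken as printed in the abstract.

```python
# A minimal sketch evaluating the Fe-P solubility line ln w0 = -3.439 - 1.903/T,
# with the constants as printed in the abstract.
import math

def max_soluble_p_wt_pct(T_kelvin: float) -> float:
    """Maximum soluble phosphorus concentration w0 (wt%) in the Fe-P system."""
    return math.exp(-3.439 - 1.903 / T_kelvin)

for T in (673.0, 1073.0, 1273.0):
    print(f"T = {T:.0f} K -> w0 = {max_soluble_p_wt_pct(T):.3f} wt%")
# Solubility rises with temperature, as the text notes.
```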

Keywords: phosphorus, steel, segregation, Thermo-Calc software

Procedia PDF Downloads 620
342 Collaboration of UNFPA and USAID to Mobilize Domestic Government Resources for Contraceptive Procurement in Madagascar

Authors: Josiane Yaguibou, Ngoy Kishimba, Issiaka V. Coulibaly, Sabrina Pestilli, Falinirina Razanalison, Hantanirina Andremanisa

Abstract:

Background: In recent years, Madagascar has faced a significant reduction in donors' financial resources for the purchase of contraceptive products to meet the family planning needs of the population. In order to ensure the sustainability of the family planning program in the current context, UNFPA Madagascar engaged in a series of initiatives with the ultimate aim of identifying sustainable financing mechanisms for the program. Program intervention: UNFPA Madagascar established a close collaboration with USAID to engage in a series of joint advocacy and resource mobilization activities with the government. The following initiatives were conducted: (i) organization of a high-level round table to engage the government; (ii) support to the government in renewing the FP2030 commitments; (iii) signature of the Country Compact 2022-2024; (iv) allocation of government funds in 2022 and 2023 of over 829,222 USD; (v) obtaining a matching fund of 1.5 million USD from UNFPA to encourage the government to allocate resources for the purchase of contraceptive products. Program implications: The collaboration and the joint advocacy made it possible to (i) secure government budgetary allocations to purchase products in 2022 and 2023, with a significant reduction in financing gaps; (ii) convince the government to seek additional financing from partners such as the World Bank, which granted more than 8 million USD for the purchase of products; and (iii) reduce stock shortages from more than 30% to 15%.

Keywords: UNFPA, USAID, collaboration, contraceptives

Procedia PDF Downloads 60
341 Graph-Based Semantical Extractive Text Analysis

Authors: Mina Samizadeh

Abstract:

In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous data necessitates the adoption of effective computational tools to explore it, which has led to intense and growing interest in the research community in developing computational methods for processing text data. One line of study focuses on condensing text so that a higher level of understanding can be reached in a shorter time. The two important tasks for this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words in a text; this familiarizes us with its general topic. In text summarization, we are interested in producing a short text that includes the important information in the document. The TextRank algorithm, an unsupervised learning method that extends PageRank (the algorithm underlying the Google search engine's ranking of pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and return them as a result. However, it neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we developed a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary, to overcome coverage problems.
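
A minimal sketch of similarity-weighted TextRank is shown below: sentences become graph nodes, cosine similarity of TF-IDF vectors serves as a stand-in similarity measure (the authors' actual semantic measure is not specified here), and PageRank scores rank the sentences for the summary.

```python
# A minimal sketch of TextRank with a similarity-weighted sentence graph;
# TF-IDF cosine similarity is a stand-in for the paper's semantic measure.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Keyword extraction finds the key important words from a text.",
    "Text summarization produces a short text with important information.",
    "TextRank extends PageRank to rank sentences and words in a graph.",
    "Semantic similarity between text parts can improve TextRank results.",
]

tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)

G = nx.from_numpy_array(sim)              # weighted sentence graph
scores = nx.pagerank(G, weight="weight")  # TextRank scores
ranked = sorted(scores, key=scores.get, reverse=True)
print([sentences[i] for i in ranked[:2]]) # two top-ranked summary sentences
```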

Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis

Procedia PDF Downloads 60
340 Development of a Plug-In Hybrid Powertrain System with Double Continuously Variable Transmissions

Authors: Cheng-Chi Yu, Chi-Shiun Chiou

Abstract:

This study developed a plug-in hybrid powertrain system consisting of two continuously variable transmissions (CVTs). By matching the engine, motor, generator, and dual CVTs, the integrated power system can take advantage of each component's strengths. The hybrid vehicle can be driven by the internal combustion engine alone, by the electric motor alone, or by both power sources together when the vehicle is driven under hard acceleration or high load. The energy management of the integrated hybrid system controls the power sources using a rule-based control strategy to achieve better fuel economy. When the driving power demand is low, the internal combustion engine would be operating in its low-efficiency region, so the engine is shut down and the vehicle is driven by the motor only. When the driving power demand is high, the internal combustion engine operates in its high-efficiency region and drives the vehicle. This strategy operates the internal combustion engine only in its optimal-efficiency region, improving fuel economy. In this research, the vehicle simulation model was built in the MATLAB/Simulink environment. The analysis results showed that the power-coupling efficiency of the hybrid powertrain system with dual CVTs was better than that of the Honda hybrid system on the market.
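
The rule-based energy-management idea can be sketched as a simple threshold rule; the demand thresholds below are illustrative assumptions, not the paper's calibrated values (the study's model is built in MATLAB/Simulink).

```python
# A minimal sketch of a rule-based energy-management rule: the engine is
# used only above a demand threshold where it runs efficiently; thresholds
# are illustrative assumptions, not the paper's calibrated values.
def select_power_source(power_demand_kw: float,
                        engine_on_kw: float = 20.0,
                        combined_kw: float = 60.0) -> str:
    """Choose the drive mode for the current power demand."""
    if power_demand_kw < engine_on_kw:
        return "motor only"      # engine would be in its low-efficiency region
    if power_demand_kw < combined_kw:
        return "engine only"     # engine operates in its high-efficiency region
    return "engine + motor"      # hard acceleration or high load

for demand in (10.0, 35.0, 75.0):
    print(f"{demand:5.1f} kW -> {select_power_source(demand)}")
```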

Keywords: plug-in hybrid power system, fuel economy, performance, continuously variable transmission

Procedia PDF Downloads 279
339 The Role of Attachment and Dyadic Coping in Shaping Relational Intimacy

Authors: Anna Wendolowska, Dorota Czyzowska

Abstract:

An intimate relationship is a significant factor that influences romantic partners' well-being. In the face of stress, avoidant partners often employ a defense-against-intimacy strategy, leading to reduced relationship satisfaction, intimacy, interdependence, and longevity. Dyadic coping can buffer the negative effects of stress on relational satisfaction, and emotional competence mediates the relationship between insecure attachment and intimacy. In the current study, the links between attachment, different forms of dyadic coping, and various aspects of relationship satisfaction were examined. Both partners completed the attachment style questionnaire, the well-matching couple questionnaire, and the dyadic coping inventory. The data were analyzed using the actor-partner interdependence model. The results highlighted a negative association between an insecure-avoidant attachment style and intimacy. The actor effects of avoidant attachment on relational intimacy for women and for men were significant, whilst the partner effects for both spouses were not. Emotion-focused common dyadic coping moderated the relationship between attachment avoidance and the partner's sense of intimacy: after controlling for emotion-focused common dyadic coping, the actor effect of attachment on intimacy for men was slightly weaker, and the actor effect for women became nonsignificant. Emotion-focused common dyadic coping thus weakened the negative association between insecure attachment and relational intimacy. The impact of adult attachment and dyadic coping contributes significantly to subjective relational well-being.

Keywords: adult attachment, dyadic coping, relational intimacy, relationship satisfaction

Procedia PDF Downloads 151
338 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often in some way incomplete due to censoring. Such data may have adverse effects if used in an estimation problem. We propose the use of maximum likelihood estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results for the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived/constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that, in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the Newton-Raphson algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization algorithm performs better than the Newton-Raphson algorithm under the progressive type-II censoring scheme.
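
The Newton-Raphson update can be illustrated on a deliberately simplified version of the problem: the Rayleigh scale parameter with the location fixed and complete (uncensored) data, where a closed-form MLE exists to check against. This is a sketch of the NR iteration only, not the paper's censored-likelihood estimator.

```python
# A minimal sketch of Newton-Raphson iteration for the Rayleigh scale
# parameter (location fixed, complete data) - a simplification of the
# paper's progressive type-II censored setting, shown only to illustrate
# the NR update score/Hessian step.
import numpy as np

rng = np.random.default_rng(3)
mu, lam = 2.0, 1.5
x = mu + lam * np.sqrt(-2.0 * np.log(rng.uniform(size=500)))  # Rayleigh(mu, lam) draws

n, s2 = len(x), np.sum((x - mu) ** 2)
lam_hat = 1.0                                         # starting value
for _ in range(50):
    score = -2 * n / lam_hat + s2 / lam_hat**3        # d(log-lik)/d(lambda)
    hess = 2 * n / lam_hat**2 - 3 * s2 / lam_hat**4   # d2(log-lik)/d(lambda)^2
    step = score / hess
    lam_hat -= step                                   # Newton-Raphson update
    if abs(step) < 1e-10:
        break

print(lam_hat, np.sqrt(s2 / (2 * n)))  # NR estimate vs closed-form MLE
```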

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 154
337 Clarifying the Possible Symptomatic Pathway of Comorbid Depression, Anxiety, and Stress Among Adolescents Exposed to Childhood Trauma: Insight from the Network Approach

Authors: Xinyuan Zou, Qihui Tang, Shujian Wang, Yulin Huang, Jie Gui, Xiangping Liu, Gang Liu, Yanqiang Tao

Abstract:

Childhood trauma can have a long-lasting influence on individuals and contribute to mental disorders, including depression and anxiety. The current study aimed to explore the symptomatic and developmental patterns of depression, anxiety, and stress among adolescents who have suffered childhood trauma. A total of 3,598 college students (1,617 female (44.94%); mean age = 19.68 years, SD = 1.35) in China completed the Childhood Trauma Questionnaire (CTQ) and the Depression, Anxiety, and Stress Scales (DASS-21), and 2,337 participants met the selection standard based on the cut-off scores of the CTQ. The symptomatic network and directed acyclic graph (DAG) network approaches were used. The results revealed that males reported experiencing significantly more physical abuse, physical neglect, emotional neglect, and sexual abuse than females, whereas females scored significantly higher than males on all items of the DASS-21 except "Worthless". No significant difference between the two genders was observed in network structure or global strength. Among all participants, "Down-hearted" and "Agitated" appeared to be the most interconnected symptoms and the bridge symptoms in the symptom network, as well as among the most vital symptoms in the DAG network; "No-relax" also served as a prominent symptom in the DAG network. The results suggest that interventions aimed at helping adolescents develop more adaptive stress-coping and emotion-regulation strategies could help alleviate comorbid depression, anxiety, and stress.

Keywords: symptom network, childhood trauma, depression, anxiety, stress

Procedia PDF Downloads 47
336 Cognitive Model of Analogy Based on Operation of the Brain Cells: Glial, Axons and Neurons

Authors: Ozgu Hafizoglu

Abstract:

Analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems with attributional, deep-structural, causal relations that are essential to learning, to innovation in artificial worlds, and to discovery in science. The Cognitive Model of Analogy (CMA) leads and creates information-pattern transfer within and between domains and disciplines in science. This paper demonstrates the Cognitive Model of Analogy (CMA) as an evolutionary approach to scientific research. The model addresses the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions. In this paper, the model of analogical reasoning is created based on brain cells and their fractal and operational forms within the system itself. Visualization techniques are used to show correspondences. The problem-solving process is divided into distinct phases: encoding, mapping, inference, and response. The system is related to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain cells: glial cells, axons, axon terminals, and neurons, relative to the matching conditions of analogical reasoning and relational information. It is found that the encoding, mapping, inference, and response processes in four-term analogical reasoning correspond to the fractal and operational forms of brain cells: glia, axons, and neurons.

Keywords: analogy, analogical reasoning, cognitive model, brain and glials

Procedia PDF Downloads 177
335 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model

Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech

Abstract:

Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and needs a complementary exam. Generally, vertigo is caused by an ear problem. Some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease, and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmography (VNG) technique are used to detect the presence of vestibular neuritis. The topographical diagnosis of this disease presents a large diversity of characteristics, which poses a mixture of problems for the usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for VNG applications, using an estimation of pupil movements under uncontrolled motion to obtain efficient and reliable diagnosis results. First, the pupil displacement vectors are estimated using the Hough Transform (HT) to approximate the location of the pupil region. Then, temporal and frequency features are computed from the variation of the rotation angle of the pupil motion. Finally, optimized features are selected using Fisher criterion evaluation for discrimination and classification of the VN disease. Experimental results are analyzed using two categories: normal and pathologic. By classifying the reduced features using a Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and providing an accurate diagnosis for medical devices.
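
The feature-selection-plus-classification stage can be sketched with scikit-learn, using the ANOVA F-score as a stand-in for the paper's Fisher criterion followed by an SVM; the features and labels below are synthetic, not VNG recordings.

```python
# A minimal sketch of Fisher-type feature selection plus SVM classification;
# the ANOVA F-score stands in for the paper's Fisher criterion, and the
# features/labels are synthetic, not real VNG data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))    # stand-ins for temporal/frequency pupil features
y = rng.integers(0, 2, size=120)  # 0 = normal, 1 = vestibular neuritis
X[y == 1, :5] += 1.0              # make the first 5 features informative

clf = make_pipeline(SelectKBest(f_classif, k=5), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```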

Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM

Procedia PDF Downloads 131