Search results for: small baseline subset algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9154

8314 A New Method to Winner Determination for Economic Resource Allocation in Cloud Computing Systems

Authors: Ebrahim Behrouzian Nejad, Rezvan Alipoor Sabzevari

Abstract:

Cloud computing systems are large-scale distributed systems that focus on large-scale resource sharing, cooperation among several organizations, and use in new applications. One of the main challenges in this realm is resource allocation. Among the many approaches to resource allocation in cloud computing, economic methods are common, and among these, auction-based methods have greater prominence than fixed-price methods. The double combinatorial auction is one of the proper ways of allocating resources in cloud computing. This method includes two phases: winner determination and resource allocation. In this paper, a new method is presented to determine winners in double combinatorial auction-based resource allocation using the Imperialist Competitive Algorithm (ICA). The experimental results show that the proposed method yields a higher number of winning users than the genetic algorithm, whereas the genetic algorithm yields a higher number of winning providers.
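The abstract gives no implementation details; as a rough illustration of the Imperialist Competitive Algorithm it relies on, here is a minimal continuous-optimization sketch. The winner-determination objective itself, plus the revolution and inter-empire competition steps, are omitted, and all names and parameters are hypothetical:

```python
import random

def ica_minimize(f, lo, hi, n_countries=30, n_imp=3, iters=100, seed=0):
    """Minimal 1-D Imperialist Competitive Algorithm sketch.
    Colonies move toward their imperialist (assimilation) and swap roles
    with it when they become better; revolution and inter-empire
    competition are omitted for brevity."""
    rng = random.Random(seed)
    countries = sorted((rng.uniform(lo, hi) for _ in range(n_countries)), key=f)
    imps = countries[:n_imp]               # the best countries become imperialists
    colonies = countries[n_imp:]
    empire = [rng.randrange(n_imp) for _ in colonies]  # colony -> empire index
    for _ in range(iters):
        for i, col in enumerate(colonies):
            imp = imps[empire[i]]
            # assimilation: move a random fraction (possibly overshooting)
            # of the way toward the imperialist
            col += rng.uniform(0, 2) * (imp - col)
            colonies[i] = min(max(col, lo), hi)
            if f(colonies[i]) < f(imp):    # colony overthrows its imperialist
                imps[empire[i]], colonies[i] = colonies[i], imps[empire[i]]
    return min(imps, key=f)
```

In the paper's setting, a "country" would encode a candidate set of winning bids rather than a real number.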

Keywords: cloud computing, resource allocation, double auction, winner determination

Procedia PDF Downloads 355
8313 Practice of Supply Chain Management in Local SMEs

Authors: Oualid Kherbach, Marian Liviu Mocan, Amine Ghoumrassi, Cristian Dumitrache

Abstract:

Globalization, economic development, e-business, and the introduction of new technologies create new challenges for all organizations, particularly small and medium enterprises (SMEs). Many studies on supply chain management (SCM) focus on large companies with global operations employing high-end information technology, leaving a gap in our understanding of how SMEs use and practice SCM. In this context, successful SCM practices can give SMEs an edge over their competitors. However, SMEs in Romania and the Balkan countries face problems in SCM implementation and practice due to a lack of resources and direction. This research highlights the supply chain management practices of the small and medium enterprise sector in Romania and examines how SMEs manage and use SCM. The study checks for systematic differences between small and medium-sized businesses with regard to supply chain management practices and shows how the application of supply management has contributed to improved performance and profitability, such as increased market share and a higher level of client service.

Keywords: globalization, small and medium enterprises, supply chain management, practices

Procedia PDF Downloads 360
8312 Roadway Infrastructure and Bus Safety

Authors: Richard J. Hanowski, Rebecca L. Hammond

Abstract:

Very few studies have been conducted to investigate safety issues associated with motorcoach/bus operations. The current study investigates the impact that roadway infrastructure, including locality, roadway grade, traffic flow, and traffic density, has on bus safety. A naturalistic driving study was conducted in the U.S.A. that involved 43 motorcoaches. Two fleets participated in the study, and over 600,000 miles of naturalistic driving data were collected. Sixty-five bus drivers participated in this study: 48 male and 17 female. The average age of the drivers was 49 years. A sophisticated data acquisition system (DAS) was installed on each of the 43 motorcoaches, and a variety of kinematic and video data were continuously recorded. The data were analyzed by identifying safety critical events (SCEs), which included crashes, near-crashes, crash-relevant conflicts, and unintentional lane deviations. Additionally, baseline (normative driving) segments were identified and analyzed for comparison to the SCEs. This presentation highlights the need for bus safety research and the methods used in this data collection effort. With respect to elements of roadway infrastructure, this study highlights the methods used to assess locality, roadway grade, traffic flow, and traffic density. Locality was determined by manual review of the recorded video for each event and baseline and was characterized in terms of open country, residential, business/industrial, church, playground, school, urban, airport, interstate, and other. Roadway grade was similarly determined through video review and characterized in terms of level, grade up, grade down, hillcrest, and dip. The video was also used to determine the traffic flow and traffic density at the time of the event or baseline segment.
For traffic flow, video was used to assess which of the following best characterized the event or baseline: not divided (2-way traffic), not divided (center 2-way left turn lane), divided (median or barrier), one-way traffic, or no lanes. In terms of traffic density, level-of-service categories were used: A1, A2, B, C, D, E, and F. Highlighted in this abstract are only a few of the many roadway elements that were coded in this study. Other elements included lighting levels, weather conditions, roadway surface conditions, relation to junction, and roadway alignment. Note that a key component of this study was to assess the impact that driver distraction and fatigue have on bus operations. In this regard, once the roadway elements had been coded, the primary research questions addressed were (i) “What environmental conditions are associated with driver choice of engagement in tasks?” and (ii) “What are the odds of being in an SCE while engaging in tasks under these conditions?”. The study may be of interest to researchers and traffic engineers who are interested in the relationship between roadway infrastructure elements and safety events in motorcoach bus operations.

Keywords: bus safety, motorcoach, naturalistic driving, roadway infrastructure

Procedia PDF Downloads 177
8311 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently, while large claims are rare. Traditional distributions such as the normal, exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach based on a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica program uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
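The paper's composite-distribution regression model is not specified here; as an illustration of Fisher scoring (which coincides with Gauss-Newton for a Gaussian nonlinear regression), here is a toy one-parameter sketch. The exponential-mean model is purely an assumption for demonstration:

```python
import math

def fisher_scoring(x, y, beta=0.0, iters=25):
    """Fisher scoring (here equivalent to Gauss-Newton) for the toy
    nonlinear regression y ~ exp(beta * x) with Gaussian errors.
    Each iteration updates beta += score / expected_information."""
    for _ in range(iters):
        mu = [math.exp(beta * xi) for xi in x]            # model mean
        g = [xi * mi for xi, mi in zip(x, mu)]            # d mu / d beta
        score = sum(gi * (yi - mi) for gi, yi, mi in zip(g, y, mu))
        info = sum(gi * gi for gi in g)                   # expected information
        beta += score / info
    return beta
```

With multiple predictors, `beta` becomes a vector and `info` the Fisher information matrix, but the update has the same shape.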

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 16
8310 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising

Authors: Jianwei Ma, Diriba Gemechu

Abstract:

In seismic data processing, attenuation of random noise is a basic step to improve data quality for further use in exploration and development across the gas and oil industries. The signal-to-noise ratio largely determines the quality of seismic data and affects the reliability and accuracy of the seismic signal during interpretation. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively and preserving the important features and information of the seismic signal. To this end, we introduce an anisotropic total fractional order denoising algorithm. The anisotropic total fractional order variation model, defined in the space of fractional order bounded variation, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised results are compared with F-X deconvolution and the non-local means denoising algorithm.
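A minimal sketch of the split Bregman idea, applied here to ordinary (integer-order, 1-D) total variation denoising rather than the paper's anisotropic total fractional order model; the parameters and the Gauss-Seidel inner solver are illustrative choices, not the authors':

```python
def tv_denoise_split_bregman(f, mu=2.0, lam=1.0, iters=50, gs_sweeps=10):
    """1-D total variation denoising via split Bregman: minimize
    mu/2 * ||u - f||^2 + |grad u|_1, splitting d ~ grad u and enforcing
    the constraint with Bregman variable b."""
    n = len(f)
    u = list(f)
    d = [0.0] * (n - 1)        # auxiliary variable d ~ grad u
    b = [0.0] * (n - 1)        # Bregman variable
    shrink = lambda x, t: (x - t) if x > t else (x + t) if x < -t else 0.0
    for _ in range(iters):
        # g = grad^T (d - b), with (grad u)_i = u[i+1] - u[i]
        v = [d[i] - b[i] for i in range(n - 1)]
        g = [(v[j - 1] if j > 0 else 0.0) - (v[j] if j < n - 1 else 0.0)
             for j in range(n)]
        # inexact solve of (mu + lam * grad^T grad) u = mu*f + lam*g
        # by a few Gauss-Seidel sweeps
        for _ in range(gs_sweeps):
            for j in range(n):
                nb = (u[j - 1] if j > 0 else 0.0) + (u[j + 1] if j < n - 1 else 0.0)
                k = 2 if 0 < j < n - 1 else 1
                u[j] = (mu * f[j] + lam * (nb + g[j])) / (mu + k * lam)
        for i in range(n - 1):
            gu = u[i + 1] - u[i]
            d[i] = shrink(gu + b[i], 1.0 / lam)   # soft-threshold subproblem
            b[i] += gu - d[i]                     # Bregman update
    return u
```

The fractional-order and anisotropic 2-D variants replace the difference operator `grad` with fractional-order differences along each direction, but the alternating structure (u-solve, shrinkage, Bregman update) is the same.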

Keywords: anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm

Procedia PDF Downloads 203
8309 The Interventional, Prospective, Real-World Post-Marketing Clinical Follow-Up Trial of a Polycarbophil Vaginal Moisturising Gel in Women Affected by Vaginal Dryness in Late Menopausal Transition and Postmenopause: A Triple Investigation

Authors: A. Cagnacci, D. F. Barattini, E. Casolati, M. Mangrella, E. Piccolo, S. Rosu, L. C. Pătrașcu

Abstract:

This triple study aimed to evaluate the efficacy of a polycarbophil vaginal gel (PCV) in treating symptoms of vaginal atrophy (VA) in peri- and postmenopausal women. Women in peri- (n=29) and postmenopause (n=54) diagnosed with VA were progressively enrolled and treated once a day for 30 days. Thereafter, those wishing to continue (n=73) received the PCV treatment for an additional 180 days. The vaginal health index (VHI) and vaginal dryness, irritation, and pain at intercourse, along with treatment safety, were evaluated at baseline, after 30 days of treatment, and after an additional 180 days. At baseline, the VHI (p=0.056) and the VAS scores for vaginal dryness (p=0.0001), irritation (p=0.002), and pain at intercourse (p=0.0001) were worse in postmenopausal than in perimenopausal women. VHI and VA symptoms improved in all women, and after 30 days of PCV administration they were similar between peri- and postmenopausal women. After an additional 180 days of treatment, the VHI further increased (p=0.0001), and the VAS scores of all symptoms (p=0.0001) and the Global Symptom Score (p=0.0001) further decreased. The treatment was safe. Treatment with PCV improves VA symptoms in both peri- and postmenopausal women, and prolongation of treatment up to 6 months increases efficacy with no side effects.

Keywords: late menopausal transition, postmenopause, polycarbophil, sexuality, vaginal dryness

Procedia PDF Downloads 53
8308 Reduce the Impact of Wildfires by Identifying Them Early from Space and Sending Location Directly to Closest First Responders

Authors: Gregory Sullivan

Abstract:

The evolution of global warming has escalated the number and complexity of forest fires around the world. As an example, the United States and Brazil combined generated more than 30,000 forest fires last year. The impact on our environment, structures, and individuals is incalculable. The world has learned to take this in stride, trying multiple ways to contain fires. Some countries are trying to use cameras in limited areas. There are discussions of using hundreds of low earth orbit satellites, linking them together, and interfacing them through ground networks. These are all truly noble attempts to defeat the forest fire phenomenon, but there is a better, simpler answer. A bigger piece of the solution puzzle is to see fires while they are small, soon after initiation, and report their location (latitude and longitude) to local first responders. This is done by placing a sensor at geostationary orbit (GEO: 26,000 miles above the earth). By placing this small satellite in GEO, we can “stare” at the earth and sense temperature changes; we do not “see” fires, but “measure” temperature changes. This has already been demonstrated on an experimental scale: fires were seen close to initiation, the information was forwarded to first responders, and the system was the first to identify the fires 7 out of 8 times. The goal is to have a small independent satellite at GEO focused only on forest fire initiation. With one such satellite, we hope to greatly decrease the impact to persons, property, and the environment.

Keywords: space detection, wildfire early warning, demonstration wildfire detection and action from space, space detection to first responders

Procedia PDF Downloads 63
8307 Effect of the Food Distribution on Household Food Security Status in Iran

Authors: Delaram Ghodsi, Nasrin Omidvar, Hassan Eini-Zinab, Arash Rashidian, Hossein Raghfar

Abstract:

Food supplementary programs are policy approaches that aim to reduce financial barriers to healthy diets and tackle food insecurity. This study aimed to evaluate the effect of the supportive section of the Multidisciplinary Supplementary Program for Improvement of Nutritional Status of Children (MuPINSC) on households' food security status and the nutritional status of mothers. MuPINSC is a national integrative program in Iran that distributes supplementary food baskets to malnourished or growth-retarded children living in low-income families, in addition to providing health services, including sanitation, growth monitoring, and empowerment of families. This longitudinal study is part of a comprehensive evaluation of the program. The participants included 359 mothers of children aged 6 to 72 months covered by the supportive section of the program in two provinces of Iran (Semnan and Qazvin). Demographic and economic characteristics of the families were assessed by questionnaire. Data on family food security were collected with a locally adapted Household Food Insecurity Access Scale (HFIAS) at baseline and six months thereafter. Weight and height of mothers were measured at baseline and at the end of the study, and each mother's BMI was calculated. Data were analysed using paired t-tests, GEE (Generalized Estimating Equation) models, and Chi-square tests. At baseline, only 4.7% of families were food-secure, while 13.1%, 38.7%, and 43.5% were categorized as mildly, moderately, and severely food-insecure, respectively. After six months of follow-up, the distribution of food security levels changed significantly (p<0.001) to 7.9%, 11.6%, 42.6%, and 38%, respectively. At the end of the study, the odds of food insecurity were significantly lower, by 20%, than at baseline (OR=0.796; 0.653-0.971). No significant difference was observed in maternal BMI based on food security (p>0.05). The findings show that the food supplementary program for children improved household food security status in the studied households. Further research is needed to assess other factors that affect the effectiveness of this large-scale program on nutritional status and household food security.

Keywords: food security, food supplementary program, household, malnourished children

Procedia PDF Downloads 395
8306 Solving the Wireless Mesh Network Design Problem Using Genetic Algorithm and Simulated Annealing Optimization Methods

Authors: Moheb R. Girgis, Tarek M. Mahmoud, Bahgat A. Abdullatif, Ahmed M. Rabie

Abstract:

Mesh clients, mesh routers, and gateways are the components of a Wireless Mesh Network (WMN). In a WMN, gateways connect to the Internet over wireline links and supply Internet access services to users. Due to the limited wireless channel bit rate, multiple gateways are usually needed, which takes time and costs a lot of money to set up. WMN is a highly developed technology that offers end users wireless broadband access. It offers a high degree of flexibility compared with conventional networks; however, this attribute comes at the expense of a more complex construction. The planning and optimization of WMNs is therefore a challenge. In this paper, we address this challenge using a genetic algorithm and simulated annealing. Both methods search for a low-cost WMN configuration subject to constraints and determine the number of gateways used. Experimental results show that the genetic algorithm and simulated annealing minimize WMN cost while satisfying quality-of-service constraints, and the proposed models significantly outperform existing solutions.
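As a hedged illustration of the simulated annealing half of the approach, here is a toy sketch that selects which routers of a six-node mesh become gateways; the hop matrix, installation cost, and coverage penalty are invented for demonstration and are not the paper's model:

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    """Generic SA: accept worse states with probability exp(-delta/T)."""
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cost(cur)
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            cur = cand
        if cost(cur) < cost(best):
            best = cur
        t *= cooling
    return best

# Toy WMN instance: hypothetical hop-count matrix for a 6-node ring mesh.
HOPS = [[0, 1, 2, 3, 2, 1], [1, 0, 1, 2, 3, 2], [2, 1, 0, 1, 2, 3],
        [3, 2, 1, 0, 1, 2], [2, 3, 2, 1, 0, 1], [1, 2, 3, 2, 1, 0]]

def cost(gw):
    """Gateway installation cost plus a penalty for nodes more than
    one hop from every gateway (invented weights)."""
    if not any(gw):
        return float("inf")            # need at least one gateway
    uncovered = sum(1 for i in range(6)
                    if min(HOPS[i][j] for j in range(6) if gw[j]) > 1)
    return 10 * sum(gw) + 100 * uncovered

def neighbor(gw, rng):
    """Flip one candidate site in or out of the gateway set."""
    i = rng.randrange(6)
    out = list(gw)
    out[i] = 1 - out[i]
    return tuple(out)

best = simulated_annealing(cost, neighbor, (1, 1, 1, 1, 1, 1))
```

Starting from "all routers are gateways" (cost 60), the search prunes gateways while keeping every node within one hop of some gateway.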

Keywords: wireless mesh networks, genetic algorithms, simulated annealing, topology design

Procedia PDF Downloads 452
8305 Applying Genetic Algorithm in Exchange Rate Models Determination

Authors: Mehdi Rostamzadeh

Abstract:

Genetic Algorithms (GAs) are an adaptive heuristic search method premised on the evolutionary ideas of natural selection and genetics. In this study, we apply GAs to fundamental and technical models of exchange rate determination in the exchange rate market. In this framework, we estimated absolute and relative purchasing power parity, Mundell-Fleming, sticky and flexible prices (monetary models), equilibrium exchange rate, and portfolio balance models as fundamental models, and Auto-Regressive (AR), Moving Average (MA), Auto-Regressive Moving Average (ARMA), and Mean Reversion (MR) models as technical models, for the Iranian Rial against the European Union's Euro, using monthly data from January 1992 to December 2014. We then put these models into the genetic algorithm system to measure the optimal weight for each model. These optimal weights were measured according to four criteria: R-Squared (R2), mean square error (MSE), mean absolute percentage error (MAPE), and root mean square error (RMSE). Based on the obtained results, it seems that fundamental models explain the behavior of the Iranian Rial/EU Euro exchange rate better than technical models.
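The paper's GA configuration is not described in detail; below is a minimal sketch of the general idea, evolving convex combination weights over competing model forecasts so as to minimize RMSE. Population size, operators, and the toy data are assumptions for illustration:

```python
import random

def ga_weights(preds, actual, pop=30, gens=60, seed=1):
    """Toy GA: evolve nonnegative weights (summing to 1) over the model
    forecasts in `preds` to minimize RMSE against `actual`."""
    rng = random.Random(seed)
    m = len(preds)

    def normalize(w):
        s = sum(w)
        return [x / s for x in w] if s else [1.0 / m] * m

    def rmse(w):
        n = len(actual)
        return (sum((sum(w[j] * preds[j][i] for j in range(m)) - actual[i]) ** 2
                    for i in range(n)) / n) ** 0.5

    popn = [normalize([rng.random() for _ in range(m)]) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=rmse)                 # fitness ranking
        elite = popn[: pop // 2]            # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # crossover
            i = rng.randrange(m)
            child[i] = max(0.0, child[i] + rng.gauss(0, 0.1))  # mutation
            children.append(normalize(child))
        popn = elite + children
    return min(popn, key=rmse)
```

With two toy "forecasts", one exact and one biased, the GA should shift nearly all weight onto the accurate model.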

Keywords: exchange rate, genetic algorithm, fundamental models, technical models

Procedia PDF Downloads 268
8304 Signal Processing of the Blood Pressure and Characterization

Authors: Hadj Abd El Kader Benghenia, Fethi Bereksi Reguig

Abstract:

In clinical medicine, blood pressure monitoring provides rich pathophysiological information about the cardiovascular system, described through factors such as blood volume, arterial compliance, and peripheral resistance. In this work, we analyze these signals and propose a detection algorithm to delineate the different sequences of the waveform, especially systolic blood pressure (SBP), diastolic blood pressure (DBP), and the dicrotic wave, in order to extract the cardiovascular parameters.
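As a toy illustration of the delineation task (not the authors' algorithm), systolic and diastolic points can be sketched as local extrema of the pressure trace; real delineators add filtering, refractory periods, and dicrotic-notch logic:

```python
def detect_sbp_dbp(signal):
    """Naive delineation sketch: systolic peaks are local maxima and
    diastolic points are local minima of the arterial pressure trace.
    Returns (systolic_indices, diastolic_indices)."""
    sbp = [i for i in range(1, len(signal) - 1)
           if signal[i - 1] < signal[i] >= signal[i + 1]]
    dbp = [i for i in range(1, len(signal) - 1)
           if signal[i - 1] > signal[i] <= signal[i + 1]]
    return sbp, dbp
```

On a clean synthetic beat train this recovers one systolic and one diastolic index per cardiac cycle.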

Keywords: blood pressure, SBP, DBP, detection algorithm

Procedia PDF Downloads 428
8303 Design and Development of Small Peptides as Anti-inflammatory Agents

Authors: Palwinder Singh

Abstract:

Beyond the conventional mode of working with anti-inflammatory agents through enzyme inhibition, an alternate substrate of cyclooxygenase-2 (COX-2) was developed. A proline-centered pentapeptide, iso-conformational to arachidonic acid, exhibited appreciable selectivity for COX-2, reduced acetic acid- and formalin-induced pain in rats by almost 80%, and was treated as a substrate by the enzyme. Remarkably, COX-2 metabolized the pentapeptide into small fragments, consisting mainly of di- and tri-peptides, which ensured the safe breakdown of the peptide under in-vivo conditions. The kinetic parameter Kcat/Km for the COX-2-mediated metabolism of the peptide, 6.3 x 10^5 M^-1 s^-1, was quite similar to the 9.5 x 10^5 M^-1 s^-1 for arachidonic acid. As evidenced by molecular dynamics studies and the use of Y385F COX-2, the breakage of the pentapeptide probably takes place through H-bond activation of the peptide bond by the side chains of Y385 and S530.

Keywords: small peptides, anti-inflammatory agents, cyclooxygenase-2, unnatural substrates

Procedia PDF Downloads 65
8302 Fight the Burnout: Phase Two of a NICU Nurse Wellness Bundle

Authors: Megan Weisbart

Abstract:

Background/Significance: The Intensive Care Unit (ICU) environment contributes to nurse burnout. Burnout costs include decreased employee compassion, missed workdays, worse patient outcomes, diminished job performance, high turnover, and higher organizational cost. Meaningful recognition, nurturing of interpersonal connections, and mindfulness-based interventions are associated with decreased burnout. The purpose of this quality improvement project was to decrease Neonatal ICU (NICU) nurse burnout using a wellness bundle that fosters meaningful recognition and interpersonal connections and includes mindfulness-based interventions. Methods: The Professional Quality of Life Scale Version 5 (ProQOL5) was used to measure burnout before wellness bundle implementation and after six months, and will be given yearly for three years. Meaningful recognition bundle items include online submission and posting of staff shoutouts, recognition events, Nurses Week and Unit Practice Council member gifts, and an employee recognition program. Interpersonal connection bundle items include monthly staff games with prizes, social events, raffle fundraisers, a unit blog, a unit wellness basket, and a wellness resource sheet. Quick coherence techniques were implemented at staff meetings and huddles as a mindfulness-based intervention. Findings: The mean baseline burnout score of 14 NICU nurses was 20.71 (low burnout). The baseline range was 13-28, with 11 nurses experiencing low burnout, three nurses experiencing moderate burnout, and zero nurses experiencing high burnout. After six months of wellness bundle implementation, the mean burnout score of 39 NICU nurses was 22.28 (low burnout). The range was 14-31, with 22 nurses experiencing low burnout, 17 nurses experiencing moderate burnout, and zero nurses experiencing high burnout.
Conclusion: A NICU Wellness Bundle that incorporated meaningful recognition, fostering of interpersonal connections, and mindfulness-based activities was implemented to improve work environments and decrease nurse burnout. Participation bias and low baseline response rate may have affected the reliability of the data and necessitate another comparative measure of burnout in one year.

Keywords: burnout, NICU, nurse, wellness

Procedia PDF Downloads 82
8301 Residual Power Series Method for System of Volterra Integro-Differential Equations

Authors: Zuhier Altawallbeh

Abstract:

This paper investigates approximate analytical solutions of a general form of systems of Volterra integro-differential equations by using the residual power series method (RPSM for short). The proposed method produces the solutions in terms of a convergent series, requires no linearization or small perturbation, and reproduces the exact solution when the solution is a polynomial. Some examples are given to demonstrate the simplicity and efficiency of the proposed method. Comparisons with the Laplace decomposition algorithm verify that the new method is very effective and convenient for solving systems of pantograph equations.
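For a linear example the RPSM recursion reduces to classical power-series coefficient matching; the sketch below derives the series coefficients for the toy Volterra integro-differential equation y'(t) = 1 + ∫₀ᵗ y(s) ds with y(0) = 0 (exact solution sinh t), chosen purely for illustration:

```python
def rpsm_volterra(n_terms=10):
    """Power-series coefficients c_k of y(t) = sum c_k t^k solving
    y'(t) = 1 + integral_0^t y(s) ds, y(0) = 0.
    Each coefficient follows from setting the t^m term of the
    residual R(t) = y'(t) - 1 - integral_0^t y(s) ds to zero."""
    c = [0.0] * n_terms          # c[0] = 0 from the initial condition
    c[1] = 1.0                   # t^0 residual term: c1 - 1 = 0
    for m in range(1, n_terms - 1):
        # t^m residual term: (m+1) c_{m+1} - c_{m-1} / m = 0
        c[m + 1] = c[m - 1] / (m * (m + 1))
    return c
```

The recursion reproduces the odd factorial-reciprocal coefficients of sinh t, so the truncated series evaluated at t = 1 is close to sinh 1 ≈ 1.17520.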

Keywords: integro-differential equation, pantograph equations, system of initial value problems, residual power series method

Procedia PDF Downloads 416
8300 Design Intervention to Achieve Space Efficiency for Commercial Interiors

Authors: Hari Krishna Ayyappa, Reenu Singh

Abstract:

Rising population and restricted land for development have led to the growth of vertical buildings and small complexes. This provides many possibilities for changing the shape and size of internal space, in addition to the social impacts on commercial spaces. With the increased volatility of people's necessities, the need for mental and physical comfort has continuously increased. Living in a small space demands minimalist and space-saving furniture to sustain human well-being. This paper explores the influence of minimalist furniture on the efficiency of commercial interiors by means of variables identified in previous literature-based studies. A literature review of research articles was conducted to understand the variables contributing to well-designed small commercial spaces. A questionnaire survey was conducted to understand the layout of small commercial spaces with respect to environmental impact, materials, design elements, modern approaches, layered lighting, and colours. While the problems of small spaces can be resolved in several ways, furniture still needs to become more innovative to accommodate small living spaces; since furniture is a necessity and not a luxury, everybody is in need of it. The spatial factors affecting overall satisfaction are discussed in detail, and the identified variables helped in proposing design ideation and mock-ups to explore improved interiors. This paper concludes that most principles of the minimalist approach have been overlooked, which has had an impact on space efficiency in commercial spaces such as storage rooms, office areas, retail stores, restaurants, and other spaces where business is conducted.

Keywords: materials, modern approach, space efficiency, tall commercial buildings

Procedia PDF Downloads 100
8299 Breast Cancer Therapy-Related Cardiac Dysfunction Identifying in Kazakhstan: Preliminary Findings of the Cohort Study

Authors: Saule Balmagambetova, Zhenisgul Tlegenova, Saule Madinova

Abstract:

Cardiotoxicity associated with anticancer treatment, now defined as cancer therapy-related cardiac dysfunction (CTRCD), accompanies cancer patients and negatively impacts their survivorship. Currently, a cardio-oncological service is being created in Kazakhstan based on the provisions of the European Society of Cardiology (ESC) cardio-oncology guidelines. Within the framework of a pilot project, a cohort study on CTRCD was initiated at the Aktobe Cancer Center. One hundred twenty-eight newly diagnosed breast cancer patients starting doxorubicin and/or trastuzumab were recruited. Echocardiography with global longitudinal strain (GLS) assessment, a biomarker panel (cardiac troponin I (cTnI), brain natriuretic peptide (BNP), myeloperoxidase (MPO), galectin-3 (Gal-3), D-dimer, C-reactive protein (CRP)), and other tests were performed at baseline and every three months. Patients were stratified by cardiovascular risk according to the ESC recommendations and allocated to risk groups during the pre-treatment visit: 10 (7.8%) patients were assigned to the high-risk group, 48 (37.5%) to the medium-risk group, and 70 (54.7%) to the low-risk group. High-risk patients received cardioprotective treatment from the outset. Patients were also divided by treatment: 83 (64.8%) in the anthracycline-based group, 13 (10.2%) in the trastuzumab-only group, and 32 (25%) in the mixed anthracycline/trastuzumab group. Mild symptomatic CTRCD was revealed and treated in 2 (1.6%) participants, and a mild asymptomatic variant in 26 (20.5%). Mild asymptomatic conditions are defined as left ventricular ejection fraction (LVEF) ≥50% with a further relative reduction in GLS by >15% from baseline and/or a further rise in cardiac biomarkers. The listed biomarkers were assessed longitudinally in repeated-measures linear regression models during 12 months of observation. The associations between changes in biomarkers and CTRCD, and between changes in biomarkers and LVEF, were evaluated. Analysis by risk group revealed statistically significant differences in baseline LVEF (p=0.001), BNP (p=0.0075), and Gal-3 (p=0.0073); no statistically significant differences were found between treatment groups at baseline. After 12 months of follow-up, only LVEF values showed a statistically significant difference by risk group (p=0.0011). Assessing temporal changes for all treatment groups, there were statistically significant changes from visit to visit for LVEF (p=0.003), GLS (p=0.0001), BNP (p<0.00001), MPO (p<0.0001), and Gal-3 (p<0.0001). No moderate or strong correlations were found between biomarker values and LVEF, or between biomarkers and GLS. Among the biomarkers themselves, a moderate, close to strong correlation was established between cTnI and D-dimer (r=0.65, p<0.05). The dose-dependent effect of anthracyclines was confirmed: the cumulative dose has a moderate negative correlation with GLS values (r=-0.31 across all treatment groups, p<0.05). The present study found myeloperoxidase to be a promising biomarker of cardiac dysfunction in the mixed anthracycline/trastuzumab treatment group: the hazard of CTRCD increased by 21% (HR 1.21; 95% CI 1.01-1.73) per doubling of the baseline MPO value (p=0.041). Increases in BNP were also associated with CTRCD (HR per doubling 1.22; 95% CI 1.12-1.69). No cases of chemotherapy discontinuation due to cardiotoxic complications have been recorded. Further observations are needed to gain insight into the ability of biomarkers to predict CTRCD onset.

Keywords: breast cancer, chemotherapy, cardiotoxicity, Kazakhstan

Procedia PDF Downloads 84
8298 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation-based X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement of fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies used a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with only a small modification of a commercial bench-top micro-CT (computed tomography) scanner: a piece of sandpaper introduced as the phase analyzer in front of the X-ray source. This approach, however, needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle-tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required and thus limits the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray-tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, addressing the issue that neural networks require large amounts of training data to produce high-quality reconstructions.
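
As a toy illustration of the speckle-tracking step, the displacement of a speckle pattern between a reference and a sample image can be estimated from the peak of their cross-correlation. This is a minimal sketch (whole-image circular correlation on synthetic data), not the windowed, subpixel tracking a real reconstruction would use:

```python
import numpy as np

def speckle_shift(ref, sample):
    """Estimate the (dy, dx) displacement of a speckle pattern from the
    peak of the circular cross-correlation, computed via FFT."""
    r = ref - ref.mean()
    s = sample - sample.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(r)) * np.fft.fft2(s)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative shifts
    return tuple(int(p - n) if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
pattern = rng.random((64, 64))                   # synthetic speckle reference
moved = np.roll(pattern, (2, -3), axis=(0, 1))   # displaced sample image
print(speckle_shift(pattern, moved))             # -> (2, -3)
```

In a real setup, the same idea is applied to small local windows so that a spatially varying displacement field, and from it the differential phase, can be recovered.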

Keywords: micro-ct, neural networks, reconstruction, speckle-based x-ray phase contrast

Procedia PDF Downloads 254
8297 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and analyze massive data sets. With data sizes increasing rapidly, analyzing big data is one of the most important issues today. MapReduce is used to analyze data and extract useful information; the programmer writes only two simple functions, map and reduce, while load balancing, fault tolerance, and high scalability are provided by the framework. Join is among the most important operations in data analysis, but MapReduce does not support it directly. This paper reviews two two-way MapReduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, that uses a hash table to increase performance by eliminating unused records as early as possible and by performing the join with a hash table rather than matching join keys in a map function in the second phase. Using hash tables does not inflate memory use, because only the matched records from the second table are kept. Our experimental results show that the hash semi-join algorithm achieves higher performance than the other two algorithms as the data size grows from 10 million to 500 million records, with running times increasing according to the number of joined records between the two tables.
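
The core idea of the proposed algorithm can be sketched in a few lines: hash the join keys of one table, then stream the other table and discard non-matching rows immediately. This toy single-process version stands in for the MapReduce jobs described above:

```python
def hash_semi_join(left, right, key):
    """Return the rows of `left` whose join key appears in `right`.
    Only the join keys of `right` are hashed, keeping memory use low."""
    right_keys = {row[key] for row in right}   # build the hash table
    # Probe while scanning: unmatched rows are eliminated as early as possible
    return [row for row in left if row[key] in right_keys]

orders = [{"cust": 1, "item": "a"}, {"cust": 2, "item": "b"},
          {"cust": 3, "item": "c"}]
customers = [{"cust": 1}, {"cust": 3}]
print(hash_semi_join(orders, customers, "cust"))  # rows for customers 1 and 3
```
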

Keywords: map reduce, hadoop, semi join, two way join

Procedia PDF Downloads 508
8296 Solving Flowshop Scheduling Problems with Ant Colony Optimization Heuristic

Authors: Arshad Mehmood Ch, Riaz Ahmad, Imran Ali Ch, Waqas Durrani

Abstract:

This study deals with the application of the Ant Colony Optimization (ACO) approach to the no-wait flowshop scheduling problem (NW-FSSP). The ACO algorithm developed here has been implemented in Matlab. The paper covers the detailed steps of applying ACO and focuses on judging the strength of ACO relative to other solution techniques previously applied to the no-wait flowshop problem. The general-purpose approach was able to find reasonably accurate solutions for almost all the problems under consideration and to handle a fairly large spectrum of problems with far less CPU effort. Careful scrutiny of the results reveals that the presented algorithm produces better results than other approaches, such as genetic algorithm and tabu search heuristics, previously applied to the NW-FSSP data sets.
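
The fitness an ACO (or any metaheuristic) must evaluate for each candidate job sequence is the no-wait makespan. A common formulation uses the minimum start-time delay between consecutive jobs; the sketch below uses invented processing times, not the benchmark data sets:

```python
def nw_makespan(seq, p):
    """Makespan of a no-wait flowshop for job order `seq`, where p[j][m]
    is the processing time of job j on machine m. The start-time gap
    between consecutive jobs i, j is max_k(sum(p[i][:k+1]) - sum(p[j][:k]))."""
    def delay(i, j):
        return max(sum(p[i][:k + 1]) - sum(p[j][:k])
                   for k in range(len(p[i])))
    span = sum(delay(a, b) for a, b in zip(seq, seq[1:]))
    return span + sum(p[seq[-1]])  # last job runs through all machines

p = [(2, 2), (1, 3), (3, 1)]       # three jobs on two machines (toy data)
print(nw_makespan([0, 1, 2], p))   # -> 8
```
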

Keywords: no-wait, flowshop, scheduling, ant colony optimization (ACO), makespan

Procedia PDF Downloads 427
8295 Taguchi Method for Analyzing a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

Logistics network design is known as one of the strategic decision problems. As problems of this kind belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we attempt to involve reverse flow through an integrated design of a forward/reverse supply chain network, formulated as a mixed-integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm with a revised random-path direct encoding method is considered as the solution methodology. Every algorithm has parameters that need to be investigated to reveal its best performance. In this regard, the Taguchi method is adopted to identify the optimum operating conditions of the proposed memetic algorithm and improve the results. Four factors are considered: population size, crossover rate, local-search iterations, and number of iterations. Analyzing these parameters and the resulting improvements are the focus of this research.
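
At its core, the Taguchi analysis reduces to averaging the response at each level of each factor and picking the best level. The sketch below uses a two-level L4 array with three hypothetical factors and invented objective values (the study itself uses four factors), purely to show the mechanics:

```python
# Each row assigns a level (0 or 1) to the factors: population size,
# crossover rate, local-search iterations (hypothetical two-level settings)
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
cost = [412.0, 398.0, 405.0, 380.0]   # objective per run (smaller is better)

def best_levels(array, y):
    """Pick, for each factor, the level with the smallest mean response
    (smaller-the-better criterion)."""
    picks = []
    for f in range(len(array[0])):
        by_level = {}
        for row, val in zip(array, y):
            by_level.setdefault(row[f], []).append(val)
        picks.append(min(by_level,
                         key=lambda lv: sum(by_level[lv]) / len(by_level[lv])))
    return picks

print(best_levels(L4, cost))   # -> [1, 1, 0]
```
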

Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method

Procedia PDF Downloads 184
8294 Factorization of Computations in Bayesian Networks: Interpretation of Factors

Authors: Linda Smail, Zineb Azouz

Abstract:

Given a Bayesian network relative to a set I of discrete random variables, we are interested in computing the probability distribution P(S), where S is a subset of I. The general idea is to write the expression of P(S) in the form of a product of factors, where each factor is easy to compute. More importantly, it is very useful to give an interpretation of each of the factors in terms of conditional probabilities. This paper considers a semantic interpretation of the factors involved in computing marginal probabilities in Bayesian networks. Establishing such a semantic interpretation is indeed interesting and relevant in the case of large Bayesian networks.
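
The factorization idea can be made concrete on the chain A → B → C: P(C) = Σ_{A,B} P(A)P(B|A)P(C|B), and the factor obtained after eliminating A is exactly the marginal P(B), which is what gives it a clean conditional-probability interpretation. A toy numerical sketch with binary variables:

```python
# CPTs of a chain A -> B -> C (toy numbers)
pA = {0: 0.6, 1: 0.4}
pB_A = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # keyed (b, a)
pC_B = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5}  # keyed (c, b)

# Eliminate A: the resulting factor over B is exactly the marginal P(B)
pB = {b: sum(pA[a] * pB_A[(b, a)] for a in (0, 1)) for b in (0, 1)}
# Eliminate B: the resulting factor is the target marginal P(C)
pC = {c: sum(pB[b] * pC_B[(c, b)] for b in (0, 1)) for c in (0, 1)}
print(round(pC[0], 3), round(pC[1], 3))   # -> 0.7 0.3
```
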

Keywords: Bayesian networks, D-Separation, level two Bayesian networks, factorization of computation

Procedia PDF Downloads 523
8293 An Android Application for ECG Monitoring and Evaluation Using Pan-Tompkins Algorithm

Authors: Cebrail Çiflikli, Emre Öner Tartan

Abstract:

In parallel with the rapid worldwide growth of the elderly population and the spread of unhealthy lifestyle habits, there is a significant rise in the number of patients and health problems. Supervising people with existing health problems and detecting those at potential risk impose a considerable cost on the health system and increase the workload of physicians. To provide an efficient solution to this problem, mobile applications have in recent years shown their potential for wide use in health monitoring. In this paper, we present an Android mobile application that records and evaluates the ECG signal using the Pan-Tompkins algorithm for QRS detection. The application model includes an alarm mechanism proposed for sending a message containing abnormality and location information to a health supervisor.
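
The QRS-detection stages of Pan-Tompkins can be compressed into a few lines: differentiate, square, integrate over a moving window, and threshold. The sketch below omits the band-pass filter and the adaptive dual thresholds of the full algorithm and runs on a synthetic spike train rather than real ECG:

```python
import numpy as np

def detect_qrs(ecg, fs=200, win_ms=150):
    """Return sample indices where QRS-like activity begins."""
    deriv = np.diff(ecg)                        # emphasize steep QRS slopes
    squared = deriv ** 2                        # rectify and amplify
    win = int(fs * win_ms / 1000)
    mwi = np.convolve(squared, np.ones(win) / win, mode="same")
    thresh = 0.5 * mwi.max()                    # fixed threshold (toy choice)
    above = mwi > thresh
    # Rising edges of the above-threshold regions mark QRS candidates
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

ecg = np.zeros(600)              # synthetic "ECG": baseline with two spikes
ecg[100], ecg[400] = 1.0, 1.0
print(len(detect_qrs(ecg)))      # -> 2
```
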

Keywords: Android mobile application, ECG monitoring, QRS detection, Pan-Tompkins Algorithm

Procedia PDF Downloads 227
8292 The Moment of the Optimal Average Length of the Multivariate Exponentially Weighted Moving Average Control Chart for Equally Correlated Variables

Authors: Edokpa Idemudia Waziri, Salisu S. Umar

Abstract:

Hotelling's T² is a well-known statistic for detecting a shift in the mean vector of a multivariate normal distribution. Control charts based on T² have been widely used in statistical process control for monitoring a multivariate process. Although it is a powerful tool, the T² statistic is deficient when the shift to be detected in the mean vector of a multivariate process is small and consistent. The Multivariate Exponentially Weighted Moving Average (MEWMA) control chart is one of the control statistics used to overcome this drawback of Hotelling's T². In this paper, the probability distribution of the Average Run Length (ARL) of the MEWMA control chart, when the quality characteristics exhibit substantial cross-correlation and the process is in-control or out-of-control, was derived using a Markov chain algorithm. The probability functions and the moments of the run-length distribution were also obtained, and they are consistent with existing results for the in-control and out-of-control situations. By simulation, the procedure identified a class of ARLs for the MEWMA chart in the in-control and out-of-control cases. From our study, it was observed that the MEWMA scheme is quite adequate for detecting a small shift and is a good way to improve the quality of goods and services in a multivariate situation. It was also observed that as the in-control average run length ARL0 or the number of variables (p) increases, the optimum value ARLopt increases asymptotically, and as the magnitude of the shift σ increases, the optimal ARLopt decreases. Finally, we use an example from the literature to illustrate our method and demonstrate its efficiency.
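
The chart statistic the run-length analysis is built on can be sketched directly: the vector EWMA Z_t = rX_t + (1 − r)Z_{t−1} and the quadratic form T²_t = Z_tᵀ Σ_Z⁻¹ Z_t, here with the asymptotic covariance Σ_Z = r/(2 − r)·Σ. The control limit h and the Markov-chain ARL computation are beyond this toy example, and all data are simulated:

```python
import numpy as np

def mewma_stats(X, sigma, r=0.1):
    """T^2-type MEWMA statistic for each row of X (asymptotic covariance)."""
    inv = np.linalg.inv(r / (2 - r) * sigma)
    z = np.zeros(X.shape[1])
    t2 = []
    for x in X:
        z = r * x + (1 - r) * z        # vector EWMA of the observations
        t2.append(z @ inv @ z)
    return np.array(t2)

rng = np.random.default_rng(1)
sigma = np.array([[1.0, 0.5], [0.5, 1.0]])          # equally correlated pair
in_ctrl = rng.multivariate_normal([0, 0], sigma, size=50)
shifted = rng.multivariate_normal([1.0, 1.0], sigma, size=50)  # small shift
t2 = mewma_stats(np.vstack([in_ctrl, shifted]), sigma)
print(t2[:50].mean() < t2[50:].mean())   # the shift inflates the statistic
```
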

Keywords: average run length, markov chain, multivariate exponentially weighted moving average, optimal smoothing parameter

Procedia PDF Downloads 416
8291 Evaluation of a Hybrid System for Renewable Energy in a Small Island in Greece

Authors: M. Bertsiou, E. Feloni, E. Baltas

Abstract:

The proper management of water supply and electricity is a key issue, especially on small islands, where sustainability must be combined with autonomy, the covering of water needs, and the rapid development of promising sectors of the economy. In this research work, a hybrid system on Fournoi island (Icaria), a small island of the Aegean, has been evaluated for producing hydropower and covering water demands, as it can provide solutions to acute problems such as water scarcity and the instability of local power grids. The role and utility of the hybrid system and its cooperation with a desalination plant have also been considered. Projects of this kind have not yet been widely implemented, so this assessment provides valuable information about the storage of water and the controlled distribution of the generated clean energy. The analysis leads to conclusions about the functioning of the system and the profitability of the project in covering the demand for water and electricity.

Keywords: hybrid system, water, electricity, island

Procedia PDF Downloads 318
8290 Improved Multi-Channel Separation Algorithm for Satellite-Based Automatic Identification System Signals Based on Artificial Bee Colony and Adaptive Moment Estimation

Authors: Peng Li, Luan Wang, Haifeng Fei, Renhong Xie, Yibin Rui, Shanhong Guo

Abstract:

The applications of the satellite-based automatic identification system (S-AIS) pave the way for wide-range maritime traffic monitoring and management. However, the satellite's field of view covers multiple AIS self-organizing networks, which leads to collisions between AIS signals from different cells. The contribution of this work is an improved multi-channel blind source separation algorithm based on the Artificial Bee Colony (ABC) algorithm and advanced stochastic optimization, used to separate the mixed AIS signals. The proposed approach adopts a modified ABC algorithm to obtain an optimized initial separating matrix, which expedites the correction of the initialization bias, and utilizes Adaptive Moment Estimation (Adam) to update the separating matrix by adjusting the learning rate for each parameter dynamically. Simulation results show that the algorithm speeds up convergence and achieves better separation accuracy.
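
The Adam update applied to the separating matrix is the generic per-parameter scheme below; since the AIS-specific separation gradient is out of scope here, the sketch demonstrates it on a toy least-squares objective for a 2×2 "separating matrix":

```python
import numpy as np

def adam_step(w, grad, state, lr=0.02, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: per-parameter learning rates from running
    first- and second-moment estimates of the gradient."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad           # first moment (gradient mean)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

target = np.array([[1.0, 0.2], [0.1, 1.0]])   # toy "true" matrix
w = np.zeros((2, 2))
state = (np.zeros((2, 2)), np.zeros((2, 2)), 0)
for _ in range(500):
    w, state = adam_step(w, 2 * (w - target), state)  # grad of ||w - target||^2
print(float(((w - target) ** 2).sum()) < 0.05)        # converged near target
```
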

Keywords: satellite-based automatic identification system, blind source separation, artificial bee colony, adaptive moment estimation

Procedia PDF Downloads 181
8289 An Experimental Investigation of the Effect of Control Algorithm on the Energy Consumption and Temperature Distribution of a Household Refrigerator

Authors: G. Peker, Tolga N. Aynur, E. Tinar

Abstract:

In order to determine the energy consumption level and cooling characteristics of a domestic refrigerator controlled with different cooling-system algorithms, a side-by-side (SBS) refrigerator was tested in a temperature- and humidity-controlled chamber. Two control algorithms, a so-called drop-in algorithm and a frequency-controlled variable-capacity compressor algorithm, were tested on the same refrigerator. The refrigerator's cooling characteristics were investigated for both cases, and the results were compared with each other. The most important comparison parameters between the two algorithms were temperature distribution, energy consumption, evaporation and condensation temperatures, and refrigerator run times. Standard energy consumption tests carried out on the same appliance resulted in almost the same energy consumption levels, with a difference of 1.5%. With these two control algorithms, the power consumption profiles of the refrigerator were found to be similar. Following the associated energy measurement standard, the temperature values of the test packages were measured to be slightly higher for the frequency-controlled algorithm than for the drop-in algorithm. This paper details this experimental study conducted with different cooling control algorithms and compares the findings under the same standard conditions.

Keywords: control algorithm, cooling, energy consumption, refrigerator

Procedia PDF Downloads 366
8288 Study of the Effect of Inclusion of TiO2 in Active Flux on Submerged Arc Welding of Low Carbon Mild Steel Plate and Parametric Optimization of the Process by Using DEA Based Bat Algorithm

Authors: Sheetal Kumar Parwar, J. Deb Barma, A. Majumder

Abstract:

Submerged arc welding is a complex yet very efficient, high-performance welding process. In the present study, an attempt has been made to reduce welding distortion by increasing the amount of oxide in the active flux through TiO2 addition during submerged arc welding. Care has been taken to avoid excessive amounts of the additive so that significant results can be attained. A Data Envelopment Analysis (DEA) based BAT algorithm is used for parametric optimization, in which DEA converts the multiple response parameters into a single response parameter. The present study also helps to establish the effectiveness of adding TiO2 to the active flux during the submerged arc welding process.
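
A stripped-down bat algorithm of the kind used for the parameter search is sketched below on a one-variable toy objective; the DEA step that collapses the multiple welding responses into this single objective, and the welding-specific decision variables, are omitted, and all parameter values are illustrative:

```python
import random

def bat_minimize(f, lo, hi, n_bats=15, iters=200):
    random.seed(3)                      # deterministic toy run
    loud, pulse = 0.9, 0.5              # loudness A and pulse rate r (fixed)
    x = [random.uniform(lo, hi) for _ in range(n_bats)]
    v = [0.0] * n_bats
    best = min(x, key=f)
    for _ in range(iters):
        for i in range(n_bats):
            freq = random.random()                  # frequency in [0, 1]
            v[i] += (best - x[i]) * freq            # pull toward current best
            cand = x[i] + v[i]
            if random.random() > pulse:             # occasional local walk
                cand = best + 0.1 * random.gauss(0, 1)
            cand = min(max(cand, lo), hi)
            if f(cand) < f(x[i]) and random.random() < loud:
                x[i] = cand                         # accept improving move
            if f(cand) < f(best):
                best = cand
    return best

sol = bat_minimize(lambda t: t * t, -5.0, 5.0)
print(f"best ~ {sol:.4f}")   # close to the minimizer 0
```
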

Keywords: BAT algorithm, design of experiment, optimization, submerged arc welding

Procedia PDF Downloads 630
8287 Classification of Small Towns: Three Methodological Approaches and Their Results

Authors: Jerzy Banski

Abstract:

Small towns represent a key element of the settlement structure and serve a number of important functions associated with servicing the rural areas that surround them. It is in light of this that scientific studies have paid considerable attention to the functional structure of centers of this kind, as well as to their relationships with both the surrounding rural areas and other urban centers. A preliminary to such research has typically involved attempts at classifying the urban centers themselves, which also assists with the planning and shaping of development policy on different spatial scales. The purpose of this work is to test the methods underpinning three different classifications of small urban centers, as well as to offer a preliminary interpretation of the outcomes obtained. The research covered 722 settlement units in Poland that hold town rights and have fewer than 20,000 inhabitants. A morphologically based classification, referring to the database of topographic objects as regards land cover within the administrative boundaries of towns and cities, was carried out, making it possible to distinguish the categories of “housing-estate”, industrial, and R&R towns, as well as towns characterized by dichotomy. Equally, a functional/morphological approach taken with the same database allowed for the identification, via an alternative method, of three main categories of small towns (monofunctional, oligofunctional, and multifunctional), which could then be described in far greater detail. A third, multi-criterion classification made simultaneous reference to structural, location-related, and administrative hierarchy-related conditioning, allowing distinctions to be drawn between small towns in nine different categories. The results obtained allow for a multifaceted analysis and interpretation of the geographical differentiation characterizing the distribution of Poland's urban centers across the country.

Keywords: small towns, classification, local planning, Poland

Procedia PDF Downloads 79
8286 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more extreme equating parameters than expected. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to place scores on the new form (Form R) onto the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences in posttest scores on Form Q and Form R by a full-factorial two-way ANOVA. The raw-score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%.
Mean-sigma equating with a small sample may have produced inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resulting skewness and kurtosis worsened compared to the raw-score parameters; form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using the linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pretest score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pretest to posttest. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, applying a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
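
The linear-equating step described above reduces to matching the first two moments: a raw score x on the new form R is placed on the Form Q scale by eq(x) = μ_Q + (σ_Q/σ_R)(x − μ_R). The scores below are invented for illustration:

```python
import statistics as st

def linear_equate(x, scores_r, scores_q):
    """Place a Form R raw score x on the Form Q scale by matching
    means and standard deviations (linear equating)."""
    mu_r, sd_r = st.mean(scores_r), st.pstdev(scores_r)
    mu_q, sd_q = st.mean(scores_q), st.pstdev(scores_q)
    return mu_q + (sd_q / sd_r) * (x - mu_r)

form_r = [10, 12, 14, 16, 18]   # harder form: lower, tighter scores
form_q = [14, 17, 20, 23, 26]
print(linear_equate(14, form_r, form_q))  # the R mean maps to the Q mean, 20.0
```

Mean-sigma equating applies the same kind of mean-and-slope adjustment, but derives it from the IRT item parameters of the anchor items rather than from the raw-score distributions.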

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 189
8285 Performance Comparison of Joint Diagonalization Structure (JDS) Method and Wideband MUSIC Method

Authors: Sandeep Santosh, O. P. Sahu

Abstract:

We simulate an efficient localization algorithm for multiple wideband, non-stationary sources by exploiting both the non-stationarity of the signals and the array geometry. The algorithm is based on the joint diagonalization structure (JDS) of a set of short-time power spectrum matrices taken at different time instants in each frequency bin. JDS can be used for quick and accurate localization of multiple non-stationary sources. The JDS algorithm is a one-stage process, i.e., it directly searches for the directions of arrival (DOAs) over the continuous location parameter space. The JDS method requires that the number of sensors be no less than the number of sources. The simulation results show that the JDS method can resolve two sources whose angular separation is as small as 7 degrees, whereas wideband MUSIC requires a separation of 18 degrees.
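
For reference, the narrowband MUSIC pseudospectrum that incoherent wideband MUSIC averages over frequency bins can be sketched as follows (half-wavelength uniform linear array; the sources, angles, and noise level are toy values, not the paper's simulation setup):

```python
import numpy as np

def music_spectrum(R, n_src, angles_deg):
    """Narrowband MUSIC pseudospectrum over a grid of angles (degrees)."""
    m = R.shape[0]
    vals, V = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = V[:, : m - n_src]               # noise subspace
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))  # ULA steering
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

rng = np.random.default_rng(0)
m, doas = 8, np.deg2rad([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))
S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
N = 0.1 * (rng.standard_normal((m, 200)) + 1j * rng.standard_normal((m, 200)))
X = A @ S + N
R = X @ X.conj().T / 200                 # sample covariance, 200 snapshots
grid = np.arange(-90, 91)
peaks = grid[np.argsort(music_spectrum(R, 2, grid))[-2:]]
print(sorted(peaks.tolist()))            # peaks at the two source directions
```
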

Keywords: joint diagonalization structure (JDS), wideband direction of arrival (DOA), wideband MUSIC

Procedia PDF Downloads 461