Search results for: modified simplex algorithm
3825 Algorithms Minimizing Total Tardiness
Authors: Harun Aydilek, Asiye Aydilek, Ali Allahverdi
Abstract:
The total tardiness is a widely used performance measure in the scheduling literature. This measure is particularly important in situations where completing a job beyond its due date incurs a cost, and the cost of a schedule increases as the gap between a job's due date and its completion time widens. Such costs may include penalty clauses in contracts and loss of goodwill. The measure also matters because customers' due dates have to be taken into account while making scheduling decisions. The problem has been addressed in the literature, but under the assumption of zero setup times. Even though this assumption may be valid for some environments, it is not valid for others. When setup times are treated as separate from processing times, it is possible to increase machine utilization and to reduce total tardiness; non-zero setup times therefore need to be considered separately. A dominance relation is developed and several algorithms that utilize it are proposed. Extensive computational experiments are conducted to evaluate the algorithms and indicate that the developed algorithms perform much better than the existing algorithms in the literature. More specifically, one of the newly proposed algorithms reduces the error of the best existing algorithm in the literature by 40 percent.
Keywords: algorithm, assembly flowshop, dominance relation, total tardiness
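The objective the abstract describes (summed lateness beyond each job's due date, with sequence-dependent setup times counted separately from processing times) can be stated in a few lines. A minimal sketch in Python; the job data and setup table below are hypothetical, and this is not the authors' algorithm:

```python
def total_tardiness(sequence, processing, due, setup):
    """Total tardiness of a job sequence on a single processor, with
    sequence-dependent setup times kept separate from processing times.
    setup[(prev, j)] is the setup before job j when it follows prev
    (prev is None for the first job)."""
    t, tard, prev = 0, 0, None
    for j in sequence:
        t += setup[(prev, j)] + processing[j]  # completion time of job j
        tard += max(0, t - due[j])             # only lateness past the due date counts
        prev = j
    return tard

processing = {0: 4, 1: 3, 2: 5}
due = {0: 5, 1: 6, 2: 14}
setup = {(None, 0): 1, (0, 1): 2, (1, 2): 1}
print(total_tardiness([0, 1, 2], processing, due, setup))  # -> 6
```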
Procedia PDF Downloads 355
3824 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction
Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé
Abstract:
One of the main applications of machine learning is the prediction of time series, but more accurate prediction requires a better-optimized machine learning model. Several optimization techniques have been developed, but without considering the disposition of the system's input variables. This work therefore presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. Validation is carried out on the prediction of wind time series, using data collected in Cameroon. With four input variables there are twenty-four possible dispositions, and each of them is used to perform the prediction, with training and prediction performance as the main criteria. The results obtained from a static and a dynamic neural network architecture show that these performances are a function of the input variables' disposition, and differently so for each architecture. This analysis reveals that the disposition of the input variables must be taken into account to develop a better neural network model. A new neural network training algorithm is therefore proposed, introducing the search for the optimal input variables disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in the training and prediction performance of time series, so the proposed optimization approach can be useful in improving the accuracy of machine-learning-based time series forecasts.
Keywords: input variable disposition, machine learning, optimization, performance, time series prediction
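The core of the approach, scoring all 4! = 24 orderings of the input variables and keeping the best, can be sketched compactly. Since a plain linear model is insensitive to column order, the sketch below feeds the variables through a fixed order-sensitive recurrence as a stand-in for the authors' neural networks; the recurrence, data split, and toy target are assumptions, not the paper's model:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def sequence_features(X, perm, a=0.7):
    """Feed the input variables one at a time, in the given order, through a
    fixed nonlinear recurrence so that the representation depends on order."""
    h = np.zeros(len(X))
    for j in perm:
        h = np.tanh(a * h + X[:, j])
    return np.column_stack([h, np.ones(len(X))])

def best_disposition(X, y, val_frac=0.3):
    n_val = int(len(X) * val_frac)
    best = (np.inf, None)
    for perm in itertools.permutations(range(X.shape[1])):  # 4! = 24 dispositions
        Phi = sequence_features(X, perm)
        w, *_ = np.linalg.lstsq(Phi[:-n_val], y[:-n_val], rcond=None)
        err = np.mean((Phi[-n_val:] @ w - y[-n_val:]) ** 2)
        if err < best[0]:
            best = (err, perm)
    return best  # (validation MSE, winning column ordering)

# Toy demo: the target depends on a nested transform, so ordering matters here.
X = rng.normal(size=(200, 4))
y = np.tanh(0.7 * np.tanh(X[:, 2]) + X[:, 0])
print(best_disposition(X, y))
```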
Procedia PDF Downloads 111
3823 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model
Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis
Abstract:
Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested with all known transit signals, with 100% confirmation. In addition, it has been successfully applied to the Kepler Object of Interest (KOI) data and identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates as well as habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry
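Phase folding itself is simple to state; the toy CPU sketch below (numpy) shows the operation, whereas the paper's contribution is performing the fold over many trial periods in parallel on a GPU. The bin count and the peak-to-peak depth statistic are illustrative choices, not the paper's:

```python
import numpy as np

def fold(time, flux, period, n_bins=200):
    """Map each timestamp to orbital phase and average flux per phase bin,
    so repeated transits stack into a single deeper dip."""
    phase = (time % period) / period
    bins = (phase * n_bins).astype(int)
    total = np.bincount(bins, weights=flux, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return total / np.maximum(counts, 1)  # mean flux per phase bin

def deepest_dip(time, flux, trial_periods):
    """Scan trial periods and return the one whose folded curve dips deepest
    (the GPU version evaluates the trial periods in parallel)."""
    depths = [np.ptp(fold(time, flux, p)) for p in trial_periods]
    return trial_periods[int(np.argmax(depths))]

t = np.arange(0, 120, 0.01)
flux = 1.0 - 0.01 * ((t % 7.3) < 0.2)                 # synthetic transit, period 7.3
print(deepest_dip(t, flux, np.linspace(5, 10, 500)))  # recovers ~7.3
```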
Procedia PDF Downloads 226
3822 GPU Accelerated Fractal Image Compression for Medical Imaging in Parallel Computing Platform
Authors: Md. Enamul Haque, Abdullah Al Kaisan, Mahmudur R. Saniat, Aminur Rahman
Abstract:
In this paper, we have implemented both sequential and parallel versions of fractal image compression algorithms using the CUDA (Compute Unified Device Architecture) programming model, parallelizing the program on the Graphics Processing Unit for medical images, which tend to be highly self-similar. Several improvements have also been made to the implementation of the algorithm. Fractal image compression is based on the self-similarity of an image, meaning an image whose regions largely resemble one another. We take this opportunity to implement the compression algorithm and monitor its behavior in both parallel and sequential implementations. Fractal compression offers a high compression rate and a dimensionless (resolution-independent) scheme. The scheme has two parts, encoding and decoding: encoding is computationally very expensive, while decoding is comparatively cheap. Applying fractal compression to medical images would allow much higher compression ratios to be obtained, and fractal magnification, an inseparable feature of fractal compression, is very useful for presenting the reconstructed image in a highly readable form. However, like all irreversible methods, fractal compression entails information loss, which is especially troublesome in medical imaging, and the very time-consuming encoding process, which can last several hours, is another bothersome drawback.
Keywords: accelerated GPU, CUDA, parallel computing, fractal image compression
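The expensive encoding step the abstract refers to is a search: every range block is matched against every candidate domain block under an affine intensity map. A stripped-down CPU sketch (numpy, non-overlapping blocks, rotations and flips omitted; these simplifications are assumptions for brevity, not the paper's CUDA code) shows the structure of the work the GPU parallelizes:

```python
import numpy as np

def encode(img, r=4):
    """Toy fractal encoder: for each r x r range block, find the 2r x 2r
    domain block and affine map (s, o) minimizing ||s*D + o - R||^2."""
    H, W = img.shape
    domains = []
    for y in range(0, H - 2 * r + 1, 2 * r):
        for x in range(0, W - 2 * r + 1, 2 * r):
            d = img[y:y + 2 * r, x:x + 2 * r]
            d = d.reshape(r, 2, r, 2).mean(axis=(1, 3))  # downsample to r x r
            domains.append(((y, x), d.ravel()))
    code = []
    for y in range(0, H, r):
        for x in range(0, W, r):
            blk = img[y:y + r, x:x + r].ravel().astype(float)
            best = None
            for pos, d in domains:
                A = np.column_stack([d, np.ones_like(d)])
                sol, *_ = np.linalg.lstsq(A, blk, rcond=None)  # least-squares (s, o)
                err = float(np.sum((A @ sol - blk) ** 2))
                if best is None or err < best[0]:
                    best = (err, pos, *sol)
            code.append(((y, x), best[1:]))  # range pos -> (domain pos, s, o)
    return code

# Every (range, domain) pair is independent, which is what the GPU exploits.
print(len(encode(np.random.default_rng(0).random((32, 32)))))
```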
Procedia PDF Downloads 336
3821 Synthesis, Characterization, and Catalytic Application of Modified Hierarchical Zeolites
Authors: A. Feliczak Guzik, I. Nowak
Abstract:
Zeolites, classified as microporous materials, are a large group of crystalline aluminosilicate materials commonly used in the chemical industry. These materials are characterized by large specific surface area, high adsorption capacity, and hydrothermal and thermal stability. However, their micropores impose strong mass transfer limitations, resulting in low catalytic performance. Consequently, mesoporous (hierarchical) zeolites, which possess additional porosity in the mesopore size region (2-50 nm according to IUPAC), have attracted considerable attention from researchers. Mesoporous zeolites based on commercial MFI-type zeolites modified with silver were synthesized as follows: 0.5 g of zeolite was dispersed in a mixture containing CTABr (template), water, ethanol, and ammonia under ultrasound for 30 min at 65°C. The silicon source, tetraethyl orthosilicate, was then added and the mixture stirred for 4 h, after which silver(I) nitrate was added. The whole mixture was then filtered and washed with a water:ethanol mixture, and the template was removed by calcination at 550°C for 5 h. All the materials obtained were characterized by the following techniques: X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), nitrogen adsorption/desorption isotherms, and FTIR spectroscopy. X-ray diffraction and low-temperature nitrogen adsorption/desorption isotherms revealed additional secondary porosity, and the structure of the commercial zeolite was preserved during most of the material syntheses. The aforementioned materials were used in the epoxidation of cyclohexene under conventional heating and under microwave radiation heating, with the composition of the reaction mixture analyzed every 1 h by gas chromatography. As a result, about 60% conversion of cyclohexene and high selectivity to the desired reaction products, i.e., 1,2-epoxycyclohexane and 1,2-cyclohexanediol, were obtained.
Keywords: catalytic application, characterization, epoxidation, hierarchical zeolites, synthesis
Procedia PDF Downloads 89
3820 Heuristic Classification of Hydrophone Recordings
Authors: Daniel M. Wolff, Patricia Gray, Rafael de la Parra Venegas
Abstract:
An unsupervised machine listening system is constructed and applied to a dataset of 17,195 30-second marine hydrophone recordings. The system is then heuristically supplemented with anecdotal listening, contextual recording information, and supervised learning techniques to reduce the number of false positives. Features for classification are assembled by extracting the following data from each of the audio files: the spectral centroid, root-mean-squared values for each frequency band of a 10-octave filter bank, and mel-frequency cepstral coefficients in 5-second frames. In this way both time- and frequency-domain information are contained in the features to be passed to a clustering algorithm. Classification is performed using the k-means algorithm and then a k-nearest neighbors search. Different values of k are experimented with, in addition to different combinations of the available feature sets. Hypothesized class labels are 'primarily anthrophony' and 'primarily biophony', where the best class result conforming to the former label has 104 members after heuristic pruning. This demonstrates how a large audio dataset has been made more tractable with machine learning techniques, forming the foundation of a framework designed to acoustically monitor and gauge biological and anthropogenic activity in a marine environment.
Keywords: anthrophony, hydrophone, k-means, machine learning
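The feature recipe in the abstract (spectral centroid and MFCCs over 5-second frames, averaged per clip) maps directly onto standard library calls. A condensed sketch using librosa and scikit-learn on synthetic stand-in clips; the 10-octave filter-bank RMS features are omitted for brevity, and the two-cluster labels are only hypothesized, as in the paper:

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def clip_features(y, sr, frame_s=5.0):
    """Per-clip feature vector: spectral centroid and 13 MFCCs computed over
    5-second frames, then averaged across the clip."""
    hop = int(frame_s * sr)
    cent = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)
    return np.concatenate([cent.mean(axis=1), mfcc.mean(axis=1)])

# Synthetic stand-ins for two 30-second hydrophone clips (tone vs. noise).
sr = 16000
clips = [np.sin(2 * np.pi * 440 * np.arange(sr * 30) / sr),
         np.random.default_rng(0).normal(size=sr * 30)]
X = np.array([clip_features(y, sr) for y in clips])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(labels)  # hypothesized: anthrophony vs. biophony
```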
Procedia PDF Downloads 170
3819 Creation of Computerized Benchmarks to Facilitate Preparedness for Biological Events
Abstract:
Introduction: Communicable diseases and pandemics pose a growing threat to the well-being of the global population. A vital component of protecting public health is the creation and maintenance of continuous preparedness for such hazards. A joint Israeli-German task force was deployed to develop an advanced tool for self-evaluation of emergency preparedness for variable types of biological threats. Methods: Based on a comprehensive literature review and interviews with leading content experts, an evaluation tool was developed from quantitative and qualitative parameters and indicators. A modified Delphi process was used to achieve consensus among over 225 experts from Germany and Israel concerning the items to be included in the evaluation tool. The validity and applicability of the tool for medical institutions were examined in a series of simulation and field exercises. Results: Over 115 German and Israeli experts reviewed and examined the proposed parameters as part of the modified Delphi cycles. A consensus of over 75% of experts was attained for 183 out of 188 items. The relative importance of each parameter was rated as part of the Delphi process in order to define its impact on overall emergency preparedness. The parameters were integrated into computerized web-based software that calculates scores of emergency preparedness for biological events. Conclusions: The parameters developed in the joint German-Israeli project serve as benchmarks that delineate the actions to be implemented in order to create and maintain ongoing preparedness for biological events. The computerized evaluation tool enables continuous monitoring of the level of readiness, so that strengths and gaps can be identified and corrected appropriately. Adoption of such a tool is recommended as an integral component of quality assurance of public health and safety.
Keywords: biological events, emergency preparedness, bioterrorism, natural biological events
Procedia PDF Downloads 425
3818 Design of Low Latency Multiport Network Router on Chip
Authors: P. G. Kaviya, B. Muthupandian, R. Ganesan
Abstract:
On-chip routers typically use buffers at the input or output ports to temporarily store packets, and these buffers consume a share of the router's area and power. A virtual-channel (VC) router maintains multiple queues in parallel, yet while running a traffic trace not all input ports have incoming packets to transfer, so large numbers of queues sit empty while others are busy, and time consumption becomes high under heavy traffic. The RoShaQ architecture therefore minimizes buffer area and time: input packets travel through shared queues at low traffic loads and bypass the shared queues at high loads, reducing both power and area consumption. This project proposes a parallel crossbar architecture to further reduce power consumption, together with a new adaptive weighted routing algorithm for an 8-port router architecture that decreases the delay of the network-on-chip router. The proposed system is simulated using ModelSim and synthesized using Xilinx Project Navigator.
Keywords: buffer, RoShaQ architecture, shared queue, VC router, weighted routing algorithm
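The essence of an adaptive weighted routing decision is a scored choice among candidate output ports. A hypothetical sketch; the metrics and weights below are illustrative assumptions, since the abstract does not spell out the algorithm's cost terms:

```python
def select_port(candidates, queue_len, hops, w_q=0.7, w_h=0.3):
    """Pick the output port minimizing a weighted sum of local congestion
    (queue occupancy) and remaining hop count toward the destination."""
    return min(candidates, key=lambda p: w_q * queue_len[p] + w_h * hops[p])

# Example: port 2 costs one extra hop but is far less congested, so it wins.
print(select_port([0, 2], queue_len={0: 14, 2: 3}, hops={0: 2, 2: 3}))
```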
Procedia PDF Downloads 542
3817 Locomotion Effects of Redundant Degrees of Freedom in Multi-Legged Quadruped Robots
Authors: Hossein Keshavarz, Alejandro Ramirez-Serrano
Abstract:
Energy efficiency and locomotion speed are two key parameters for legged robots; thus, finding ways to improve them is important. This paper proposes a locomotion framework to analyze the energy usage and speed of quadruped robots via a Genetic Algorithm (GA) optimization process. For this, a quadruped robot platform with joint redundancy in its hind legs, which we believe will help multi-legged robots improve their speed and energy consumption, is used. ContinuO, the quadruped robot of interest, has 14 active degrees of freedom (DoFs), including three DoFs for each front leg and, unlike previously developed quadruped robots, four DoFs for each hind leg. ContinuO aims to realize a cost-effective quadruped robot for real-world scenarios with high speeds and the ability to overcome large obstructions. The proposed framework is used to locomote the robot and analyze the energy it consumes at diverse stride lengths and locomotion speeds. The analysis is performed by comparing the results obtained in two modes: with and without the joint redundancy on the robot's hind legs.
Keywords: genetic algorithm optimization, locomotion path planning, quadruped robots, redundant legs
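A GA over gait parameters can be illustrated with a toy cost-of-transport objective. Everything below is an assumption for illustration (a two-parameter gait encoding, a surrogate energy model, truncation selection with mutation only, crossover omitted), standing in for evaluations of the full 14-DoF robot model:

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(params):
    """Surrogate objective: energy per distance for a gait parameterized by
    (stride length, stride frequency). Purely illustrative."""
    stride, freq = params
    speed = stride * freq
    energy = 0.5 * freq ** 2 + 2.0 * stride ** 2   # hypothetical energy model
    return energy / max(speed, 1e-6)               # cost of transport

def ga(pop_size=30, gens=50, bounds=(0.05, 2.0)):
    pop = rng.uniform(*bounds, size=(pop_size, 2))
    for _ in range(gens):
        fit = np.array([cost(p) for p in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # truncation selection
        n_child = pop_size - len(parents)
        children = (parents[rng.integers(0, len(parents), n_child)]
                    + rng.normal(0, 0.05, (n_child, 2)))  # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), *bounds)
    return pop[np.argmin([cost(p) for p in pop])]

print(ga())  # gait parameters minimizing the surrogate cost of transport
```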
Procedia PDF Downloads 110
3816 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings
Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun
Abstract:
Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation, and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application in HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible," in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault diagnosis and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, the algorithm inherits the structure of the actual bilinear model of the building's thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, e.g., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system component FDI. The proposed strategy for FTC works as follows: at the first level, FDI algorithms are implemented, which also makes it possible to estimate the magnitude of the fault; once a fault is detected, the fault estimate feeds the second level, which reconfigures the control law so that the expected performance is recovered. The paper is organized as follows: a general structure for fault-tolerant control of buildings is first presented and the building model under consideration is introduced; then the observer-based design for fault diagnosis of bilinear systems is studied; the FTC approach is developed in Section IV; finally, a simulation example is given in Section V to illustrate the proposed method.
Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zones building
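For readers who want the shape of the mathematics: a standard bilinear state-space model with an unknown-input observer and a fault-sensitive residual can be written as below. This is the generic textbook form under assumed notation, not equations reproduced from the paper:

```latex
\begin{align}
\dot{x}(t) &= A\,x(t) + \sum_{i=1}^{m} N_i\,x(t)\,u_i(t) + B\,u(t) + E\,d(t) + F\,f(t),\\
y(t) &= C\,x(t),\\
\dot{\hat{x}}(t) &= A\,\hat{x}(t) + \sum_{i=1}^{m} N_i\,\hat{x}(t)\,u_i(t) + B\,u(t) + L\,\big(y(t) - C\,\hat{x}(t)\big),\\
r(t) &= y(t) - C\,\hat{x}(t),
\end{align}
```

where x is the zone-temperature state, u collects the control inputs, d the unknown inputs (weather, occupancy), f the fault, and the observer gain L is chosen so that the residual r is decoupled from d yet remains sensitive to f.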
Procedia PDF Downloads 173
3815 On the Influence of the Metric Space in the Critical Behavior of Magnetic Temperature
Authors: J. C. Riaño-Rojas, J. D. Alzate-Cardona, E. Restrepo-Parra
Abstract:
In this work, a study of generic magnetic nanoparticles under varying metric spaces is presented. As the metric space is changed, the nanoparticle form and the inner product are also varied, since the energetic scale is not conserved. The study is carried out using Monte Carlo simulations combining the Wolff embedding and Metropolis algorithms: the Metropolis algorithm is used in high-temperature regions to reach equilibrium quickly, while the Wolff embedding algorithm is used in the low-temperature and critical regions to reduce the critical slowing-down phenomenon. The number of ions is kept constant across the different forms, and the critical temperatures are found using finite-size scaling. We observed that the critical temperatures do not exhibit significant changes when the metric space is varied. Additionally, the effective dimension corresponding to each metric space was determined. A study of the static behavior was carried out to obtain the static critical exponents. The objective of this work is to observe the behavior of thermodynamic quantities such as energy, magnetization, specific heat, susceptibility, and Binder's cumulants in the critical region, in order to determine whether magnetic nanoparticles describe their magnetic interactions in Euclidean space or whether there is a correspondence in other metric spaces.
Keywords: nanoparticles, metric, Monte Carlo, critical behaviour
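The Metropolis half of the hybrid scheme is compact enough to sketch. The single-spin-flip sweep below is the standard Ising-type update under an assumed nearest-neighbor coupling J; the Wolff embedding step used near criticality, and the metric-dependent neighbor structure, are left abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis_sweep(spins, neighbors, T, J=1.0):
    """One Metropolis sweep: flip each spin with the standard acceptance
    probability min(1, exp(-dE/T)). neighbors[i] lists the sites coupled
    to spin i under the chosen metric."""
    for i in range(len(spins)):
        dE = 2.0 * J * spins[i] * sum(spins[j] for j in neighbors[i])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1  # accept the single-spin flip
    return spins

# 1D ring of 100 spins as a stand-in geometry for the demo.
N = 100
spins = rng.choice([-1, 1], size=N)
nbrs = [[(i - 1) % N, (i + 1) % N] for i in range(N)]
for _ in range(200):
    metropolis_sweep(spins, nbrs, T=1.5)
print(spins.mean())
```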
Procedia PDF Downloads 516
3814 Energy Efficient Clustering with Adaptive Particle Swarm Optimization
Authors: Kumar Shashvat, Arshpreet Kaur, Rajesh Kumar, Raman Chadha
Abstract:
Wireless sensor networks have the principal characteristic of restricted energy, with the limitation that the energy of the nodes cannot be replenished. To increase the lifetime in this scenario, the route for data transmission is chosen such that energy utilization along the selected route is minimal. Such an energy-efficient network needs a sound infrastructure, because the infrastructure affects the network lifespan. Clustering is a technique in which nodes are grouped into disjoint, non-overlapping sets, with data collected at the cluster head. In this paper, an Adaptive-PSO algorithm is proposed which forms energy-aware clusters by minimizing the cost of locating the cluster head. The main concern is the suitability of the swarms, addressed by adjusting the learning parameters of PSO: Particle Swarm Optimization converges quickly at the beginning of the search, but over the course of time it becomes stable and may be trapped in local optima. In the suggested network model, the swarms are given spider-like intelligence, which makes them capable of avoiding premature convergence and also helps them escape from local optima. A comparative analysis with traditional PSO shows that the new algorithm considerably enhances performance when multi-dimensional functions are taken into consideration.
Keywords: particle swarm optimization, adaptive PSO, comparison between PSO and A-PSO, energy efficient clustering
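For reference, the canonical PSO loop the adaptive variant builds on looks like this. The inertia and acceleration coefficients are fixed here, whereas the paper's contribution is precisely to adapt them on the fly; the toy cost function is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(cost, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: velocity update pulls each particle toward its personal
    best and the global best; cost maps a candidate (e.g., a cluster-head
    position) to a scalar to minimize."""
    x = rng.uniform(-1, 1, (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pcost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)]
    return g

print(pso(lambda p: np.sum(p ** 2), dim=2))  # toy cost: distance to origin
```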
Procedia PDF Downloads 249
3813 Kaolinite-Assisted Microencapsulation of Octodecane for Thermal Energy Storage
Authors: Ting Pan, Jiacheng Wang, Pengcheng Lin, Ying Chen, Songping Mo
Abstract:
Phase change materials (PCMs) are widely used in latent heat thermal energy storage because of their good properties, such as high energy storage density and constant heat-storage/release temperature. Microencapsulation techniques can prevent PCMs from leaking during the liquid-solid phase transition and enhance their thermal properties; the technique has been widely applied in architectural materials, thermo-regulated textiles, aerospace fields, etc. One of the most important steps in the synthesis of microcapsules is forming a stable emulsion of the PCM core and the reactant solution for the formation of the microcapsule shell. The use of surfactants is usually necessary for a stable emulsion system because of the difference in hydrophilicity/lipophilicity between the PCM and the solvent; unfortunately, the use of surfactants may pollute the environment. In this study, modified kaolinite was used as an emulsion stabilizer for the microencapsulation of octodecane as the PCM. Microcapsules were synthesized by the phase inversion emulsification method, and the shell of polymethyl methacrylate (PMMA) was formed through free radical polymerization. The morphologies, crystalloid phase, and crystallization properties of the microcapsules were investigated using scanning electron microscopy (SEM), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FTIR), while the thermal properties and thermal stability were investigated by differential scanning calorimetry (DSC) and thermogravimetric analysis (TG). The FTIR and XRD results showed that the octodecane was well encapsulated in the PMMA shell, and the SEM results showed that the microcapsules were spheres with an average size of about 50-100 nm. The DSC results indicated that the latent heat of the microcapsules was 152.64 kJ/kg and 164.23 kJ/kg, and the TG results confirmed that the microcapsules had good thermal stability owing to the PMMA shell. Based on these results, it can be concluded that modified kaolinite can be used as an emulsifier for the synthesis of PCM microcapsules, which helps reduce the pollution otherwise caused by the use of surfactants.
Keywords: kaolinite, microencapsulation, PCM, thermal energy storage
Procedia PDF Downloads 133
3812 Multi Objective Optimization for Two-Sided Assembly Line Balancing
Authors: Srushti Bhatt, M. B. Kiran
Abstract:
The two-sided assembly line balancing problem has yet to be addressed simply, even though manufacturers must solve it to compete in the global market. Assigning tasks in an ordered sequence so as to obtain optimal system performance is known as the assembly line balancing problem, mainly classified into one-sided and two-sided variants. Balancing a two-sided assembly line, wherein task operations are performed on both sides of the line at a set of sequential workstations, is very challenging in manufacturing industries. The major conflicting objective in the two-sided assembly line balancing problem is to maximize/minimize the performance parameters. The present study emphasizes combining different evolutionary algorithms (ant colony, tabu search, and the Petri net method) and compares their results in solving the two-sided assembly line balancing problem. The concept of multi-objective optimization of performance parameters is nowadays adopted to make decisions involving more than one objective function to be optimized simultaneously, and the optimum result can be expected among the selected methods using multi-objective optimization. The performance parameters considered in the present study are the number of workstations, slickness, and the smoothness index. Simulation of the assembly line balancing problem provides optimal results for classical and practical problems.
Keywords: ant colony, Petri net, tabu search, two-sided ALBP
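Of the performance parameters listed, the smoothness index has a standard closed form in the line-balancing literature: the root of the summed squared gaps between each station time and the busiest station. A small sketch, with line efficiency added for context:

```python
import math

def smoothness_index(station_times):
    """Standard smoothness index: sqrt(sum((t_max - t_k)^2)) over stations.
    Lower is better; zero means a perfectly even workload."""
    t_max = max(station_times)
    return math.sqrt(sum((t_max - t) ** 2 for t in station_times))

def line_efficiency(station_times, cycle_time):
    return sum(station_times) / (len(station_times) * cycle_time)

# A balance with nearly equal station loads scores a lower (better) index.
print(smoothness_index([9, 10, 10, 9]), smoothness_index([4, 10, 12, 12]))
print(line_efficiency([9, 10, 10, 9], cycle_time=10))
```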
Procedia PDF Downloads 278
3811 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea
Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng
Abstract:
During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zones of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, losses or damage may occur, with an extent that depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people to be saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as 'a false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before any next location is selected. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches for the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and corresponding algorithm.
Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea
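The greedy strategy described, repeatedly searching the location with the highest current effectiveness value, pairs naturally with a Bayesian update for the false-negative ('overlooked') error. The effectiveness ratio and the posterior update below are a standard formulation assumed for illustration, not the paper's exact model; false positives are omitted for brevity:

```python
def greedy_route(locations, p_target, p_detect, cost):
    """Visit locations in order of expected detection per unit cost,
    updating the posterior target probabilities after each (assumed
    unsuccessful) look, so the whole search history shapes each choice."""
    route, p = [], dict(p_target)
    while len(route) < len(locations):
        best = max((loc for loc in locations if loc not in route),
                   key=lambda loc: p[loc] * p_detect[loc] / cost[loc])
        route.append(best)
        miss = 1 - p_detect[best]                 # false-negative probability
        denom = 1 - p[best] * p_detect[best]      # Bayes normalizer after a miss
        p = {loc: (p[loc] * miss if loc == best else p[loc]) / denom for loc in p}
    return route

p_target = {"A": 0.5, "B": 0.3, "C": 0.2}
p_detect = {"A": 0.6, "B": 0.9, "C": 0.8}
cost = {"A": 1.0, "B": 1.0, "C": 2.0}
print(greedy_route(list(p_target), p_target, p_detect, cost))  # -> ['A', 'B', 'C']
```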
Procedia PDF Downloads 174
3810 Simulation-Based Optimization of a Non-Uniform Piezoelectric Energy Harvester with Stack Boundary
Authors: Alireza Keshmiri, Shahriar Bagheri, Nan Wu
Abstract:
This research presents an analytical model, based on the Adomian decomposition method, for the development of an energy harvester with piezoelectric rings stacked at the boundary of the structure. The model is applied to geometrically non-uniform beams to derive the steady-state dynamic response of the structure subjected to base motion excitation and to efficiently harvest the resulting vibrational energy. The in-plane polarization of the piezoelectric rings is employed to enhance the electrical power output. A parametric study of the proposed energy harvester with various design parameters is carried out to prepare the dataset required for optimization. Finally, a simulation-based optimization technique is used to find the optimum structural design with maximum efficiency. To solve the optimization problem, an artificial neural network is first trained to replace the simulation model, and then a genetic algorithm is employed to find the optimized design variables. Higher geometrical non-uniformity and greater beam length lower the structure's natural frequency and generate a larger power output.
Keywords: piezoelectricity, energy harvesting, simulation-based optimization, artificial neural network, genetic algorithm
Procedia PDF Downloads 124
3809 Data Transformations in Data Envelopment Analysis
Authors: Mansour Mohammadpour
Abstract:
Data transformation refers to the modification of any point in a data set by a mathematical function. When transformations are applied, the measurement scale of the data is modified. Data transformations are commonly employed to turn data into an appropriate form, which can serve various functions in the quantitative analysis of the data. This study investigates the use of data transformations in Data Envelopment Analysis (DEA). Although data transformations are important options for analysis, they fundamentally alter the nature of the variable, making the interpretation of the results somewhat more complex.
Keywords: data transformation, data envelopment analysis, undesirable data, negative data
Procedia PDF Downloads 26
3808 Design Approach to Incorporate Unique Performance Characteristics of Special Concrete
Authors: Devendra Kumar Pandey, Debabrata Chakraborty
Abstract:
The advancement in various concrete ingredients, such as plasticizers, additives, and fibers, has enabled concrete technologists to develop many viable varieties of special concrete in recent decades. These varieties offer significant enhancements in the green as well as hardened properties of concrete, and a prudent selection of the appropriate type of concrete can resolve many design and application issues in construction projects. This paper focuses on the use of self-compacting concrete, high early strength concrete, structural lightweight concrete, fiber reinforced concrete, high performance concrete, and ultra-high strength concrete in structures. The modified properties of strength at various ages, flowability, porosity, equilibrium density, flexural strength, elasticity, permeability, etc., need to be carefully studied and incorporated into the design of structures. The paper demonstrates various mixture combinations and the concrete properties that can be leveraged, and proposes selecting such products based on the end use of the structure in order to efficiently utilize the modified characteristics of these concrete varieties. The study involves mapping the characteristics to benefits and savings for the structure from a design perspective. Self-compacting concrete is characterized by high shuttering loads, better finish, and the feasibility of closer reinforcement spacing; structural design procedures can be modified to specify higher formwork strength, taller vertical members, reduced cover, and increased ductility, with transverse reinforcement spaced at closer intervals than in regular structural concrete. Structural lightweight concrete allows structures to be designed for reduced dead load and increased insulation: member dimensions and steel requirements can be reduced in proportion to the 25 to 35 percent reduction in dead load from the self-weight of the concrete. Steel fiber reinforced concrete can be used to design grade slabs without primary reinforcement because of its 70 to 100 percent higher tensile strength, with design procedures incorporating reduced thickness and joint spacing. High performance concrete increases the life of structures through improved paste characteristics and durability achieved by incorporating supplementary cementitious materials; it is often also designed for slower heat generation in the initial phase of hydration, so the structural designer can incorporate the slow development of strength into the design and specify a 56- or 90-day strength requirement, while creep and elasticity properties must also be considered when designing high-rise building structures. Lastly, certain structures require performance under loading conditions much earlier than the final maturity of the concrete; high early strength concrete has been designed to cater to a variety of such uses at ages as early as 8 to 12 hours. Therefore, an understanding of the performance specifications of special concrete is a definite door towards a superior structural design approach.
Keywords: high performance concrete, special concrete, structural design, structural lightweight concrete
Procedia PDF Downloads 305
3807 Detection of Cryptosporidium Oocysts by Acid-Fast Staining Method and PCR in Surface Water from Tehran, Iran
Authors: Mohamad Mohsen Homayouni, Niloofar Taghipour, Ahmad Reza Memar, Niloofar Khalaji, Hamed Kiani, Seyyed Javad Seyyed Tabaei
Abstract:
Background and Objective: Cryptosporidium is a coccidian protozoan parasite whose oocysts in surface water are a global health problem. Due to the low number of parasites in water resources and the lack of laboratory culture, a rapid and sensitive method for detecting the organism in water resources is required. We applied modified acid-fast staining and PCR for the detection of Cryptosporidium spp. and analysed the genotypes in 55 samples collected from surface water. Methods: Over a period of nine months, 55 surface water samples were collected from five rivers in Tehran, Iran. The samples were filtered using cellulose acetate membrane filters. Initial identification of Cryptosporidium oocysts in the surface water samples was carried out by the acid-fast method; a nested PCR assay was then designed for specific amplification and genotype analysis. Results: The modified Ziehl-Neelsen method revealed 5-20 Cryptosporidium oocysts per 10 liters. Five of the 55 (9.09%) surface water samples were found positive for Cryptosporidium spp. by the Ziehl-Neelsen test and seven (12.7%) by nested PCR; the staining results were consistent with PCR. Seven Cryptosporidium PCR products were successfully sequenced and five gp60 subtypes were detected. Analysis of the gp60 gene revealed that all of the positive isolates were Cryptosporidium parvum and belonged to subtype families IIa and IId. Conclusion: Our investigation showed that the collected water samples were contaminated by Cryptosporidium, a potentially significant health hazard. This study provides the first report on the detection and genotyping of Cryptosporidium species from surface water samples in Iran, and its results confirm the low clinical incidence of this parasite in the community.
Keywords: Cryptosporidium spp., membrane filtration, subtype, surface water, Iran
Procedia PDF Downloads 417
3806 Isolation Preserving Medical Conclusion Hold Structure via C5 Algorithm
Authors: Swati Kishor Zode, Rahul Ambekar
Abstract:
Data mining is the extraction of interesting patterns or information from enormous amounts of data, with decisions made according to the relevant information extracted. Lately, with the explosive advancement of the internet, data storage, and processing techniques, privacy preservation has been one of the major concerns in data mining, and various techniques and methods have been produced for privacy-preserving data mining. In the setting of a clinical decision support system, the decision is to be made on the basis of data retrieved from remote servers via the internet in order to diagnose the patient. In this paper, the fundamental idea is to improve the accuracy of the decision support system for multiple diseases while protecting patient information during communication between the clinician (client) side and the server side. A privacy-preserving protocol for a clinical decision support network is proposed so that patient information always remains encrypted during the diagnosis process while accuracy is maintained. To enhance the accuracy of the decision support system for various diseases, C5.0 classifiers are used, and to preserve privacy, the Paillier cryptosystem, a homomorphic encryption algorithm, is utilized.
Keywords: classification, homomorphic encryption, clinical decision support, privacy
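The property such protocols rely on is Paillier's additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate encrypted values without ever decrypting them. A toy self-contained sketch with deliberately tiny, insecure primes (the parameter choices are illustrative only; requires Python 3.8+ for modular inverse via pow):

```python
from math import gcd
import random

def paillier_demo(p=293, q=433):
    """Toy Paillier keypair demonstrating E(a) * E(b) mod n^2 = E(a + b)."""
    n, lam = p * q, (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lam = lcm(p-1, q-1)
    n2, g = n * n, n + 1                                    # g = n + 1 is a valid choice
    def L(u): return (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    def enc(m):
        r = random.randrange(1, n)
        while gcd(r, n) != 1: r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2
    def dec(c): return (L(pow(c, lam, n2)) * mu) % n
    a, b = 17, 25
    assert dec((enc(a) * enc(b)) % n2) == a + b   # sums computed under encryption
    return "additive homomorphism verified"

print(paillier_demo())
```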
Procedia PDF Downloads 331
3805 Hydrophobically Modified Glycol Chitosan Nanoparticles as a Carrier for Etoposide
Authors: Akhtar Aman, Abida Raza, Shumaila Bashir, Javaid Irfan, Andreas G. Schätzlein, Ijeoma F Uchegbeu
Abstract:
The development of an efficient delivery system for hydrophobic drugs remains a major concern in chemotherapy. The objective of the current study was to develop a polymeric drug-delivery system for etoposide from amphiphilic derivatives of glycol chitosan, capable of improving the pharmacokinetics and reducing the adverse effects of etoposide caused by the various organic solvents used to solubilize it in commercial formulations. As a promising carrier, amphiphilic derivatives of glycol chitosan were synthesized by chemically grafting palmitic acid N-hydroxy succinimide onto the glycol chitosan backbone followed by quaternisation; in this way, a 7.9 kDa glycol chitosan was modified by palmitoylation and quaternisation into a 13 kDa polymer (GCPQ). Nano-sized micelles prepared from this amphiphilic polymer were able to encapsulate up to 3 mg/ml etoposide. The pharmacokinetic results indicated that the GCPQ-based etoposide formulation transformed the biodistribution pattern: AUC 0.5-24 hr showed statistically significant differences for ETP-GCPQ versus the commercial preparation in liver (25 vs. 70, p<0.001), spleen (27 vs. 36, p<0.05), lungs (42 vs. 136, p<0.001), kidneys (25 vs. 30, p<0.05), and brain (19 vs. 9, p<0.001), with etoposide concentration in the brain significantly increased compared to the free drug after intravenous administration. Using the hydrophobic fluorescent dye Nile red, we showed that the micelles efficiently delivered their payload to MCF7 and A2780 cancer cells in vitro and to A431 xenograft tumors in vivo, suggesting these systems could deliver hydrophobic anti-cancer drugs such as etoposide to tumors. GCPQ-based formulations not only reduced the side effects associated with currently available formulations but also increased transport through biological barriers, making them a good delivery system.
Keywords: glycol chitosan, Nile red, micelles, etoposide, A431 xenografts
Procedia PDF Downloads 314
3804 Resource-Constrained Assembly Line Balancing Problems with Multi-Manned Workstations
Authors: Yin-Yann Chen, Jia-Ying Li
Abstract:
Assembly line balancing problems can be categorized into one-sided, two-sided, and multi-manned ones according to the number of operators deployed at workstations. This study explores the balancing problem of a resource-constrained assembly line with multi-manned workstations, where resources include machines or tools in assembly lines such as jigs, fixtures, and hand tools. A mathematical programming model was developed to carry out decision-making and planning in order to minimize the numbers of workstations, resources, and operators for achieving optimal production efficiency. To improve solution-finding efficiency, a genetic algorithm (GA) and a simulated annealing algorithm (SA) were designed and developed in this study and applied to a practical case in car making. The results of the GA/SA and the mathematical programming were compared to verify their validity. Finally, the target values, production efficiency, and deployment combinations provided by the algorithms were analyzed and compared so that the results of this study can serve as references for decision-making on production deployment.
Keywords: heuristic algorithms, line balancing, multi-manned workstation, resource-constrained
Procedia PDF Downloads 210
3803 A Monopole Intravascular Antenna with Three Parasitic Elements Optimized for Higher Tesla MRI Systems
Authors: Mohammad Mohammadzadeh, Alireza Ghasempour
Abstract:
In this paper, a new monopole antenna design is proposed that increases the contrast of intravascular magnetic resonance images by increasing the homogeneity of the intrinsic signal-to-noise ratio (ISNR) distribution around the antenna. The antenna is made of a coaxial cable with three parasitic elements, whose lengths and positions are optimized by an improved genetic algorithm (IGA) for 1.5, 3, 4.7, and 7 Tesla MRI systems based on a defined cost function. Simulations were also conducted to verify the performance of the designed antenna. Our simulation results show that each execution of the IGA yields different values for the parasitic elements, all with high cost-function values, and that the IGA continues to find the best values for the parasitic elements (with respect to the cost function) in subsequent executions. Additionally, two-dimensional and one-dimensional maps of ISNR were drawn for the proposed antenna and compared to a previously published monopole antenna with one parasitic element at a frequency of 64 MHz inside a saline phantom. The results verified that, despite a decrease in ISNR, there is a considerable improvement in the homogeneity of the ISNR distribution of the proposed antenna, so that the product of the two increases.
Keywords: intravascular MR antenna, monopole antenna, parasitic elements, signal-to-noise ratio (SNR), genetic algorithm
Procedia PDF Downloads 301
3802 Gold-Mediated Modification of Apoferritin Surface with Targeting Antibodies
Authors: Simona Dostalova, Pavel Kopel, Marketa Vaculovicova, Vojtech Adam, Rene Kizek
Abstract:
Protein apoferritin seems to be a very promising structure for use as a nanocarrier. It is prepared from the intracellular protein ferritin, which is naturally found in most organisms, where its role is to store and transport ferrous ions. Apoferritin is a hollow protein cage without ferrous ions that can be prepared from ferritin by reduction with thioglycolic acid or dithionite. The structure of apoferritin is composed of 24 protein subunits, creating a sphere 12 nm in diameter with an inner cavity 8 nm in diameter. The drug encapsulation process is based on the response of the apoferritin structure to pH changes in the surrounding solution: in low pH, apoferritin is disassembled into individual subunits and its structure is "opened"; it can then be mixed with any desired cytotoxic drug, and after adjustment of the pH back to neutral, the subunits are reconnected and the drug is encapsulated within the apoferritin particles. Excess drug molecules can be removed by dialysis. The receptors for apoferritin, SCARA5 and TfR1, can be found in the membranes of both healthy and cancer cells. To enhance the specific targeting of the apoferritin nanocarrier, it is possible to modify its surface with targeting moieties, such as antibodies. To ensure a sterically correct complex, we used a peptide linker based on protein G with N-terminal affinity towards the Fc region of antibodies; to connect the peptide to the surface of apoferritin, the C-terminus of the peptide was made of cysteine, which has affinity to gold. The surface of apoferritin with encapsulated doxorubicin (ApoDox) was coated either with gold nanoparticles (ApoDox-Nano) or with gold (III) chloride hydrate reduced with sodium borohydride (ApoDox-HAu). The applied amount of gold in the form of gold (III) chloride hydrate was 10 times higher than in the case of gold nanoparticles; however, after removal of the excess unbound ions by electrophoretic separation, the concentration of gold on the surface of apoferritin was only 6 times higher for ApoDox-HAu than for ApoDox-Nano. Moreover, the reduction with sodium borohydride caused a loss of doxorubicin's fluorescent properties (excitation maximum at 480 nm with emission maximum at 600 nm) and thus its biological activity. The fluorescent properties of ApoDox-Nano were similar to those of unmodified ApoDox, so it was better suited for the intended use. To evaluate the specificity of apoferritin modified with antibodies, we used an ELISA-like method with the surface of microtitration plate wells coated with the antigen (goat anti-human IgG antibodies). To these wells, we applied ApoDox without targeting antibodies and ApoDox-Nano modified with targeting antibodies (human IgG antibodies). The amount of unmodified ApoDox on the antigen after incubation and subsequent rinsing with water was 5 times lower than that of ApoDox-Nano modified with targeting antibodies, while the modification of non-gold ApoDox with antibodies caused no change in its targeting properties. It can therefore be concluded that the demonstrated procedure allows us to create a nanocarrier with enhanced targeting properties, suitable for nanomedicine.
Keywords: apoferritin, doxorubicin, nanocarrier, targeting antibodies
Procedia PDF Downloads 389
3801 Graph Codes - 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval
Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje
Abstract:
Multimedia indexing and retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based algorithms for similarity are quite inefficient and computationally intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a graph projection into a 2D space (Graph Code) as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelization. In consequence, we prove that the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.
Keywords: indexing, retrieval, multimedia, graph algorithm, graph code
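One plausible construction in this spirit (the paper's exact encoding is not reproduced here) projects a feature graph onto a fixed 2D value matrix indexed by a shared feature vocabulary, so similarity becomes cheap elementwise matrix comparison instead of graph traversal. A sketch with an assumed toy vocabulary:

```python
import numpy as np

def graph_code(nodes, edges, vocabulary):
    """Encode a feature graph as a |V| x |V| matrix over a shared vocabulary:
    diagonal cells mark present features, off-diagonal cells mark relations."""
    idx = {term: i for i, term in enumerate(vocabulary)}
    M = np.zeros((len(vocabulary), len(vocabulary)), dtype=np.int8)
    for n in nodes:
        M[idx[n], idx[n]] = 1            # node/feature present
    for a, b in edges:
        M[idx[a], idx[b]] = 1            # relationship a -> b
    return M

def similarity(M1, M2):
    """Jaccard overlap over matrix cells; elementwise, hence parallelizable."""
    return np.sum(M1 & M2) / max(np.sum(M1 | M2), 1)

voc = ["person", "dog", "park", "holds", "leash"]
g1 = graph_code(["person", "dog", "park"], [("person", "dog")], voc)
g2 = graph_code(["person", "dog"], [("person", "dog")], voc)
print(similarity(g1, g2))
```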
Procedia PDF Downloads 164
3800 An Optimal Algorithm for Finding (R, Q) Policy in a Price-Dependent Order Quantity Inventory System with Soft Budget Constraint
Authors: S. Hamid Mirmohammadi, Shahrazad Tamjidzad
Abstract:
This paper is concerned with a single-item continuous review inventory system in which demand is stochastic and discrete. The budget consumed for purchasing the ordered items is not restricted, but extra cost is incurred when it exceeds a specific value. The unit purchasing price depends on the quantity ordered under an all-units discount cost structure. In many actual systems, the budget, as a resource occupied by the purchased items, is limited, and the system is able to confront the resource shortage by incurring extra costs. Thus, considering the resource shortage costs as a part of system costs, especially when the amount of resource occupied by the purchased items is influenced by quantity discounts, is well motivated by practical concerns. In this paper, an optimization problem is formulated for finding the optimal (R, Q) policy when the system is subject to a budget limitation and discount pricing simultaneously. Properties of the cost function are investigated, and an algorithm based on a one-dimensional search procedure is then proposed for finding an optimal (R, Q) policy that minimizes the expected system costs.
Keywords: (R, Q) policy, stochastic demand, backorders, limited resource, quantity discounts
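The shape of such a one-dimensional search can be illustrated as follows. The inner cost evaluation below is entirely hypothetical (an EOQ-like expression with one discount break and a soft-budget penalty); the paper instead derives the search from proven properties of the exact stochastic cost function:

```python
def optimal_policy(cost, q_max):
    """Scan the order quantity Q over one dimension; cost(Q) returns the
    expected system cost and the best reorder point R for that Q. All-units
    discounts make the cost piecewise, so every region is covered."""
    best_q = min(range(1, q_max + 1), key=lambda q: cost(q)[0])
    return best_q, cost(best_q)

def cost(q, demand=1000, setup=80.0, budget=400.0, penalty=0.5):
    unit = 4.0 if q >= 50 else 5.0                 # all-units discount break
    hold = 0.2 * unit * q / 2                      # average holding cost
    over = penalty * max(0.0, unit * q - budget)   # soft budget constraint
    return demand / q * setup + hold + demand * unit + over, max(10, q // 4)

print(optimal_policy(cost, q_max=200))  # -> (best Q, (expected cost, R))
```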
Procedia PDF Downloads 641
3799 Minimizing Learning Difficulties in Teaching Mathematics
Authors: Hari Sharan Pandit
Abstract:
Mathematics teaching in Nepal has been centralized and guided by the notion of transferring knowledge and skills from teachers to students. The overemphasis on an algorithm-centric approach to mathematics teaching and the focus on rote learning as the ultimate way of solving mathematical problems since the early years of schooling have been creating severe problems in school-level mathematics in Nepal. In this context, the author argues that students should learn real-world mathematical problems through various interesting, creative, and collaborative, as well as artistic and alternative, ways of knowing. Collaboration-incorporated pedagogy is a distinct pedagogical approach that offers a better alternative as an integrated and interdisciplinary approach to learning, encouraging students to think more broadly and critically about real-world problems. The paper, as a summarized report of action research designed, developed, and implemented by the author, focuses on the need for and usefulness of collaboration-incorporated pedagogy in the Nepali context to make mathematics teaching more meaningful for producing creative and critical citizens. The paper is useful for mathematics teachers, teacher educators, and researchers who argue for arts integration in mathematics teaching.
Keywords: algorithm-centric, rote-learning, collaboration-incorporated pedagogy, action research
Procedia PDF Downloads 16
3798 Measuring Delay Using Software Defined Networks: Limitations, Challenges, and Suggestions for Openflow
Authors: Ahmed Alutaibi, Ganti Sudhakar
Abstract:
Providing better Quality-of-Service (QoS) to end users has been a challenging problem for researchers and service providers. Building applications on best-effort network protocols hindered the adoption of guaranteed service parameters and, ultimately, of Quality of Service. The introduction of Software Defined Networking (SDN) opened the door for a paradigm shift towards more controlled, programmable, configurable network behavior, and OpenFlow has been and still is the main implementation of the SDN vision. To facilitate better QoS for applications, the network must calculate and measure certain parameters, one of which is the delay between the two ends of a connection. Using the power of SDN and knowledge of application and network behavior, SDN networks can adjust to different conditions and specifications. In this paper, we use the capabilities of SDN to implement multiple algorithms that measure end-to-end delay, not only inside the SDN network. The results of applying the algorithms in an emulated environment show that we can obtain measurements close to the emulated delay. The results also show that the load imposed on the network and controller differs by algorithm, and that the transport-layer handshake algorithm performs best among the tested algorithms. From the results and implementation, we identify the limitations of OpenFlow and develop suggestions to address them.
Keywords: software defined networking, quality of service, delay measurement, openflow, mininet
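A common arithmetic used for controller-driven delay measurement in OpenFlow networks (a generic formulation, not necessarily the paper's exact algorithms): the controller timestamps a probe injected at switch s1 and received back from switch s2, then subtracts half of each controller-to-switch echo round-trip time to isolate the s1-to-s2 path. A sketch of just that calculation:

```python
def link_delay(t_probe_roundtrip, rtt_ctrl_s1, rtt_ctrl_s2):
    """Estimate the s1 -> s2 path delay from a controller-observed probe time,
    removing the controller<->switch legs measured via echo requests."""
    return t_probe_roundtrip - (rtt_ctrl_s1 + rtt_ctrl_s2) / 2

# Example: 14 ms observed probe latency, 6 ms and 4 ms echo RTTs (hypothetical)
print(link_delay(0.014, 0.006, 0.004))  # ~0.009 s attributable to the data path
```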
Procedia PDF Downloads 166
3797 Comparison of Deep Brain Stimulation Targets in Parkinson's Disease: A Systematic Review
Authors: Hushyar Azari
Abstract:
Aim and background: Deep brain stimulation (DBS) is regarded as an important therapeutic choice for Parkinson's disease (PD). The two most common targets for DBS are the subthalamic nucleus (STN) and the globus pallidus (GPi). This review was conducted to compare the clinical effectiveness of these two targets. Methods: A systematic literature search in the electronic databases Embase, Cochrane Library, and PubMed was restricted to English-language publications from 2010 to 2021, with specified MeSH terms searched in all databases. Studies that evaluated the Unified Parkinson's Disease Rating Scale (UPDRS) III were selected if they met the following criteria: (1) compared both GPi and STN DBS; (2) had a follow-up period of at least three months; (3) had at least five participants in each group; (4) were conducted after 2010. Study quality assessment was performed using the Modified Jadad Scale. Results: 3577 potentially relevant articles were identified; of these, 3569 were excluded based on title and abstract, duplicate removal, and unsuitability. Eight articles covering 458 PD patients satisfied the inclusion criteria and were scrutinized. According to the Modified Jadad Scale, the majority of the included studies had low evidence quality, which is a limitation of this review. Five studies reported no statistically significant between-group difference in UPDRS III improvements, while some results concerning pain, action tremor, rigidity, and urinary symptoms indicated that STN DBS might be a better choice. Regarding adverse effects, GPi was superior. Conclusion: Larger randomized clinical trials with longer follow-up periods and control groups are clearly needed to decide which target is more efficient for deep brain stimulation in Parkinson's disease while imposing fewer adverse effects on patients. Meanwhile, STN seems the more reasonable choice according to the results of this systematic review.
Keywords: brain stimulation, globus pallidus, Parkinson's disease, subthalamic nucleus
Procedia PDF Downloads 180
3796 Intelligent Control of Doubly Fed Induction Generator Wind Turbine for Smart Grid
Authors: Amal A. Hassan, Faten H. Fahmy, Abd El-Shafy A. Nafeh, Hosam K. M. Youssef
Abstract:
Due to the growing penetration of wind energy into the power grid, it is very important to study its interactions with the power system and to provide good control techniques that deliver high-quality power. In this paper, an intelligent control methodology is proposed for optimizing the controller parameters of a doubly fed induction generator (DFIG) based wind turbine generation system (WTGS). The genetic algorithm (GA) and particle swarm optimization (PSO) are employed and compared for the adaptive tuning of the parameters of the proposed multiple proportional-integral (PI) controllers of the back-to-back converters of the DFIG-based WTGS. For this purpose, the dynamic model of the WTGS with the DFIG and its associated controllers is presented. Furthermore, the simulation of the system is performed using MATLAB/SIMULINK and the SIMPOWERSYSTEM toolbox to illustrate the performance of the optimized controllers. Finally, this work is validated on a 33-bus radial test system to show the interaction between wind distributed generation (DG) systems and the distribution network.
Keywords: DFIG wind turbine, intelligent control, distributed generation, particle swarm optimization, genetic algorithm
Procedia PDF Downloads 268