Search results for: modified simplex algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5895


3555 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms

Authors: Francisco M. Silva

Abstract:

Technology is evolving and creating an impact on our everyday lives and on the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits to using web-based methods to provide healthcare help; nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform through which users receive self-care help and information from wearable sensors, and it also gives researchers developing similar projects a solid foundation to reference. The study describes and analyses the software and hardware architecture, exhibits and explains a dynamic and efficient heart rate algorithm that continuously calculates the desired sensor values, and presents diagrams that illustrate the website deployment process and how the web server handles the sensor data. The goal is to create a working project using Arduino-compatible hardware. Heart rate sensors send their data values to an online platform: a microcontroller board uses an algorithm to calculate the heart rate values from the sensor and outputs them to a web server. The platform visualizes the sensor data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and communication between the hardware and software. The web server displays the conveyed heart rate sensor data on the online platform, presenting observations and evaluations.
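As a rough illustration of the kind of processing described above, the sketch below derives a beats-per-minute value from detected beat timestamps and pushes it to a web endpoint. It is written in Python for readability (the paper's device code runs on Arduino-compatible hardware), and the endpoint URL, sensor ID, and averaging window are assumptions, not details from the paper.

```python
import time
import requests  # host-side HTTP client standing in for the device firmware

API_URL = "http://example.com/api/heart-rate"  # placeholder endpoint, not from the paper

def bpm_from_beats(beat_times, window=5):
    """Estimate BPM from the last `window` inter-beat intervals (timestamps in seconds)."""
    if len(beat_times) < 2:
        return None
    intervals = [t2 - t1 for t1, t2 in zip(beat_times[:-1], beat_times[1:])]
    recent = intervals[-window:]
    return 60.0 / (sum(recent) / len(recent))

def push_reading(sensor_id, bpm):
    """Send the latest reading to the platform as JSON."""
    payload = {"sensor": sensor_id, "bpm": round(bpm, 1), "ts": time.time()}
    requests.post(API_URL, json=payload, timeout=5)

beats = [0.0, 0.82, 1.63, 2.45, 3.28, 4.10]   # detected beat timestamps (s)
print(bpm_from_beats(beats))                   # about 73 BPM for these intervals
```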

Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare

Procedia PDF Downloads 128
3554 Scheduling Method for Electric Heater in HEMS considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

Home Energy Management Systems (HEMS), which enable residential consumers to contribute to demand response, have been attracting attention in recent years. The aim of a HEMS is to minimize the electricity cost by controlling the use of appliances according to the electricity price. The use of appliances in a HEMS may be affected by conditions such as the external temperature and the electricity price. Therefore, the user's usage pattern for each appliance should be modeled according to the external conditions, and the resulting usage pattern is related to the user's comfort in using that appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, the usage range of each appliance can be obtained so that it satisfies an appropriate level of user comfort under the external conditions expected for the next day. Within this usage range, an optimal schedule for the appliances is then determined so as to minimize the electricity cost while considering the user's comfort. Among home appliances, the electric heater (EH) is a representative appliance affected by the external temperature. In this paper, an optimal scheduling algorithm for the EH is addressed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to the user's comfort levels, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with the traditional operation of the EH, and it also shows the impact of the comfort level on the scheduling result.
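To make the copula step concrete, the sketch below fits a Gaussian copula between outdoor temperature and EH usage hours from toy historical data and derives a usage band for a forecast temperature. This is only a minimal illustration of the idea; the data values, the choice of a Gaussian copula, and the 10th-90th percentile band are assumptions, not the paper's model.

```python
import numpy as np
from scipy.stats import norm, rankdata

# toy historical data: daily mean outdoor temperature (deg C) and EH usage (hours);
# the values below are made up for illustration only
temp  = np.array([-5., -3., -1., 0., 2., 4., 6., 8., 10., 12.])
usage = np.array([9.1, 9.4, 7.9, 8.3, 6.2, 6.6, 4.9, 4.3, 3.4, 2.8])

def to_normal_scores(x):
    u = rankdata(x) / (len(x) + 1.0)          # pseudo-observations in (0, 1)
    return norm.ppf(u)

z_t, z_u = to_normal_scores(temp), to_normal_scores(usage)
rho = np.corrcoef(z_t, z_u)[0, 1]             # Gaussian-copula dependence parameter

def usage_range(forecast_temp, lo=0.1, hi=0.9):
    """Conditional usage band for tomorrow's forecast temperature."""
    u_t = (np.sum(temp <= forecast_temp) + 0.5) / (len(temp) + 1.0)  # empirical CDF
    z = norm.ppf(u_t)
    cond = norm(loc=rho * z, scale=np.sqrt(1.0 - rho ** 2))          # Z_usage | Z_temp
    q = norm.cdf(cond.ppf([lo, hi]))                                 # back to uniforms
    return np.quantile(usage, q)                                     # back to hours

print(usage_range(-2.0))   # roughly a 7-9 hour usage band for a cold day
```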

Keywords: load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater

Procedia PDF Downloads 587
3553 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power outputs for all generating units in the power system network such that the total fuel cost is minimized while the operating requirements and limits remain satisfied across all dispatch phases. Many optimization techniques have been proposed to solve this problem. A well-known one is Quadratic Programming (QP). QP is a very simple and fast method, but, like other gradient-based methods, it can become trapped in local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have also been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified on three- and six-generator test cases. The obtained results are compared to existing results based on QP, GAs, and PSO. The results show that differential evolution is superior in obtaining a combination of power outputs that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal generation loads and to handle the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of the generated power, minimize the total power loss in transmission, and maximize the reliability of the power provided to the customers.
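A minimal sketch of the idea follows: differential evolution (here, SciPy's implementation rather than a hand-rolled one) minimizes a quadratic fuel-cost function for a three-generator case, with the power-balance constraint handled by a penalty term. The cost coefficients, demand, and generator limits are made-up illustrative values, not the paper's test-case data.

```python
import numpy as np
from scipy.optimize import differential_evolution

# illustrative 3-generator data (cost = a + b*P + c*P^2); coefficients are invented
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
demand = 800.0                                 # MW to be supplied
bounds = [(100, 450), (100, 350), (50, 225)]   # generator limits in MW

def total_cost(P):
    fuel = np.sum(a + b * P + c * P ** 2)
    penalty = 1e4 * abs(np.sum(P) - demand)    # enforce the power-balance constraint
    return fuel + penalty

result = differential_evolution(total_cost, bounds, seed=1, tol=1e-8)
print(result.x, result.x.sum(), result.fun)    # dispatch, total generation, total cost
```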

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 283
3552 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and a shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise, so least cost design of storm water networks assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task as it involves the design of various components, such as open or closed conduits, storage units, and pumps. In this paper, a methodology for least cost design of storm water drainage systems is proposed. The methodology consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's storm water management model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost effective design of open conduit based storm water networks.
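The coupling can be pictured with the minimal sketch below: a simple genetic-algorithm loop searches over conduit cross-sectional areas, and a stub function stands in for the hydraulic simulation (the study itself drives EPA SWMM). The unit cost, the "required area" values inside the stub, and the GA settings are all invented for illustration.

```python
import random

# A minimal sketch of the simulation-optimization loop.  The real study drives
# EPA SWMM for the hydraulics; run_hydraulic_simulation() below is a stand-in stub.
N_CONDUITS, POP, GENS = 5, 30, 40
AREA_MIN, AREA_MAX = 0.2, 3.0             # candidate cross-sectional areas (m^2)
UNIT_COST = 120.0                          # assumed cost per m^2 of section per conduit

def run_hydraulic_simulation(areas):
    """Stub: return total flooding volume; replace with a call to SWMM."""
    required = [1.8, 1.2, 2.4, 0.9, 1.5]   # hypothetical areas needed to convey runoff
    return sum(max(0.0, r - a) for a, r in zip(areas, required))

def fitness(areas):
    cost = UNIT_COST * sum(areas)
    flooding = run_hydraulic_simulation(areas)
    return cost + 1e4 * flooding           # penalise designs that flood

def mutate(ind):
    return [min(AREA_MAX, max(AREA_MIN, a + random.gauss(0, 0.1))) for a in ind]

pop = [[random.uniform(AREA_MIN, AREA_MAX) for _ in range(N_CONDUITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    parents = pop[:POP // 2]               # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(POP - len(parents))]

best = min(pop, key=fitness)
print([round(a, 2) for a in best], round(fitness(best), 1))
```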

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 250
3551 Polysaccharide Polyelectrolyte Complexation: An Engineering Strategy for the Development of Commercially Viable Sustainable Materials

Authors: Jeffrey M. Catchmark, Parisa Nazema, Caini Chen, Wei-Shu Lin

Abstract:

Sustainable and environmentally compatible materials are needed for a wide variety of high-volume commercial applications. Current synthetic materials such as plastics, fluorochemicals (such as PFAS), adhesives, and resins in the form of sheets, laminates, coatings, foams, fibers, molded parts, and composites are used for countless products, including packaging, food handling, textiles, biomedical, construction, automotive, and general consumer devices. Synthetic materials offer distinct performance advantages, including stability, durability, and low cost. These attributes are associated with the physical and chemical properties of these materials, which, once formed, can be resistant to water, oils, solvents, harsh chemicals, salt, temperature, impact, wear, and microbial degradation. These advantages become disadvantages at the end of life of these products, which generate significant land and water pollution when disposed of, and few are recycled. Agriculturally and biologically derived polymers offer the potential of remediating these environmental and life-cycle difficulties but face numerous challenges, including feedstock supply, scalability, performance, and cost. Such polymers include microbial biopolymers like polyhydroxyalkanoates and polyhydroxybutyrate; polymers produced using biomonomer chemical synthesis like polylactic acid; proteins like soy, collagen, and casein; lipids like waxes; and polysaccharides like cellulose and starch. Although these materials, and combinations thereof, exhibit the potential for meeting some of the performance needs of various commercial applications, only cellulose and starch have both the production feedstock volume and cost to compete with petroleum-derived materials. Over 430 million tons of plastic are produced each year, and plastics like low density polyethylene cost ~$1500 to $1800 per ton. Over 400 million tons of cellulose and over 100 million tons of starch are produced each year at a volume cost as low as ~$500 to $1000 per ton, with the capability of increased production. Celluloses and starches, however, are hygroscopic materials that do not exhibit the needed performance in most applications. Celluloses and starches can be chemically modified to contain positive and negative surface charges, and such modified versions are used in papermaking, foods, and cosmetics. Although these modified polysaccharides exhibit the same performance limitations, recent research has shown that composite materials composed of cationic and anionic polysaccharides in polyelectrolyte complexation exhibit significantly improved performance, including stability in diverse environments. Moreover, starches with added plasticizers can exhibit thermoplasticity, presenting the possibility of improved thermoplastic starches when comprised of starches in polyelectrolyte complexation. In this work, the potential for numerous volume commercial products based on polysaccharide polyelectrolyte complexes (PPCs) will be discussed, including the engineering design strategy used to develop them. Research results will be detailed, including the development and demonstration of starch PPC compositions for paper coatings to replace PFAS; adhesives; foams for packaging, insulation, and biomedical applications; and thermoplastic starches. In addition, efforts to demonstrate the potential for volume manufacturing with industrial partners will be discussed.

Keywords: biomaterials engineering, commercial materials, polysaccharides, sustainable materials

Procedia PDF Downloads 19
3550 Impact of Surface Roughness on Light Absorption

Authors: V. Gareyan, Zh. Gevorkian

Abstract:

We study oblique-incidence light absorption in opaque media with rough surfaces. An analytical approach with modified boundary conditions that take the surface roughness of metallic or dielectric films into account is discussed. Our approach reveals interference-linked terms that modify the dependence of absorption on different characteristics. We discuss the limits within which our approach remains valid, from the visible to the microwave region. Polarization and angular dependences of roughness-induced absorption are revealed. The existence of an incident angle or a wavelength for which the absorptance of a rough surface becomes equal to that of a flat surface is predicted. Based on this phenomenon, a method of determining the roughness correlation length is suggested.

Keywords: light, absorption, surface, roughness

Procedia PDF Downloads 55
3549 Modification of Li-Rich Layered Li1.2Mn0.54Ni0.13Co0.13O2 Cathode Material

Authors: Liu Li, Kim Seng Lee, Li Lu

Abstract:

The high-energy-density Li-rich layered materials are promising cathode materials for the next-generation high-performance lithium-ion batteries. The relatively low rate capability is one of the major problems that limit their practical application. In this work, Li-rich layered Li1.2Mn0.54Ni0.13Co0.13O2 cathode material synthesized by coprecipitation method is further modified by F doping or surface treatment to enhance its cycling stability as well as rate capability.

Keywords: Li-ion battery, Li-rich layered cathode material, phase transformation, cycling stability, rate capability

Procedia PDF Downloads 358
3548 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants, that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, that is, P(Y = 2|X) significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We used fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some similar properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
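The rule-confidence idea can be illustrated with the brute-force sketch below, which enumerates all genotype patterns over variant pairs, computes the confidence P(Y = 2 | X), and attaches a permutation p-value. It scans pairs directly rather than using fpgrowth, and runs on randomly generated toy genotypes, so the data and thresholds are placeholders rather than the study's dataset.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# toy data: genotypes coded 0/1/2 for 6 variants in 40 cases and 40 controls
n_cases, n_controls, n_variants = 40, 40, 6
G = rng.integers(0, 3, size=(n_cases + n_controls, n_variants))
y = np.array([2] * n_cases + [1] * n_controls)   # 2 = case, 1 = control

def best_digenic_pattern(G, y, min_support=5):
    """Scan all variant pairs and genotype patterns; return the rule with top confidence."""
    best = None
    for i, j in itertools.combinations(range(G.shape[1]), 2):
        for gi, gj in itertools.product(range(3), repeat=2):
            hit = (G[:, i] == gi) & (G[:, j] == gj)
            if hit.sum() < min_support:
                continue
            conf = np.mean(y[hit] == 2)           # P(case | pattern)
            if best is None or conf > best[0]:
                best = (conf, int(hit.sum()), (i, gi), (j, gj))
    return best

obs_conf, support, v1, v2 = best_digenic_pattern(G, y)

# permutation test: how often does a shuffled case/control label give a confidence this high?
perm = [best_digenic_pattern(G, rng.permutation(y))[0] for _ in range(200)]
p_value = np.mean(np.array(perm) >= obs_conf)
print(obs_conf, support, v1, v2, p_value)
```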

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 126
3547 Development of Biosensor Chip for Detection of Specific Antibodies to HSV-1

Authors: Zatovska T. V., Nesterova N. V., Baranova G. V., Zagorodnya S. D.

Abstract:

In recent years, biosensor technologies based on the phenomenon of surface plasmon resonance (SPR) have become increasingly used in biology and medicine. Their application makes it possible to follow in real time the progress of binding between biomolecules and to identify agents that specifically interact with biologically active substances immobilized on the biosensor surface (biochips). Special attention is paid to the use of biosensor analysis for determining antibody-antigen interactions in the diagnostics of diseases caused by viruses and bacteria. According to the WHO, the diseases caused by the herpes simplex virus (HSV) take second place (15.8%) after influenza as a cause of death from viral infections. Current diagnostics of HSV infection include PCR and ELISA assays. The latter allows determination of the degree of the immune response to viral infection and the respective stages of its progress. In this regard, the search for new and available diagnostic methods is very important. This work aimed to develop a biosensor chip for detection of specific antibodies to HSV-1 in human blood serum. The proteins of HSV-1 (strain US) were used as antigens. The viral particles were accumulated in MDBK cell culture and purified by differential centrifugation in a cesium chloride density gradient. Analysis of the HSV-1 proteins was performed by polyacrylamide gel electrophoresis and ELISA. The protein concentration was measured using a DeNovix DS-11 spectrophotometer. The device for detection of antigen-antibody interactions was an optoelectronic two-channel spectrometer, ‘Plasmon-6’, using the SPR phenomenon in the Kretschmann optical configuration; it was developed at the Lashkarev Institute of Semiconductor Physics of NASU. The carrier used was a glass plate covered with a 45 nm gold film. Screening of human blood sera was performed using the test system ‘HSV-1 IgG ELISA’ (GenWay, USA). Development of the biosensor chip included optimization of the conditions of viral antigen sorption and of the analysis steps. For immobilization of the viral proteins, a 0.2% solution of Dextran 17,200 (Sigma, USA) was used. Sorption of the antigen took place at 4-8°C within 18-24 hours. After washing the chip three times with citrate buffer (pH 5.0), a 1% solution of BSA was applied to block the sites not occupied by viral antigen. A direct dependence was found between the amount of immobilized HSV-1 antigen and the SPR response. Using the obtained biochips, panels of 25 human sera positive and 10 negative for antibodies to HSV-1 were analyzed. The average value of the SPR response was 185 a.s. for negative sera and from 312 to 1264 a.s. for positive sera. The SPR data agreed with the ELISA results in 96% of samples, proving the great potential of SPR in such research. The possibility of biochip regeneration was investigated, and it was shown that application of a 10 mM NaOH solution leads to rupture of the intermolecular bonds, which allows the chip to be reused several times. Thus, in this study a biosensor chip for detection of specific antibodies to HSV-1 was successfully developed, expanding the range of diagnostic methods for this pathogen.

Keywords: biochip, herpes virus, SPR

Procedia PDF Downloads 417
3546 An Enhanced Support Vector Machine Based Approach for Sentiment Classification of Arabic Tweets of Different Dialects

Authors: Gehad S. Kaseb, Mona F. Ahmed

Abstract:

Arabic Sentiment Analysis (SA) is one of the most common research fields, with many open areas, yet few studies apply SA to Arabic dialects. This paper proposes different pre-processing steps and a modified methodology to improve the accuracy of standard Support Vector Machine (SVM) classification. The paper works on two datasets, the Arabic Sentiment Tweets Dataset (ASTD) and the Extended Arabic Tweets Sentiment Dataset (Extended-AATSD), which are publicly available for academic use. The results show that the classification accuracy approaches 86%.
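For orientation, here is a minimal SVM text-classification sketch in the spirit of the pipeline described above. It uses scikit-learn with character n-gram TF-IDF features and a linear SVM on a few placeholder Arabic phrases; the paper's dialect-specific pre-processing steps and the ASTD/Extended-AATSD data are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# placeholder tweets standing in for the ASTD / Extended-AATSD data
tweets = ["الخدمة ممتازة", "تجربة سيئة جدا", "منتج رائع", "لا انصح به"]
labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams help with dialect variation
    LinearSVC(C=1.0),
)
model.fit(tweets, labels)
print(model.predict(["خدمة رائعة"]))   # likely ['pos'], given the overlap with the positive examples
```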

Keywords: Arabic, classification, sentiment analysis, tweets

Procedia PDF Downloads 150
3545 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine the facilities' locations, the consumers' allocation, and the facilities' configuration so as to minimize the total cost (CT) of the entire network. These facilities can be manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to them. To address this problem, three major tasks should be undertaken. First, a mixed integer non-linear programming (MINP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimum solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimum solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of the genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 319
3544 Introduction to Multi-Agent Deep Deterministic Policy Gradient

Authors: Xu Jie

Abstract:

As a key network security method, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with ever-changing cyber threats and environments. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize the task energy consumption, the migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem, and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay, and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.

Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents

Procedia PDF Downloads 26
3543 A Survey of Grammar-Based Genetic Programming and Applications

Authors: Matthew T. Wilson

Abstract:

This paper covers a selection of research utilizing grammar-based genetic programming and illustrates how a context-free grammar can be used to constrain genetic programming. It focuses heavily on grammatical evolution, one of the most popular variants of grammar-based genetic programming, and the way its operators and terminals are specialized and modified from those in genetic programming. A variety of implementations of grammatical evolution for general use are covered, as well as research efforts each focused on applying grammatical evolution or grammar-based genetic programming to a single application or to a specific problem, including some of the classically considered genetic programming problems, such as the Santa Fe Trail.
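The core mechanism, mapping a linear genotype to a program through a grammar, can be sketched briefly. The toy mapper below expands the leftmost non-terminal of a small context-free grammar, choosing each production with (codon mod number-of-choices) and wrapping the codon string when it runs out, in the style of grammatical evolution. The grammar and codon values are invented for illustration.

```python
# Minimal genotype-to-phenotype mapping in the style of grammatical evolution:
# each codon selects a production rule via (codon mod number-of-choices).
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["1.0"]],
}

def map_genotype(genotype, start="<expr>", max_wraps=2):
    symbols, i, wraps = [start], 0, 0
    out = []
    while symbols:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:              # terminal symbol
            out.append(sym)
            continue
        if i >= len(genotype):              # wrap the codon string if we run out
            if wraps >= max_wraps:
                return None                 # invalid individual
            i, wraps = 0, wraps + 1
        choices = GRAMMAR[sym]
        choice = choices[genotype[i] % len(choices)]
        i += 1
        symbols = choice + symbols          # expand the leftmost non-terminal first
    return " ".join(out)

print(map_genotype([2, 5, 0, 2, 3, 1]))     # -> 'x * 1.0'
```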

Keywords: context-free grammar, genetic algorithms, genetic programming, grammatical evolution

Procedia PDF Downloads 190
3542 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one AI-ECG screen for AF pre-transplant above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084, on account of 20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08, the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
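To keep the screening metrics concrete, the sketch below computes AUROC, sensitivity, and negative predictive value from a probability-style score thresholded at 0.08, as described above. It runs on synthetic scores and outcomes generated on the fly, not on the study's patients, so the printed numbers will not match the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# synthetic stand-in data: AI-ECG-style probabilities for 1000 patients,
# about 6% of whom develop POAF (not the paper's patients or scores)
y = rng.binomial(1, 0.06, size=1000)
score = np.clip(0.05 + 0.10 * y + rng.normal(0, 0.05, size=1000), 0, 1)

print("AUROC:", round(roc_auc_score(y, score), 3))

threshold = 0.08                    # screening threshold quoted in the abstract
pred = (score >= threshold).astype(int)
tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("sensitivity:", round(tp / (tp + fn), 3))
print("NPV:", round(tn / (tn + fn), 3))   # fraction of negative screens that stay AF-free
```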

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 137
3541 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals still have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distances from each data point in a cluster to all other points within the same cluster and to all data points in the closest cluster are determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used for assessing the cluster quality of the k-means clustering algorithm and for comparing the performance on each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there is no significant difference in authentication performance among feature sets (except feature PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
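The clustering-quality step can be sketched with scikit-learn as below: k-means groups per-epoch entropy feature vectors, the silhouette score measures how well separated the subjects' clusters are, and a best-permutation match gives a crude authentication accuracy. The feature values here are randomly generated stand-ins, not entropy features computed from recorded EEG.

```python
import itertools
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)

# stand-in feature matrix: 3 subjects x 20 EEG epochs, 4 entropy features per epoch
# (SE, FE, AE, PE); real features would be computed from the recorded signals
subjects = [rng.normal(loc=c, scale=0.3, size=(20, 4)) for c in (0.0, 1.5, 3.0)]
X = np.vstack(subjects)
subject_id = np.repeat([0, 1, 2], 20)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("silhouette:", round(silhouette_score(X, kmeans.labels_), 3))  # near 1 = well separated

# map cluster labels to subjects (best permutation) to get an authentication accuracy
acc = max(np.mean(np.array(p)[kmeans.labels_] == subject_id)
          for p in itertools.permutations(range(3)))
print("accuracy:", round(acc, 3))
```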

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 286
3540 Synthesis and Characterization of PVDF, FG, PTFE, and PES Membrane Distillation Modified with Silver Nanoparticles

Authors: Lopez J., Mehrvar M., Quinones E., Suarez A., Romero C.

Abstract:

Silver nanoparticles (AgNP) are used to deliver heat to the surface of membrane distillation membranes in order to counteract thermal polarization and improve the desalination process. In this study, AgNP were deposited by a dip-coating process on commercial PVDF, hydrophilic FG, and hydrophobic PTFE membranes as substrates. The membranes were characterized by SEM, EDS, contact angle, and pore size distribution, and the heat delivery performance was measured using a UV lamp and a thermal camera. The presence of 50-150 nm AgNP and the increase in energy absorption over the membrane were verified.

Keywords: silver nanoparticles, membrane distillation, plasmon effect, heat delivery

Procedia PDF Downloads 129
3539 Defect Modes in Multilayered Piezoelectric Structures

Authors: D. G. Piliposyan

Abstract:

Propagation of electro-elastic waves in a piezoelectric waveguide with finite stacks and a defect layer is studied using a modified transfer matrix method. The dispersion equation for a periodic structure consisting of unit cells made up of two piezoelectric materials with metallized interfaces is obtained. An analytical expression for the transmission coefficient of a waveguide with finite stacks and a defect layer is found and can be used to accurately detect and control the position of the passband within a stopband. The result can be instrumental in constructing a tunable waveguide made of layers of different or identical piezoelectric crystals separated by metallized interfaces.

Keywords: piezoelectric layered structure, periodic phononic crystal, bandgap, bloch waves

Procedia PDF Downloads 226
3538 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). Extracting the magnetocardiography (MCG) signal, which is buried in noise, is difficult and is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a step (staircase) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is mainly divided into three parts. First, high-order TV is applied to reduce the step effect, and the corresponding second-derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions that are detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve the signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
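A minimal sketch of the baseline majorization-minimization iteration for TV denoising is given below, with an `order` switch so the first-difference operator can be swapped for the second-difference (high-order TV) operator mentioned above. The adaptive, peak-aware constraint parameters that constitute the paper's main improvement are not implemented; the test signal and the regularization weight are arbitrary.

```python
import numpy as np

def difference_matrix(n, order=1):
    """First- or second-order difference operator as a dense matrix (small n only)."""
    D = np.eye(n)
    for _ in range(order):
        D = np.diff(D, axis=0)
    return D

def tv_denoise_mm(y, lam=2.0, order=1, n_iter=50, eps=1e-8):
    """Majorization-minimization for 0.5*||y-x||^2 + lam*||D x||_1."""
    D = difference_matrix(len(y), order)
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)             # majorizer weights 1/|Dx_k|
        A = np.eye(len(y)) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)                    # closed-form minimizer of the majorizer
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
noisy = clean + 0.15 * rng.standard_normal(clean.size)
print(np.round(np.abs(tv_denoise_mm(noisy) - clean).mean(), 4))   # residual error vs. the clean signal
```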

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 154
3537 A Randomised Controlled Study to Compare Efficacy and Safety of Bupivacaine plus Dexamethasone Versus Bupivacaine plus Fentanyl for Caudal Block in Children

Authors: Ashwini Patil

Abstract:

A caudal block is one of the most commonly used regional anesthetic techniques in children. Currently, fentanyl is used as an adjuvant to bupivacaine to prolong analgesia, but fentanyl is a narcotic. Dexamethasone, a glucocorticoid with strong anti-inflammatory effects, improves post-operative analgesia and reduces post-operative side effects. However, its analgesic efficacy and safety in comparison with fentanyl have not been extensively studied. The objective of this randomized controlled study is therefore to compare dexamethasone with fentanyl as an adjuvant to bupivacaine for caudal block in children in relation to the duration of caudal analgesia, the post-operative analgesic requirement, and the incidence of post-operative nausea and vomiting. This study included 100 children, aged 1-6 years, undergoing lower abdominal surgeries. Patients were randomized into two groups of 50 each to receive a combination of dexamethasone 0.2 mg/kg with 1 ml/kg bupivacaine 0.25% (group A) or a combination of fentanyl 1 µg/kg with 1 ml/kg bupivacaine 0.25% (group B). In the post-operative period, pain was assessed using a Modified Objective Pain Scale (MOPS) until 12 hours after surgery, and rescue analgesia was administered when a MOPS score of 4 or more was recorded. Residual motor block, the number of analgesic doses required within 24 hours after surgery, sedation scores, intra-operative and post-operative hemodynamic variables, post-operative nausea and vomiting (PONV), and other adverse effects were recorded. Data were analysed using the unpaired t-test, and P < 0.05 was considered statistically significant. Group A showed a significantly longer time to first analgesic requirement than group B (p<0.05). The number of rescue analgesic doses required in the first 24 hours was significantly less in group A (p<0.05). Group A showed significantly lower MOPS scores than group B (p<0.05). Intra-operative and post-operative hemodynamic variables, Modified Bromage Scale scores, and sedation scores were comparable in both groups. Group A showed significantly fewer incidences of PONV compared with group B (p<0.05). This study reveals that adding dexamethasone to bupivacaine prolongs the duration of postoperative analgesia and decreases the incidence of PONV compared to the combination of fentanyl with bupivacaine after a caudal block in pediatric patients.

Keywords: bupivacaine, caudal analgesia, dexamethasone, pediatric

Procedia PDF Downloads 209
3536 Dust Ion Acoustic Shock Waves in Dissipative Superthermal Plasmas

Authors: Hamid Reza Pakzad

Abstract:

In this paper, the properties of dust-ion-acoustic (DIA) shock waves in an unmagnetized dusty plasma, whose constituents are inertial ions, superthermal electrons, and stationary dust particles, are investigated by employing the reductive perturbation method. Dissipation is taken into account through the kinematic viscosity among the plasma constituents. It is shown that the basic features of DIA shock waves are significantly modified by the effects of electron superthermality and ion kinematic viscosity.

Keywords: reductive perturbation method, dust ion acoustic shock wave, superthermal electron, dissipative plasmas

Procedia PDF Downloads 314
3535 Design of Agricultural Machinery Factory Facility Layout

Authors: Nilda Tri Putri, Muhammad Taufik

Abstract:

Agricultural tools and machinery (Alsintan) are equipment used in agribusiness activities. Alsintan is used to transform traditional farming systems, which generally rely on manual equipment, into modern mechanized agriculture. In 2012, CV Nugraha Chakti Consultant prepared an action plan for the development of the Alsintan industry in West Sumatra, aiming to develop the medium-scale Alsintan industries into a major Alsintan industry; one of the efforts made is to increase the production capacity of the Alsintan industry. The production capacity for the flagship products, hydrotillers and threshers, is set at 2,000 units per year each. CV Citra Dragon, one of the medium-scale Alsintan industries in West Sumatra, plans to relocate its existing plant to meet consumer demand that grows each year. The increased production capacity and the plant relocation plan lead to a change in the layout; therefore, the facility layout of the CV Citra Dragon plant needs to be designed. The first step in designing the plant layout is to design the layout of the production floor. The production floor layout is designed by applying a group technology layout. The initial step is machine grouping and part family formation using Average Linkage Clustering (ALC) and Rank Order Clustering (ROC). Independent workstation design and layout design are then carried out using the Modified Spanning Tree (MST) method. Alternative selection is performed to choose the better production floor layout between the ALC and ROC cell groupings. The layouts of warehouses, offices, and other production support facilities are then designed, and the Activity Relationship Chart (ARC) method is used to organize the placement of the designed factory facilities. After the facility plan is structured, the cost of establishing the manufacturing plant is calculated. The layout type used on the production floor is a group technology layout. The production floor is composed of four machine cells, an assembly area, and a painting area. The total material travel distance for a single production run is 1120.16 m, which corresponds to 18.7 minutes of transportation time per run. The Alsintan factory is designed with a circular flow pattern comprising 11 facilities, consisting of 10 rooms and 1 parking space. The factory building measures 84 m x 52 m.
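As one concrete piece of the cell-formation step, the sketch below applies Rank Order Clustering to a small machine-part incidence matrix: rows and columns are repeatedly re-sorted by their binary weights until block-diagonal machine cells emerge. The incidence matrix is a toy example, not the CV Citra Dragon data, and the machine/part labels are omitted for brevity.

```python
import numpy as np

# toy machine-part incidence matrix (1 = part visits machine); illustrative values only
A = np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
], dtype=int)

def rank_order_clustering(M, max_iter=10):
    """Rank Order Clustering: repeatedly sort rows, then columns, by binary weight."""
    M = M.copy()
    for _ in range(max_iter):
        before = M.copy()
        # rows: read each row as a binary number (leftmost column = most significant bit)
        row_keys = M @ (2 ** np.arange(M.shape[1])[::-1])
        M = M[np.argsort(-row_keys, kind="stable")]
        # columns: same idea, with the topmost row as the most significant bit
        col_keys = (2 ** np.arange(M.shape[0])[::-1]) @ M
        M = M[:, np.argsort(-col_keys, kind="stable")]
        if np.array_equal(M, before):
            break
    return M

print(rank_order_clustering(A))   # block-diagonal structure reveals the machine cells
```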

Keywords: Average Linkage Clustering (ALC), Rank Order Clustering (ROC), Modified Spanning Tree (MST), Activity Relationship Chart (ARC)

Procedia PDF Downloads 497
3534 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units

Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov

Abstract:

Insulating glass units (IGU) are widely used in modern and renovated buildings in order to reduce the energy needed for heating and cooling. Rules for the choice of IGU to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads - gauge or vacuum pressure in the hermetically sealed gas space - requires additional attention in the design of the facades. The internal loads appear at variations of the altitude, meteorological pressure, and gas temperature with respect to their values at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and orientation in space, and its fixing on the facades, and it varies with the climate conditions. An algorithm for modeling and numerical simulation of the thermal fields and the internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of the radiation heat transfer in the solar and infrared wavelengths, the indoor and outdoor convection heat transfer, and the free convection in the hermetically sealed gas space, assuming the gas to be compressible. The algorithm allows prediction of the temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparison of the numerical results with experimental data obtained by hot-box testing. Numerical calculations and estimation of the 3D temperature and fluid flow fields, the thermal performance, and the internal loads of IGUs in a window system are implemented.

Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis

Procedia PDF Downloads 275
3533 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, the signal waveform, the signal directions, the number of signals, and the signal-to-noise ratio (SNR), and the methods of DoA estimation therefore rely heavily on generalization over a large number of training data sets. Hence, we present two different optimization models of DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and the method may therefore fail to provide high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Finally, we evaluate the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).

Keywords: DoA estimation, Adaptive antenna array, Deep Neural Network, LS-SVM optimization model, Radial basis function, and MSE

Procedia PDF Downloads 101
3532 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

The problem of finding optimal schedules for each piece of equipment in a production process is considered; the process consists of a single manufacturing stage that can handle different types of products, where changing over from handling one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups; hence, to minimize the costs associated with product changeovers, the planning should be such that similar types of products are processed successively, so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of set-ups, or equivalently maximizing the capacity utilization between every set-up, or maximizing the total capacity utilization. Further, production is usually planned against customers’ orders, and generally different customers’ orders are assigned one of two priorities, “normal” or “priority”. The problem of production planning in such a situation can be formulated as a Multiple Arc Network (MAN) model and can be solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled, keeping in view the customer priorities. Algorithms are presented for solving the MAN formulation of production planning with customer priorities. The application of the model is demonstrated through numerical examples.
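The flow-maximization idea can be illustrated with a generic max-flow sketch using networkx, where set-up capacities feed arcs toward customer orders and the maximum flow gives the quantity that can be scheduled. This is only a simplified stand-in: the node structure, capacities, and the handling of priority arcs in the paper's MAN formulation are not reproduced here.

```python
import networkx as nx

# Generic max-flow sketch in the spirit of the MAN formulation (illustrative numbers only).
G = nx.DiGraph()
# source -> set-up nodes: capacity = quantity the machine can process per set-up
G.add_edge("source", "setup_A", capacity=100)
G.add_edge("source", "setup_B", capacity=80)
# set-up nodes -> customer orders of the compatible product type
G.add_edge("setup_A", "order_1", capacity=60)   # e.g. a priority order for product A
G.add_edge("setup_A", "order_2", capacity=50)
G.add_edge("setup_B", "order_3", capacity=70)
# orders -> sink: capacity = ordered quantity
G.add_edge("order_1", "sink", capacity=60)
G.add_edge("order_2", "sink", capacity=30)
G.add_edge("order_3", "sink", capacity=70)

flow_value, flow = nx.maximum_flow(G, "source", "sink")
print(flow_value)            # total quantity scheduled across set-ups
print(flow["setup_A"])       # allocation of set-up A's capacity to orders
```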

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 403
3531 Resource Allocation and Task Scheduling with Skill Level and Time Bound Constraints

Authors: Salam Saudagar, Ankit Kamboj, Niraj Mohan, Satgounda Patil, Nilesh Powar

Abstract:

Task assignment and scheduling is a challenging operations research problem when there is a limited number of resources and a comparatively higher number of tasks. The Cost Management team at Cummins needs to assign tasks based on a deadline and must prioritize some of the tasks as per business requirements. Moreover, there is a constraint on the resources: the assignment of tasks should be based on individual skill levels, which may vary for different tasks. Another constraint is that the scheduled tasks should be evenly distributed in terms of the number of working hours, which adds further complexity to this problem. The proposed greedy approach to solving the assignment and scheduling problem first assigns tasks based on management priority and then by the closest deadline. This is followed by an iterative selection of an available resource with the least allocated total working hours for a task, i.e., finding the locally optimal choice for each task with the goal of determining the global optimum. The greedy task allocation is compared with a variant of the Hungarian algorithm, and it is observed that the proposed approach gives an equal allocation of working hours among the resources. A comparative study of the proposed approach is also done with manual task allocation, and it is noted that the visibility of the task timeline has increased from 2 months to 6 months. An interactive dashboard app is created for the greedy assignment and scheduling approach, and the tasks with a horizon of more than 2 months that were initially waiting in a queue without a delivery date are now analyzed effectively by the business, with expected timelines for completion.
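A minimal sketch of the greedy rule described above follows: tasks are sorted by priority and deadline, and each task goes to the least-loaded resource whose skill level is sufficient. The task and resource records are invented for illustration and do not come from the Cummins data.

```python
from datetime import date

# Minimal sketch of the greedy assignment rule; all task/resource data are invented.
tasks = [
    {"id": "T1", "priority": 1, "deadline": date(2024, 3, 1),  "hours": 12, "skill": 3},
    {"id": "T2", "priority": 2, "deadline": date(2024, 2, 20), "hours": 8,  "skill": 2},
    {"id": "T3", "priority": 1, "deadline": date(2024, 2, 25), "hours": 10, "skill": 1},
]
resources = [
    {"id": "R1", "skill": 3, "allocated": 0},
    {"id": "R2", "skill": 2, "allocated": 0},
]

# 1) order tasks by management priority, then by the closest deadline
tasks.sort(key=lambda t: (t["priority"], t["deadline"]))

assignment = {}
for task in tasks:
    # 2) among resources skilled enough for the task, pick the one with the
    #    fewest allocated hours so far (local optimum -> balanced workload)
    eligible = [r for r in resources if r["skill"] >= task["skill"]]
    chosen = min(eligible, key=lambda r: r["allocated"])
    chosen["allocated"] += task["hours"]
    assignment[task["id"]] = chosen["id"]

print(assignment)                      # {'T3': 'R1', 'T1': 'R1', 'T2': 'R2'}
print({r["id"]: r["allocated"] for r in resources})
```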

Keywords: assignment, deadline, greedy approach, Hungarian algorithm, operations research, scheduling

Procedia PDF Downloads 149
3530 An Analytical Approach of Computational Complexity for the Method of Multifluid Modelling

Authors: A. K. Borah, A. K. Singh

Abstract:

In this paper, we deal with the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the VOF method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.
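One such building block, the preconditioned Krylov solve of the sparse linear systems that arise in a Navier-Stokes solver, can be sketched with SciPy as below: Bi-CGSTAB with an incomplete-LU (ILUT-style) preconditioner applied to a small Poisson-like system. The matrix here is a generic 1-D model problem, not a system assembled by the paper's code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

# 1-D Poisson-like operator standing in for a pressure-correction system
n = 200
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete-LU (ILUT-style) factorization
M = LinearOperator(A.shape, ilu.solve)          # preconditioner for the Krylov iteration

x, info = bicgstab(A, b, M=M, maxiter=500)
print(info, np.linalg.norm(A @ x - b))          # info == 0 means the solver converged
```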

Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT function, Krylov subspace, multifluid flows, preconditioner, SIMPLE algorithm

Procedia PDF Downloads 529
3529 Curcumin Loaded Modified Chitosan Nanocarrier for Tumor Specificity

Authors: S. T. Kumbhar, M. S. Bhatia, R. C. Khairate

Abstract:

An effective nanodrug delivery system was developed using chitosan for increased encapsulation efficiency and retarded release of curcumin. A potential ionotropic gelation method was used for the development of chitosan nanoparticles with TPP as the cross-linker. Characterization was carried out to analyse the size, structure, surface morphology, and thermal behavior of the synthesized chitosan nanoparticles. The encapsulation efficiency was more than 80%, with improved drug loading capacity. The in-vitro drug release study showed that the curcumin release rate decreased significantly. These chitosan nanoparticles could be a suitable platform for the co-delivery of curcumin and an anticancer agent for an enhanced cytotoxic effect on tumor cells.

Keywords: Curcumin, chitosan, nanoparticles, anticancer activity

Procedia PDF Downloads 178
3528 Using the SMT Solver to Minimize the Latency and to Optimize the Number of Cores in NoC-DSP Architectures

Authors: Imen Amari, Kaouther Gasmi, Asma Rebaya, Salem Hasnaoui

Abstract:

The problem of scheduling and mapping dataflow applications on multi-core architectures is notoriously difficult. This difficulty is related to the rapid evolution of telecommunication and multimedia systems, accompanied by a rapid increase in user requirements in terms of latency, execution time, consumption, energy, etc. Obtaining an optimal schedule on multi-core DSP (Digital Signal Processor) platforms is a challenging task. In this context, we present a novel technique and algorithm to find a valid schedule that optimizes the key performance metrics, particularly the latency. Our contribution is based on Satisfiability Modulo Theories (SMT) solving technologies, which are strongly driven by industrial applications and needs. This paper describes a scheduling module integrated into our proposed workflow, which is advocated as a successful approach for programming applications based on NoC-DSP platforms. The workflow automatically transforms a Simulink model into a synchronous dataflow (SDF) model. The automatic transformation, followed by SMT-solver scheduling, aims to minimize the final latency and other software/hardware metrics through an optimal schedule, as well as to find the optimal number of cores to be used. Our proposed workflow takes as its entry point a Simulink file (.mdl or .slx) derived from embedded MATLAB functions. We use an approach based on the synchronous and hierarchical behavior of both Simulink and SDF. Running the scheduler in the workflow mentioned above with our proposed SMT-solver algorithm refinements produces the best possible schedule in terms of latency and number of cores.
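A toy version of the SMT scheduling step is sketched below with the z3 Python bindings: integer start times and core assignments for three dataflow actors, a precedence constraint, non-overlap constraints per core, and an Optimize objective that minimizes the latency. The actor durations, the precedence edge, and the two-core limit are invented for illustration and do not reflect the paper's workflow or models.

```python
from z3 import Int, Optimize, Or, sat

# toy SDF-like instance: three actors, two cores, one precedence edge A -> C
dur = {"A": 4, "B": 3, "C": 2}
start = {a: Int(f"start_{a}") for a in dur}
core = {a: Int(f"core_{a}") for a in dur}
latency = Int("latency")

opt = Optimize()
for a in dur:
    opt.add(start[a] >= 0, core[a] >= 0, core[a] <= 1)        # two cores: 0 or 1
    opt.add(latency >= start[a] + dur[a])                      # latency covers every actor
opt.add(start["C"] >= start["A"] + dur["A"])                   # precedence A -> C

# actors mapped to the same core must not overlap in time
actors = list(dur)
for i in range(len(actors)):
    for j in range(i + 1, len(actors)):
        a, b = actors[i], actors[j]
        opt.add(Or(core[a] != core[b],
                   start[a] + dur[a] <= start[b],
                   start[b] + dur[b] <= start[a]))

opt.minimize(latency)
if opt.check() == sat:
    m = opt.model()
    print({a: (m[start[a]], m[core[a]]) for a in dur}, "latency =", m[latency])
```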

Keywords: multi-cores DSP, scheduling, SMT solver, workflow

Procedia PDF Downloads 288
3527 Surface Modification of Polycarbonate Substrates via Direct Fluorination to Promote the Staining with Methylene Blue

Authors: Haruka Kaji, Jae-Ho Kim, Yonezawa Susumu

Abstract:

The surface of polycarbonate (PC) was modified with fluorine gas at 25°C and 10-380 Torr for one hour. The surface roughness of the fluorinated PC samples was approximately five times larger than that (1.2 nm) of the untreated sample. The results of Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy showed that the bonds (e.g., -C=O and C-Hx) derived from the raw PC decreased and were converted into fluorinated bonds (e.g., -CFx) after surface fluorination. These fluorinated bonds showed higher electronegativity according to the zeta potential results. The fluorinated PC could be stained with the methylene blue basic dye because of the increased surface roughness and the negatively charged surface.

Keywords: dyeable layer, polycarbonate, surface fluorination, zeta potential

Procedia PDF Downloads 181
3526 Percutaneous Femoral Shortening Over a Nail Using Onsite Smashing Osteotomy Technique

Authors: Rami Jahmani

Abstract:

Closed femoral-shortening osteotomy over an intramedullary nail for the treatment of leg length discrepancy (LLD) is a demanding surgical technique, classically requiring specialized instrumentation (an intramedullary saw and chisel). This paper describes a modified surgical technique for performing femoral shortening percutaneously, using a percutaneous multiple drill-hole osteotomy technique to smash the bone, after which the bone is fixed using a locked intramedullary nail. The paper also presents the results of nine cases of shortening performed with this technique.

Keywords: femoral shortening, leg length discrepancy, minimally invasive, percutaneous osteotomy

Procedia PDF Downloads 76