Search results for: modified simplex algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5894

4274 Event Extraction, Analysis, and Event Linking

Authors: Anam Alam, Rahim Jamaluddin Kanji

Abstract:

With the rapid growth of events everywhere, event extraction has become an important means of retrieving information from unstructured data, and extracting the events themselves remains a challenging problem. An event is an observable occurrence of interaction among entities. This paper investigates the event extraction capabilities of three software tools: Wandora, Nitro and SPSS. We applied the standard text mining techniques of these tools to the data sets of (i) the Afghan War Diaries (AWD collection), (ii) MUC4 and (iii) WebKB. Information retrieval measures such as precision and recall were computed under an extensive set of experiments for event extraction. The experimental study analyzes the difference between events extracted by the software and by humans. This approach helps to construct an algorithm that can then be applied with different machine learning methods.
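
As a minimal illustration of the evaluation described above (not code from the paper), the sketch below computes precision and recall for tool-extracted events against human-annotated gold events; the tuple event representation and the sample events are invented for the example.

```python
def evaluate_extraction(extracted, gold):
    """Precision/recall of tool-extracted events against a human gold set.

    Events are represented as hashable tuples, e.g. (actor, action, target);
    an illustrative simplification of a real event structure.
    """
    extracted, gold = set(extracted), set(gold)
    true_positives = len(extracted & gold)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical tool output vs. human annotation for one document
tool_events = [("patrol", "attacked", "convoy"), ("unit", "reported", "IED")]
human_events = [("patrol", "attacked", "convoy"), ("civilians", "fled", "village")]
p, r = evaluate_extraction(tool_events, human_events)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.50
```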

Keywords: event extraction, Wandora, nitro, SPSS, event analysis, extraction method, AFG, Afghan War Diaries, MUC4, 4 universities, dataset, algorithm, precision, recall, evaluation

Procedia PDF Downloads 598
4273 Optimization of Electrocoagulation Process Using Duelist Algorithm

Authors: Totok R. Biyanto, Arif T. Mardianto, M. Farid R. R., Luthfi Machmudi, Kandi Mulakasti

Abstract:

The main objective of this research is to optimize the design of the electrocoagulation process as a post-treatment for biologically treated vinasse effluent. A first-principles model with the three independent variables that affect the energy consumption of the electrocoagulation process, i.e. current density, electrode distance, and treatment time, is used, and these are chosen as the optimized variables. The process condition parameters were the pH, electrical conductivity, and temperature of the vinasse, about 6.5, 28.5 mS/cm and 52 °C, respectively. Aluminum was chosen as the electrode material. The duelist algorithm was used as the optimization technique due to its capability to reach a global optimum. The optimization results show that the optimum can be reached at a current density of 2.9976 A/m², an electrode distance of 1.5 cm and an electrolysis time of 119 min. The optimized energy consumption during the process is 34.02 Wh.

Keywords: optimization, vinasse effluent, electrocoagulation, energy consumption

Procedia PDF Downloads 471
4272 Minimization of Propagation Delay in Multi Unmanned Aerial Vehicle Network

Authors: Purva Joshi, Rohit Thanki, Omar Hanif

Abstract:

Unmanned aerial vehicles (UAVs) are becoming increasingly important in various industrial applications and sectors. Nowadays, multi UAV networks are used for specific types of communication (e.g., military) and monitoring purposes. Therefore, it is critical to reduce the propagation delay during communication between UAVs, which is essential in a multi UAV network. This paper presents how the propagation delay between the base station (BS) and the UAVs is reduced using a searching algorithm. Furthermore, the iterative k-nearest neighbor (k-NN) algorithm and a Travelling Salesman Problem (TSP) algorithm were utilized to optimize the distance between the BS and each UAV to overcome the problem of propagation delay in multi UAV networks. The simulation results show that the proposed method reduces complexity, improves reliability, and reduces propagation delay in multi UAV networks.
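
The TSP-style ordering step can be illustrated with a nearest-neighbour heuristic. The following sketch is an assumption of how such a step might look, not the paper's implementation: it orders UAV positions greedily from the base station and evaluates the resulting free-space propagation delay.

```python
import math

def nearest_neighbor_route(base, uavs):
    """Greedy nearest-neighbour TSP heuristic: starting from the base
    station, repeatedly visit the closest unvisited UAV position."""
    route, current = [], base
    remaining = list(uavs)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

def propagation_delay(base, route, c=3e8):
    """Total propagation delay (seconds) along the route from the base
    station, assuming free-space line-of-sight propagation."""
    points = [base] + route
    return sum(math.dist(a, b) for a, b in zip(points, points[1:])) / c

base = (0.0, 0.0)
uavs = [(120.0, 40.0), (30.0, 90.0), (200.0, 10.0)]  # invented coordinates (m)
route = nearest_neighbor_route(base, uavs)
print(route, propagation_delay(base, route))
```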

Keywords: multi UAV network, optimal distance, propagation delay, k-nearest neighbor, travelling salesman problem

Procedia PDF Downloads 206
4271 Development of Ecofriendly Ionic Liquid Modified Reverse Phase Liquid Chromatography Method for Simultaneous Determination of Anti-Hyperlipidemic Drugs

Authors: Hassan M. Albishri, Fatimah Al-Shehri, Deia Abd El-Hady

Abstract:

Among analytical techniques, reverse phase liquid chromatography (RPLC) is currently used in the pharmaceutical industry. Ecofriendly analytical chemistry decreases environmental impact while increasing operator safety, which has made it a topic of industrial interest. Recently, ionic liquids have been successfully used to reduce or eliminate conventional toxic organic solvents. In the current work, a simple and ecofriendly ionic liquid modified RPLC (IL-RPLC) method has been developed for the first time and compared with RPLC under acidic and neutral mobile phase conditions for the simultaneous determination of atorvastatin-calcium, rosuvastatin and simvastatin. Several effective chromatographic parameters were varied in a systematic way. Adequate results were achieved by mixing ILs with ethanol as the mobile phase under neutral conditions at a 1 mL/min flow rate on a C18 column. The developed IL-RPLC method has been validated for the quantitative determination of the drugs in pharmaceutical formulations. The method showed excellent linearity for the analytes over a wide range of concentrations, with acceptable precision and accuracy. The current IL-RPLC technique could have vast applications, particularly under neutral conditions, for simple and greener (bio)analytical applications of pharmaceuticals.

Keywords: ionic liquid, RPLC, anti-hyperlipidemic drugs, ecofriendly

Procedia PDF Downloads 257
4270 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. Resource scheduling is usually an NP-hard problem, so no general solution can be found. Optimization algorithms such as the genetic algorithm and ant colony optimization exist, but the large scale of distributed systems makes these traditional optimization algorithms difficult to apply, so heuristic and machine learning algorithms are usually employed to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that combines the perceptual ability of neural networks with the decision-making ability of reinforcement learning. Using this machine learning method, we try to find the important factors that influence the performance of distributed computing and help the distributed system schedule its computing resources efficiently. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling, and discusses the challenges and improvement directions for DRL-based resource scheduling algorithms.
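
The paper's proposal uses a recurrent neural network; as a far smaller illustration of the underlying reinforcement learning idea, the sketch below uses tabular Q-learning to assign a fixed list of task loads to two machines so as to keep the makespan low. All task values and hyperparameters are invented for the example.

```python
import random
from collections import defaultdict

tasks = [3.0, 1.0, 4.0, 2.0, 5.0]   # illustrative task loads
N_MACHINES = 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(task_index, machine)] -> value estimate

def run_episode(greedy=False):
    """Assign tasks in order; the agent picks a machine per task."""
    loads = [0.0] * N_MACHINES
    for t, work in enumerate(tasks):
        if not greedy and random.random() < EPS:  # epsilon-greedy exploration
            a = random.randrange(N_MACHINES)
        else:
            a = max(range(N_MACHINES), key=lambda m: Q[(t, m)])
        loads[a] += work
        reward = -max(loads)  # penalize the running makespan
        nxt = max(Q[(t + 1, m)] for m in range(N_MACHINES)) if t + 1 < len(tasks) else 0.0
        Q[(t, a)] += ALPHA * (reward + GAMMA * nxt - Q[(t, a)])
    return max(loads)

for _ in range(2000):
    run_episode()
print("makespan of learned policy:", run_episode(greedy=True))
```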

Keywords: resource scheduling, deep reinforcement learning, distributed system, artificial intelligence

Procedia PDF Downloads 114
4269 An Assessment of the Writing Skills of Reflective Essay of Grade 10 Students in Selected Secondary Schools in Valenzuela City

Authors: Reynald Contreras, Shaina Marie Bho, Kate Roan Dela Cruz, Marvin Dela Cruz

Abstract:

This study was conducted with the aim of determining the skill level of grade ten (Grade 10) students in writing a reflective essay in selected secondary schools of Valenzuela. The research used descriptive qualitative-quantitative methods to systematically and accurately describe the students' level of writing skill, and used a convenience sampling technique to select forty (40) Grade 10 students each from Polo, Wawang Pulo, and Arkong Bato high schools, for a total of one hundred twenty (120) students, whose written reflective essays were assessed using modified rubrics developed from Ruth Culham's 6+1 writing traits. According to the findings, students at Polo and Wawang Pulo National high schools have low levels of writing skill that need to be developed or are not yet proficient, while Arkong Bato National High School achieved a high degree of writing proficiency. Based on these findings, the researchers devised a suggested curriculum mapping for the proposed intervention activity that would aid in developing and cultivating the writing skills of Grade 10 students.

Keywords: writing skills, reflective essay, intervention activity, 6+1 writing traits, modified rubrics

Procedia PDF Downloads 123
4268 Integrated Model for Enhancing Data Security Performance in Cloud Computing

Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali

Abstract:

Cloud computing is an important and promising field in the recent decade. It allows the sharing of resources, services and information among people across the whole world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users and save users' time. In this model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used, with the password protected by SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
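
The SHA-2 parts of the model can be sketched with Python's standard hashlib (the Blowfish exchange step is omitted here because it requires a third-party cipher library). The salting scheme below is an illustrative addition, not a detail from the paper.

```python
import hashlib
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """One-way password storage with SHA-256 (a SHA-2 family hash).
    The random salt is an illustrative hardening step."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the candidate password and compare with the stored digest."""
    return hashlib.sha256(salt + password.encode()).digest() == digest

def integrity_tag(data: bytes) -> str:
    """SHA-256 digest used as a tamper-evident tag for uploaded file bytes."""
    return hashlib.sha256(data).hexdigest()

salt, stored = store_password("s3cret")
print(verify_password("s3cret", salt, stored))  # True
print(integrity_tag(b"file contents")[:16])
```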

Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish

Procedia PDF Downloads 480
4267 Saliency Detection Using a Background Probability Model

Authors: Junling Li, Fang Meng, Yichun Zhang

Abstract:

Image saliency detection has long been studied, while several challenging problems remain unsolved, such as inaccurate saliency detection in complex scenes or the suppression of salient objects near image borders. In this paper, we propose a new saliency detection algorithm to solve these problems. We represent the image as a graph with superpixels as nodes. By considering the appearance similarity between the boundary and the background, the proposed method chooses non-salient boundary nodes as background priors to construct a background probability model. The probability that each node belongs to the model is computed, measuring its similarity to the background, and saliency is then calculated from the transformed probability as a metric. We compare our algorithm with ten state-of-the-art saliency detection methods on a public database. Experimental results show that our simple and effective approach can tackle the challenging problems that have long baffled image saliency detection.

Keywords: visual saliency, background probability, boundary knowledge, background priors

Procedia PDF Downloads 430
4266 Effectiveness of Earthing System in Vertical Configurations

Authors: S. Yunus, A. Suratman, N. Mohamad Nor, M. Othman

Abstract:

This paper presents measurement and simulation results by the Finite Element Method (FEM) for the earth resistance (RDC) of interconnected vertical ground rod configurations. The soil resistivity was measured using the Wenner four-pin method, and RDC was measured using the Fall of Potential (FOP) method, as outlined in the standard. A Genetic Algorithm (GA) is employed to interpret the soil resistivity as a 2-layer soil model. The same soil resistivity data obtained by the Wenner four-pin method were used in the FEM simulation. This paper compares the RDC obtained by FEM simulation with real measurements at a field site. Good agreement was seen between the RDC obtained by measurement and by FEM, showing that FEM is a reliable tool for the design of earthing systems. It is also found that the parallel rod system performs better than a similar setup using a grid layout.
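
The Wenner four-pin interpretation rests on the apparent-resistivity formula ρ = 2πaR, for electrode spacing a and measured resistance R, valid when probe depth is small relative to a. A one-line Python helper (illustrative values, not the paper's data):

```python
import math

def wenner_resistivity(spacing_m: float, resistance_ohm: float) -> float:
    """Apparent soil resistivity (ohm-m) from the Wenner four-pin method:
    rho = 2 * pi * a * R, valid for probe depths small relative to a."""
    return 2.0 * math.pi * spacing_m * resistance_ohm

# Example: 2 m probe spacing, 15 ohm measured -> ~188.5 ohm-m
print(wenner_resistivity(2.0, 15.0))
```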

Keywords: earthing system, earth electrodes, finite element method, genetic algorithm, earth resistances

Procedia PDF Downloads 111
4265 TMBCoI-SIOT: Trust Management System Based on the Community of Interest for the Social Internet of Things

Authors: Oumaima Ben Abderrahim, Mohamed Houcine Elhedhili, Leila Saidane

Abstract:

In this paper, we propose a trust management system based on a clustering architecture for the social internet of things, called TMBCoI-SIOT. The proposed model integrates numerous factors such as direct and indirect trust, a transaction factor, a precaution factor, and social modeling of trust. The novelty of our approach can be summed up in two aspects. The first is the architecture based on communities of interest (CoI), where each community is headed by an administrator (admin). The second is the trust management system itself, which prevents on-off attacks and mitigates dishonest recommendations using the k-means algorithm and guarantor things. The effectiveness of the proposed system is demonstrated by simulation against malicious nodes.
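
The k-means filtering of dishonest recommendations can be sketched as 1-D clustering of recommendation scores, keeping the cluster closest to the admin's direct observation. This is an illustrative reading of the mechanism, not the paper's exact system; all scores below are invented.

```python
import random

def kmeans_1d(values, k=2, iters=50):
    """Plain 1-D k-means; returns cluster centers and member lists."""
    centers = random.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Recommendations about one node: honest ~0.8, bad-mouthing ~0.1
recs = [0.82, 0.79, 0.85, 0.12, 0.08, 0.81, 0.15]
centers, clusters = kmeans_1d(recs)
direct_trust = 0.78  # the admin's own observation of the node
honest = min(range(2), key=lambda i: abs(centers[i] - direct_trust))
print("kept recommendations:", clusters[honest])
```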

Keywords: IoT, trust management system, attacks, trust, dishonest recommendations, K-means algorithm

Procedia PDF Downloads 213
4264 Mixtures of Length-Biased Weibull Distributions for Loss Severity Modelling

Authors: Taehan Bae

Abstract:

In this paper, a class of length-biased Weibull mixtures is presented to model loss severity data. The proposed model generalizes the Erlang mixtures with a common scale parameter, and it shares many important modelling features with the Erlang mixtures, such as the flexibility to fit various data distribution shapes and weak-denseness in the class of positive continuous distributions. We show that the asymptotic tail estimate of the length-biased Weibull mixture is Weibull-type, which makes the model effective for fitting loss severity data with heavy-tailed observations. A method of statistical estimation is discussed, with applications to real catastrophic loss data sets.
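
The model's building block is the length-biased density f*(x) = x f(x)/E[X]. A minimal Python sketch of a common-scale mixture of length-biased Weibull densities follows (EM estimation is not shown; all parameter values are invented):

```python
import math

def weibull_pdf(x, shape, scale):
    """Standard Weibull density with shape k and scale lambda."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def length_biased_weibull_pdf(x, shape, scale):
    """Length-biased version f*(x) = x f(x) / E[X], using the Weibull
    mean E[X] = scale * Gamma(1 + 1/shape)."""
    mean = scale * math.gamma(1.0 + 1.0 / shape)
    return x * weibull_pdf(x, shape, scale) / mean

def mixture_pdf(x, weights, shapes, scale):
    """Finite mixture of length-biased Weibulls sharing a common scale,
    mirroring the common-scale structure of Erlang mixtures."""
    return sum(w * length_biased_weibull_pdf(x, k, scale)
               for w, k in zip(weights, shapes))

print(mixture_pdf(2.5, [0.6, 0.4], [1.5, 3.0], 2.0))
```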

Keywords: Erlang mixture, length-biased distribution, transformed gamma distribution, asymptotic tail estimate, expectation-maximization (EM) algorithm

Procedia PDF Downloads 224
4263 Robust Fault Diagnosis for Wind Turbine Systems Subjected to Multi-Faults

Authors: Sarah Odofin, Zhiwei Gao, Sun Kai

Abstract:

Operations, maintenance and reliability of wind turbines have received much attention over the years due to the rapid expansion of wind farms. This paper explores an early fault diagnosis technique for a 5MW wind turbine system, optimized by a genetic algorithm to be very sensitive to faults and resilient to disturbances. A quantitative model-based analysis is applied for primary fault diagnosis and monitoring assessment, to minimize the downtime mostly caused by component breakdown and to maintain consistent productivity. Simulation results are computed to validate the wind turbine model and demonstrate system performance on practical examples of fault types. The results show the satisfactory effectiveness of the applied approach, investigated in a Matlab/Simulink/Gatool environment.

Keywords: disturbance robustness, fault monitoring and detection, genetic algorithm, observer technique

Procedia PDF Downloads 381
4262 Reconfigurable Efficient IIR Filter Design Using MAC Algorithm

Authors: Rajesh Mehra

Abstract:

In this paper an IIR filter has been designed and simulated on an FPGA. The implementation is based on the MAC algorithm, which uses multiply-and-accumulate operations for the IIR filter implementation. A parallel pipelined structure is used to implement the proposed IIR filter, taking optimal advantage of the look-up tables of the FPGA device. The designed filter has been synthesized on a DSP-slice-based FPGA, with the DSP slices performing the multiplier function of the MAC unit and enhancing the speed performance. The developed IIR filter is designed and simulated with MATLAB, synthesized with the Xilinx Synthesis Tool (XST), and implemented on Virtex 5 and Spartan 3A DSP FPGA devices. The IIR filter implemented on the Virtex 5 FPGA can operate at an estimated frequency of 81.5 MHz, compared to 40.5 MHz on the Spartan 3A DSP FPGA. The Virtex 5 implementation also consumes fewer slices and slice flip-flops of the target FPGA than the Spartan 3A DSP implementation, providing a cost-effective solution for signal processing applications.
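
The MAC principle can be shown in a few lines: each output sample of a direct-form I IIR filter is an accumulation of multiply operations, the same primitive the FPGA DSP slices implement in hardware. The coefficients below are illustrative, not the paper's design.

```python
def iir_mac(x, b, a):
    """Direct-form I IIR filter built from multiply-accumulate (MAC)
    operations.  b are feed-forward coefficients, a feedback coefficients
    with a[0] assumed to be 1."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(b)):        # feed-forward MACs
            if n - k >= 0:
                acc += b[k] * x[n - k]
        for k in range(1, len(a)):     # feedback MACs
            if n - k >= 0:
                acc -= a[k] * y[n - k]
        y.append(acc)
    return y

# Example: illustrative 2nd-order coefficients, impulse response
b = [0.2, 0.4, 0.2]
a = [1.0, -0.3, 0.1]
print(iir_mac([1.0, 0.0, 0.0, 0.0, 0.0], b, a))
```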

Keywords: Butterworth, DSP, IIR, MAC, FPGA

Procedia PDF Downloads 360
4251 Fine Characterization of Glucose Modified Human Serum Albumin by Different Biophysical and Biochemical Techniques at a Range of Glucose Concentrations

Authors: Neelofar, Khursheed Alam, Jamal Ahmad

Abstract:

Protein modification in diabetes mellitus may lead to early glycation products (EGPs), or Amadori products, as well as advanced glycation end products (AGEs). Early glycation involves the reaction of glucose with N-terminal and lysyl side chain amino groups to form a Schiff's base, which undergoes rearrangements to form the more stable early glycation product known as the Amadori product. After the Amadori stage, the reactions become more complicated, leading to the formation of advanced glycation end products (AGEs) that interact with various AGE receptors, thereby playing an important role in the long-term complications of diabetes. The Maillard reaction, or nonenzymatic glycation, accelerates in diabetes due to hyperglycemia and alters serum proteins' structure and normal functions, leading to micro- and macrovascular complications in diabetic patients. In this study, Human Serum Albumin (HSA) at a constant concentration was incubated with different concentrations of glucose at 37 °C for a week. By the 4th day, Amadori product had formed, as confirmed by the colorimetric NBT and TBA assays, both of which authenticate the early glycation product. Conformational changes in native HSA, as well as in all samples of Amadori albumin prepared with different concentrations of glucose, were investigated by various biophysical and biochemical techniques. The main biophysical techniques used were hyperchromicity, quenching of fluorescence intensity, FTIR, CD and SDS-PAGE. Further conformational changes were observed by biochemical assays, mainly HMF formation, fructosamine, reduction of fructosamine with NaBH4, carbonyl content estimation, lysine and arginine residue estimation, ANS binding and thiol group estimation. This study finds structural and biochemical changes in Amadori-modified HSA over a normal to chronic range of glucose with respect to native HSA. As the glucose concentration increased from the normal to the chronic range, the biochemical and structural changes also increased, with the greatest alteration in secondary and tertiary structure and conformation of glycated HSA observed at the hyperchronic concentration (75 mM) of glucose. Although Amadori-modified proteins, like AGEs, are also involved in the secondary complications of diabetes, very few studies have analyzed the conformational changes in Amadori-modified proteins due to early glycation. Most studies address structural changes in Amadori protein at a particular glucose concentration, and none compare the biophysical and biochemical changes in HSA due to early glycation across a range of glucose concentrations at a constant incubation time. This study therefore provides information about the biochemical and biophysical changes occurring in Amadori-modified albumin over a glucose range from normal to chronic in diabetes. Many treatments are currently in use, i.e. glycaemic control, insulin treatment and other chemical therapies, that can control many aspects of diabetes. However, even with intensive use of current antidiabetic agents, more than 50% of type 2 diabetic patients suffer poor glycaemic control and 18% develop serious complications within six years of diagnosis. Experimental evidence related to diabetes suggests that preventing the nonenzymatic glycation of relevant proteins, or blocking their biological effects, might beneficially influence the evolution of vascular complications in diabetic patients; furthermore, quantitation of the Amadori adduct of HSA by authentic antibodies against HSA-EGPs could be used as a marker for early detection of the initiation/progression of secondary complications of diabetes. This research work may be helpful for the same.

Keywords: diabetes mellitus, glycation, albumin, amadori, biophysical and biochemical techniques

Procedia PDF Downloads 273
4260 Optimization of Tooth Root Profile and Drive Side Pressure Angle to Minimize Bending Stress at Root of Asymmetric Spur Gear Tooth

Authors: Priyakant Vaghela, Jagdish Prajapati

Abstract:

Bending stress at the root of the gear tooth is a very important criterion in gear design and should be kept to a minimum. Minimizing the bending stress at the tooth root is a recent demand from industry. This paper presents an innovative approach to obtaining minimum bending stress at the root of a tooth by optimizing the tooth root profile and the drive side pressure angle. A circular fillet at the root of the tooth is widely used in design, but it creates a discontinuity at the root, so stress concentration occurs there. In order to minimize the stress concentration, an important criterion is G2 continuity at the blending of the gear tooth; a Bezier curve with G2 continuity is therefore used at the root of the asymmetric spur gear tooth. A comparison between the normal and modified tooth was made using ANSYS simulation. The tooth root profile and drive side pressure angle are optimized to minimize the bending stress at the tooth root of the asymmetric involute spur gear. The von Mises stress of the optimized profile is analyzed and compared with that of a normal-profile symmetric gear. The von Mises stress is reduced by 31.27% by optimizing the drive side pressure angle and the root profile, and the stress concentration of the modified gear was significantly reduced.

Keywords: asymmetric spur gear tooth, G2 continuity, pressure angle, stress concentration at the root of tooth, tooth root stress

Procedia PDF Downloads 188
4259 Energy Conservation in Heat Exchangers

Authors: Nadia Allouache

Abstract:

Energy conservation is one of the major concerns in the modern high-tech era due to the limited amount of energy resources and the increasing cost of energy. Predicting the efficient use of energy in thermal systems like heat exchangers can only be achieved if the second law of thermodynamics is accounted for. The performance of heat exchangers can be substantially improved by many passive heat transfer augmentation techniques. These techniques improve the heat transfer rate and increase the exchange surface, but on the other hand they also increase the friction factor associated with the flow. This raises the question of how to employ these passive techniques so as to minimize the loss of useful energy. The objective of the present study is to use a porous substrate attached to the walls as a passive enhancement technique in heat exchangers and to find the compromise between the hydrodynamic and thermal performances under turbulent flow conditions, using a second law approach. A modified k-ε model is used to simulate the turbulent flow in the porous medium, and the turbulent shear flow is accounted for in the entropy generation equation. A numerical model based on the finite volume method is employed for discretizing the governing equations. The effects of several parameters are investigated, such as the porous substrate properties and the flow conditions. Results show that under certain conditions of porous layer thickness, permeability and effective thermal conductivity, a minimum rate of entropy production is obtained.

Keywords: second law approach, annular heat exchanger, turbulent flow, porous medium, modified k-ε model, numerical analysis

Procedia PDF Downloads 288
4258 Use of Interpretable Evolved Search Query Classifiers for Sinhala Documents

Authors: Prasanna Haddela

Abstract:

Document analysis is a well-matured yet still active research field, partly as a result of the intricate nature of building computational tools, but also due to the inherent problems arising from the variety and complexity of human languages. Breaking down language barriers is vital in enabling access to a number of recent technologies. This paper investigates the application of document classification methods to new Sinhalese datasets. The language is geographically isolated and rich with many of its own unique features. We examine the interpretability of the classification models, with a particular focus on the use of evolved Lucene search queries generated by a Genetic Algorithm (GA) as a method of document classification, and compare the accuracy and interpretability of these search queries with those of other popular classifiers. The results are promising and are roughly in line with previous work on English language datasets.

Keywords: evolved search queries, Sinhala document classification, Lucene Sinhala analyzer, interpretable text classification, genetic algorithm

Procedia PDF Downloads 114
4257 Application of Neural Networks to Predict Changing the Diameters of Bubbles in Pool Boiling Distilled Water

Authors: V. Nikkhah Rashidabad, M. Manteghian, M. Masoumi, S. Mousavian, D. Ashouri

Abstract:

In this research, the capability of neural networks to model and learn complicated, nonlinear relations has been used to develop a model for predicting changes in the diameter of bubbles in pool boiling of distilled water. The input parameters used in developing this network are element temperature, heat flux, and bubble retention time. The test data were obtained from pool boiling experiments on distilled water, with bubble formation measured on a cylindrical element. The model was developed with a training algorithm of the back-propagation type. The correlation coefficient obtained from this model is 0.9633, which shows that the model can be trusted for simulating and modeling bubble size and boiling heat transfer.
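
A minimal back-propagation regression of the kind described can be sketched in plain NumPy. The synthetic inputs (element temperature, heat flux, retention time) and target function below are stand-ins for the experimental data, which the abstract does not publish.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 3 normalized inputs -> bubble diameter target
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2]).reshape(-1, 1)

# One hidden layer of 8 tanh units, linear output
W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros((1, 1))
lr = 0.1

for epoch in range(2000):        # plain back-propagation training loop
    h = np.tanh(X @ W1 + b1)     # hidden activations
    pred = h @ W2 + b2           # network output
    err = pred - y
    gW2 = h.T @ err / len(X)     # gradients via the chain rule
    gb2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0, keepdims=True)
    W1, b1 = W1 - lr * gW1, b1 - lr * gb1
    W2, b2 = W2 - lr * gW2, b2 - lr * gb2

corr = np.corrcoef(pred.ravel(), y.ravel())[0, 1]
print(f"correlation coefficient: {corr:.4f}")
```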

Keywords: bubble diameter, heat flux, neural network, training algorithm

Procedia PDF Downloads 447
4256 SAP-Reduce: Staleness-Aware P-Reduce with Weight Generator

Authors: Lizhi Ma, Chengcheng Hu, Fuxian Wong

Abstract:

Partial reduce (P-Reduce) has set state-of-the-art performance for distributed machine learning in heterogeneous environments over the All-Reduce architecture. Dynamic P-Reduce based on the exponential moving average (EMA) approach predicts all the intermediate model parameters, which introduces unreliability: this approximation trick can lead to incorrect model parameters across the nodes. In this paper, SAP-Reduce is proposed, a variant of the All-Reduce distributed training model with staleness-aware dynamic P-Reduce. SAP-Reduce directly utilizes an EMA-like algorithm to generate the normalized weights. To demonstrate the effectiveness of the algorithm, experiments are set up on a number of deep learning models, comparing the single-step training acceleration ratio and convergence time. We find that SAP-Reduce, by simplifying dynamic P-Reduce, outperforms the intermediate-approximation variant. The empirical results show SAP-Reduce is 1.3×–2.1× faster than existing baselines.
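
One plausible reading of the staleness-aware weight generator (an assumption; the abstract does not give the formula) is an exponential down-weighting of stale node updates followed by normalization:

```python
def ema_weights(staleness, beta=0.9):
    """Turn per-node staleness counts (steps since the node last
    synchronized) into normalized aggregation weights: staler updates
    are exponentially down-weighted, then the weights sum to one.
    The decay factor beta is an illustrative choice."""
    raw = [beta ** s for s in staleness]
    total = sum(raw)
    return [w / total for w in raw]

# Node 0 is fresh, node 2 is 5 steps stale
print(ema_weights([0, 1, 5]))  # roughly [0.40, 0.36, 0.24]
```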

Keywords: collective communication, decentralized distributed training, machine learning, P-Reduce

Procedia PDF Downloads 34
4255 Unsupervised Segmentation Technique for Acute Leukemia Cells Using Clustering Algorithms

Authors: N. H. Harun, A. S. Abdul Nasir, M. Y. Mashor, R. Hassan

Abstract:

Leukaemia is a blood cancer that contributes to an increasing mortality rate in Malaysia each year. There are two main categories of leukaemia: acute and chronic. The production and development of acute leukaemia cells occur rapidly and uncontrollably, so if acute leukaemia cells could be identified quickly and effectively, proper treatment and medicine could be delivered. Given the requirement for prompt and accurate diagnosis of leukaemia, the current study proposes unsupervised pixel segmentation based on clustering algorithms in order to obtain a fully segmented abnormal white blood cell (blast) in acute leukaemia images. To obtain the segmented blast, three clustering algorithms, k-means, fuzzy c-means and moving k-means, were applied to the saturation component image. Median filtering and seeded region growing area extraction were then applied to smooth the segmented blast region and to remove large unwanted regions from the image, respectively. The three clustering algorithms are compared in order to measure the performance of each on segmenting the blast area. Based on the good sensitivity values obtained, the results indicate that the moving k-means clustering algorithm successfully produces a fully segmented blast region in acute leukaemia images, and the resultant images could be helpful to haematologists for further analysis of acute leukaemia.
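
The clustering step can be illustrated with plain k-means on the saturation component (the paper's moving k-means adds centre-repositioning rules on top of this). The random "image" below is a stand-in for a real stained-slide saturation channel.

```python
import numpy as np

def kmeans_saturation(sat, k=3, iters=20, seed=0):
    """Cluster the pixels of a saturation-component image into k groups;
    the cluster with the highest centre is taken as the candidate blast
    region (an illustrative selection rule)."""
    rng = np.random.default_rng(seed)
    pixels = sat.reshape(-1, 1).astype(float)
    centers = rng.choice(pixels.ravel(), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centers), axis=1)
        centers = np.array([pixels[labels == i].mean() if np.any(labels == i)
                            else centers[i] for i in range(k)])
    blast_cluster = int(np.argmax(centers))
    return (labels == blast_cluster).reshape(sat.shape)

sat = np.random.default_rng(1).uniform(0, 1, size=(64, 64))  # stand-in image
mask = kmeans_saturation(sat)
print(mask.sum(), "pixels flagged as blast candidates")
```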

Keywords: acute leukaemia images, clustering algorithms, image segmentation, moving k-means

Procedia PDF Downloads 292
4254 Effect of Punch Diameter on Optimal Loading Profiles in Hydromechanical Deep Drawing Process

Authors: Mehmet Halkaci, Ekrem Öztürk, Mevlüt Türköz, H. Selçuk Halkacı

Abstract:

Hydromechanical deep drawing (HMD) is an advanced manufacturing process used to form deep parts in a single forming step. In this process, the sheet metal blank can be drawn deeper by means of fluid pressure acting on the sheet surface in the direction opposite to the punch movement. A high limiting drawing ratio, good surface quality, less springback and high dimensional accuracy are some of the advantages of this process. The performance of the HMD process is affected by various process parameters such as fluid pressure, blank holder force, punch and die radii, pre-bulging pressure and height, punch diameter, and friction between sheet and die and between sheet and punch. The fluid pressure and blank holder force are the main loading parameters and significantly affect the formability of the HMD process. The punch diameter also influences the limiting drawing ratio (the ratio of initial sheet diameter to punch diameter) of the sheet metal blank. In this research, optimal loading (fluid pressure and blank holder force) profiles were determined for AA 5754-O sheet material through a fuzzy control algorithm developed in a previous study using the LS-DYNA finite element analysis (FEA) software. In the preceding study, the fuzzy control algorithm was developed using geometrical criteria such as thinning and wrinkling. In order to obtain the final desired part with the developed algorithm for the requested punch diameter, the effect of the punch diameter, one of the process parameters, on the loading profiles was investigated separately using a blank thickness of 1 mm, clarifying the practicality of the previously developed fuzzy control algorithm for different punch diameters. The thickness distributions of the sheet metal blank along a curvilinear distance were also compared across the FEA runs with different punch diameters. Consequently, it was found that the use of different punch diameters did not greatly affect the optimal loading profiles.

Keywords: Finite Element Analysis (FEA), fuzzy control, hydromechanical deep drawing, optimal loading profiles, punch diameter

Procedia PDF Downloads 432
4253 Incorporation of Noncanonical Amino Acids into Hard-to-Express Antibody Fragments: Expression and Characterization

Authors: Hana Hanaee-Ahvaz, Monika Cserjan-Puschmann, Christopher Tauer, Gerald Striedner

Abstract:

Incorporation of noncanonical amino acids (ncAAs) into proteins has become an interesting topic, as proteins featuring ncAAs offer a wide range of different applications. Technologies and systems now exist that allow the site-specific introduction of ncAAs in vivo, but the efficient production of proteins modified this way is still a big challenge. This is especially true for 'hard-to-express' proteins, where low yields are encountered even with the native sequence. In this study, site-specific incorporation of azido-ethoxy-carbonyl-lysine (azk) into an anti-tumor-necrosis-factor-α Fab (FTN2) was investigated. Possible positions for ncAA incorporation were determined according to well-established parameters, and the corresponding FTN2 genes were constructed, each modified FTN2 variant carrying one amber codon for azk incorporation in either its heavy or light chain. The expression level of all variants was determined by ELISA, and all azk variants could be produced at a satisfactory yield, in the range of 50-70% of the original FTN2 variant. In terms of expression yield, neither the azk incorporation position nor the modified subunit (heavy or light chain) had a significant effect. We confirmed correct protein processing and azk incorporation by mass spectrometry, and antigen-antibody interaction was determined by surface plasmon resonance analysis. The next step is to characterize the effect of azk incorporation on protein stability and aggregation tendency via differential scanning calorimetry and light scattering, respectively. In summary, the incorporation of ncAAs into our Fab candidate FTN2 worked better than expected. The quantities produced allowed a detailed characterization of the variants in terms of their properties, and we can now turn our attention to potential applications. Using click chemistry, we can equip the Fabs with additional functionalities and make them suitable for a wide range of applications. We will use this option in a first approach to develop an assay that allows us to follow the degradation of the recombinant target protein in vivo, with a special focus on proteolytic activity in the periplasm and how it is influenced by cultivation/induction conditions.

Keywords: degradation, FTN2, hard-to-express protein, non-canonical amino acids

Procedia PDF Downloads 236
4252 Smartphone Video Source Identification Based on Sensor Pattern Noise

Authors: Raquel Ramos López, Anissa El-Khattabi, Ana Lucila Sandoval Orozco, Luis Javier García Villalba

Abstract:

The increasing number of mobile devices with integrated cameras means that most digital video now comes from these devices. These digital videos can be made anytime, anywhere and for different purposes; they can also be shared on the Internet in a short period of time and may sometimes contain recordings of illegal acts. The need to reliably trace their origin becomes evident when these videos are used for forensic purposes. This work proposes an algorithm to identify the brand and model of the mobile device that generated a video. The procedure is as follows: after the relevant video information is obtained, a classification algorithm based on sensor noise and the Wavelet Transform performs the identification. We also present experimental results that support the validity of the techniques used and show promising results.

Keywords: digital video, forensics analysis, key frame, mobile device, PRNU, sensor noise, source identification

Procedia PDF Downloads 429
4251 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals

Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić

Abstract:

This paper describes a method for AWGN (Additive White Gaussian Noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. MATLAB simulations and analysis of the results of the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that it is worth exploring and that, with some further adjustments and improvements, it can become notably powerful.

Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation

Procedia PDF Downloads 386
4250 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over past years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within an operationally relevant computation time, WTA has suffered from solution inefficiency; as a result, SWTA and DWTA problems have only been solved for limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time based Weapon Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model works inefficiently at large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracts efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed; these yield lower objective values than the decomposed opt-opt algorithm but need very little computation time. Hence, this paper proposes an improved method by applying decomposition to TWTA, from which more practical and effective methods can be developed for using TWTA on the battlefield.
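
The greedy building block of such algorithms can be sketched as a maximum-marginal-return assignment: repeatedly give the next free weapon to the pair that destroys the most expected target value. This time-free version illustrates the greedy idea only; the scheduling dimension of TWTA is omitted, and all numbers are invented.

```python
def greedy_wta(kill_probs, target_values):
    """Greedy weapon-target assignment: pick the (weapon, target) pair
    with the largest expected value destroyed, assuming each weapon
    fires once.  kill_probs[w][t] is weapon w's kill probability
    against target t."""
    n_w, n_t = len(kill_probs), len(target_values)
    survival = [1.0] * n_t          # running product of (1 - p) per target
    free = set(range(n_w))
    assignment = {}
    while free:
        w, t = max(((w, t) for w in free for t in range(n_t)),
                   key=lambda wt: target_values[wt[1]] * survival[wt[1]]
                                  * kill_probs[wt[0]][wt[1]])
        assignment[w] = t
        survival[t] *= 1.0 - kill_probs[w][t]
        free.remove(w)
    return assignment

probs = [[0.8, 0.3], [0.5, 0.6], [0.2, 0.7]]
values = [10.0, 7.0]
print(greedy_wta(probs, values))  # {0: 0, 2: 1, 1: 1}
```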

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 337
4249 Phosphate Capture from Sewage by Hafnium-Modified Fe₃O₄@SiO₂ Superparamagnetic Nanoparticles: Adsorption Capacity, Selectivity, Reusability Analysis and Mechanistic Insights

Authors: Qian Zhao

Abstract:

With globally increasing demand for phosphorus and intensively depleting reserves, there is an urgent need to explore innovative approaches to capturing phosphate from sewage, which is also an effective way to reduce phosphate contamination and avoid eutrophication of water bodies. In the present article, superparamagnetic nano-sorbents consisting of a Fe₃O₄ core and a hafnium-modified MgAl/MgFe layered double hydroxide shell (abbreviated as MgAlHf-NP and MgFeHf-NP) were developed using a simple and low-cost synthesis protocol. The obtained Hf-coated nano-materials showed a well-defined crystal structure and sufficient saturation magnetization, and exhibited high adsorption capacity for phosphate. Meanwhile, high selectivity was also confirmed, since coexisting foreign anions and biomacromolecules showed little competitive effect on phosphate adsorption. The enhancement from doping with Hf can be explained by the stronger ligand complexation formed between the hard-acid Hf ion and the hard-base phosphate, which match each other's bonding preferences. A sufficient OH⁻ concentration and a clear pH shift during desorption/regeneration allowed a regeneration rate higher than 90% after 5 adsorption-desorption cycles. This article attempts to provide a competitive candidate for phosphate capture that is highly effective, easily separable and repeatedly usable.

Keywords: phosphate recovery, nanoparticles, superparamagnetic, adsorption, reusability

Procedia PDF Downloads 141
4248 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima

Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez

Abstract:

Persistency, long-term memory and randomness are intrinsic properties of earthquake time-series. Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. The method is a simple and elegant analysis which determines the range of variation of one natural property (here, the seismic energy released) in a time interval. Despite its simplicity, there is complexity inherent in the property measured: the cumulative curve of the energy released in time has the well-known fractal geometry of a devil's staircase. This geometry is used to determine the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law in time, whose exponent is the Hurst value; depending on this value, time-series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for compiling the RS-Analysis for time-series of earthquakes by days. A complete time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to a complex seismic crisis where different earthquakes take place in clusters over a short period. The Hurst exponent has therefore been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, in which at least five medium-sized earthquakes were triggered. According to the Hurst exponent values obtained for each cluster, different mechanical origins can be detected, corroborated by the focal mechanisms calculated by the official institutions. This type of analysis therefore not only allows an approach to a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
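
A minimal Python version of the R/S procedure described above (an illustration, not the authors' code): for each window size, the range of cumulative deviations is rescaled by the window's standard deviation, and the Hurst exponent is the slope of log(R/S) against log(n).

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled range (R/S) analysis:
    average R/S over non-overlapping windows of size n, then fit
    log(R/S) = H log(n) + c across window sizes."""
    series = np.asarray(series, dtype=float)
    sizes, rs_values = [], []
    n = min_chunk
    while n <= len(series) // 2:
        rs = []
        for start in range(0, len(series) - n + 1, n):
            window = series[start:start + n]
            dev = np.cumsum(window - window.mean())  # cumulative deviations
            r = dev.max() - dev.min()                # range of the staircase
            s = window.std()
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(n)
            rs_values.append(np.mean(rs))
        n *= 2
    H, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return H

# White noise should give H near 0.5 (no long-term memory)
print(hurst_rs(np.random.default_rng(0).normal(size=4096)))
```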

Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis

Procedia PDF Downloads 323
4247 Analysis of CO₂ Two-Phase Ejector with Taguchi and ANOVA Optimization and Refrigerant Selection with Enviro Economic Concerns by TOPSIS Analysis

Authors: Karima Megdouli, Bourhan Tachtouch

Abstract:

Ejector refrigeration cycles offer an alternative to conventional systems for producing cold from low-temperature heat. In this article, a thermodynamic model is presented. This model has the advantage of simplifying the calculation algorithm while describing the complex double-throttling mechanism that occurs in the ejector. The model assumptions and calculation algorithm are presented first, the impact of each efficiency is evaluated, and validation is performed on several data sets. The ejector model is then used to simulate a refrigeration ejector system (RES), to validate its robustness and suitability for predicting thermodynamic cycle performance. A Taguchi and ANOVA optimization is carried out on the RES. TOPSIS analysis was applied to select the optimum refrigerants with respect to cost, safety, environmental and enviro-economic concerns along with thermophysical properties.
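
The TOPSIS step ranks alternatives by closeness to an ideal solution. The sketch below applies it to three hypothetical refrigerants scored on COP (benefit criterion), cost and GWP (cost criteria); all numbers and weights are invented for the example.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: rows are candidate refrigerants, columns are
    criteria.  benefit[j] is True if higher is better for criterion j.
    Returns a closeness score per candidate; higher is better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization
    v = norm * np.asarray(weights, dtype=float)   # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical refrigerants: [COP, cost, GWP]
scores = topsis([[4.2, 30, 1400],
                 [3.9, 25, 150],
                 [4.5, 45, 3000]],
                weights=[0.5, 0.2, 0.3],
                benefit=[True, False, False])
print(scores.argsort()[::-1])  # ranking, best candidate first
```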

Keywords: ejector, velocity distribution, shock circle, Taguchi and ANOVA optimization, TOPSIS analysis

Procedia PDF Downloads 90
4246 Development of Chronic Obstructive Pulmonary Disease (COPD) Proforma (E-ICP) to Improve Guideline Adherence in Emergency Department: Modified Delphi Study

Authors: Hancy Issac, Gerben Keijzers, Ian Yang, Clint Moloney, Jackie Lea, Melissa Taylor

Abstract:

Introduction: Non-adherence to chronic obstructive pulmonary disease (COPD) guidelines is associated with a reduction in patients' health-related quality of life (HRQoL). Improving guideline adherence has the potential to mitigate fragmented care, thereby sustaining pulmonary function, preventing acute exacerbations, reducing economic health burdens, and enhancing HRQoL. The development of an electronic proforma stemming from expert consensus, including digital guideline resources and direct interdisciplinary referrals, is hypothesised to improve guideline adherence and patient outcomes for emergency department (ED) patients with COPD. Aim: The aim of this study was to develop consensus among ED and respiratory staff on the correct composition of a COPD electronic proforma that aids guideline adherence and management in the ED. Methods: This study adopted a mixed-method design to identify the most important indicators of care in the ED. The study involved three phases: (1) a systematic literature review and qualitative interdisciplinary staff interviews to assess barriers and solutions for guideline adherence, (2) a modified Delphi panel to select interventions for the proforma, and (3) a consensus process through three rounds of scoring using a quantitative survey (ED and respiratory consensus) and qualitative thematic analysis of each indicator. Results: The electronic proforma achieved acceptable to good internal consistency through all iterations from national emergency department and respiratory department interdisciplinary experts. Cronbach's alpha scores for internal consistency (α) were, in iteration 1, emergency department cohort (EDC) α = 0.80 [CI = 0.89%] and respiratory department cohort (RDC) α = 0.95 [CI = 0.98%]; in iteration 2, EDC α = 0.85 [CI = 0.97%] and RDC α = 0.86 [CI = 0.97%]; and in iteration 3, EDC α = 0.73 [CI = 0.91%] and RDC α = 0.86 [CI = 0.95%]. Conclusion: Electronic proformas have the potential to facilitate direct referrals from the ED, leading to reduced hospital admissions, shorter hospital stays, holistic care, improved health care and quality of life, and improved interdisciplinary guideline adherence.
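
The internal-consistency statistic reported in the Results can be reproduced with the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σ item variances / variance of totals). The panel scores below are hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency; rows are raters,
    columns are proforma items."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of rater totals
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical panel: 5 experts scoring 4 proforma indicators (1-5 Likert)
panel = [[4, 5, 4, 4],
         [5, 5, 4, 5],
         [4, 4, 3, 4],
         [5, 5, 5, 5],
         [3, 4, 3, 4]]
print(round(cronbach_alpha(panel), 2))
```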

Keywords: COPD, electronic proforma, modified Delphi study, interdisciplinary, guideline adherence, COPD-X plan

Procedia PDF Downloads 63
4245 Research on Transmission Parameters Determination Method Based on Dynamic Characteristic Analysis

Authors: Baoshan Huang, Fanbiao Bao, Bing Li, Lianghua Zeng, Yi Zheng

Abstract:

A parameter control strategy based on statistical characteristics can guide the choice of transmission ratios for an automobile transmission. The number and spacing of the gears can be determined from the differences between transmission gears, and the transmission ratio distribution must satisfy a certain distribution law. The shift control strategy of the vehicle is analyzed according to the statistical characteristics of the driving parameters. A CVT shift schedule adjustment algorithm based on these statistical characteristic parameters can adjust the operating target point to lie between the best-efficiency curve and the best-dynamics curve, thereby altering the vehicle characteristics. Based on the dynamic characteristics and the practical application of the vehicle, this paper presents a scheme for setting the transmission ratio.

Keywords: vehicle dynamics, transmission ratio, transmission parameters, statistical characteristics

Procedia PDF Downloads 406