Search results for: robust optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4425

2865 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising downhole water powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power law relationship for particle size distributions. Several percussive drilling parameters such as RPM, applied fluid pressure and weight on bit have been shown to influence the particle size distributions of the cuttings generated. This has a direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can also influence the oscillation frequency. Due to the changing drilling conditions and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. Several techniques to determine the oscillating frequency have been investigated and presented. With a conventional top drive drilling rig, spectral analysis of the applied fluid pressure, hydraulic feed force pressure, hold back pressure and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and therefore another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
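As an illustration of the spectral analysis step described above, the following sketch estimates the dominant oscillation frequency of a percussive hammer from a sampled geophone or pressure signal. It is a minimal example under assumed conditions (sampling rate, search band, synthetic test signal), not the authors' implementation.

```python
import numpy as np

def dominant_frequency(signal, fs, f_min=5.0, f_max=200.0):
    """Estimate the dominant oscillation frequency (Hz) of a sampled
    geophone or pressure signal from its magnitude spectrum.
    fs is the sampling rate in Hz; f_min/f_max bound the search band."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()            # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))     # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    band = (freqs >= f_min) & (freqs <= f_max)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 55 Hz "hammer" tone buried in noise is recovered.
fs = 2000.0
t = np.arange(0, 5, 1.0 / fs)
x = np.sin(2 * np.pi * 55.0 * t) + 0.5 * np.random.randn(t.size)
print(dominant_frequency(x, fs))  # ~55.0 Hz
```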

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 221
2864 A Decision Support System for Flight Disruptions Management

Authors: Burak Erkayman, Emin Gundogar, Hayrettin Evirgen, Murat Sarı

Abstract:

With the increasing competition in recent years, airline companies tend to manage their operations aiming for fewer losses in a robust manner. Airline operations are complex, must be performed just in time, and involve many knock-on effects among related elements in the event of a disruption. In this study, a knowledge-based decision support system is proposed and the corresponding software is developed. The developed software includes knowledge bases built on expert experience and government regulations, model bases, and databases. The results of the suggested approach are presented, and improvable aspects of the approach are discussed.

Keywords: knowledge based systems, irregular operations, decision support systems, flight disruptions management

Procedia PDF Downloads 302
2863 Measuring Multi-Class Linear Classifier for Image Classification

Authors: Fatma Susilawati Mohamad, Azizah Abdul Manaf, Fadhillah Ahmad, Zarina Mohamad, Wan Suryani Wan Awang

Abstract:

A simple and robust multi-class linear classifier is proposed and implemented. For each pair of classes, the linear boundary is built as a collection of hyperplane segments created as perpendicular bisectors of the line segments linking the centroids of the classes or parts of classes. Nearest Neighbor and Linear Discriminant Analysis are compared in the experiments to assess the performance of each classifier in discriminating the ripeness of oil palm. This paper proposes a multi-class linear classifier using Linear Discriminant Analysis (LDA) for image identification. The results show that LDA is well capable of separating multi-class features for ripeness identification.
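A minimal sketch of the comparison described above, using scikit-learn's LDA and nearest-neighbor classifiers on hypothetical ripeness features, is shown below; the feature values, class labels, and cross-validation setup are illustrative assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: rows are fruit images described by colour
# features; labels are ripeness classes (0 = unripe, 1 = ripe, 2 = overripe).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(50, 4)) for c in (0, 3, 6)])
y = np.repeat([0, 1, 2], 50)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("1-NN", KNeighborsClassifier(n_neighbors=1))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
```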

Keywords: multi-class, linear classifier, nearest neighbor, linear discriminant analysis

Procedia PDF Downloads 517
2862 Water Re-Use Optimization in a Sugar Platform Biorefinery Using Municipal Solid Waste

Authors: Leo Paul Vaurs, Sonia Heaven, Charles Banks

Abstract:

Municipal solid waste (MSW) is a virtually unlimited source of lignocellulosic material in the form of a waste paper/cardboard mixture which can be converted into fermentable sugars via cellulolytic enzyme hydrolysis in a biorefinery. The extraction of the lignocellulosic fraction and its preparation, however, are energy and water demanding processes. The waste water generated is a rich organic liquor with a high Chemical Oxygen Demand that can be partially cleaned while generating biogas in an Upflow Anaerobic Sludge Blanket bioreactor and further re-used in the process. In this work, an experiment was designed to determine the critical contaminant concentrations in water affecting either anaerobic digestion or enzymatic hydrolysis by simulating multiple water re-circulations. It was found that re-using the same water more than 16.5 times could decrease the hydrolysis yield by up to 65% and led to complete granule disaggregation. Due to the complexity of the water stream, the contaminant(s) responsible for the performance decrease could not be identified, but the decrease was suspected to be caused by sodium, potassium and lipid accumulation for the anaerobic digestion (AD) process and by heavy metal build-up for enzymatic hydrolysis. The experimental data were incorporated into a Water Pinch technology based model that was used to optimize the water re-utilization in the modelled system to reduce fresh water requirements and wastewater generation while ensuring all processes performed at an optimal level. Multiple scenarios were modelled in which sub-process requirements were evaluated in terms of importance, operational costs and impact on the CAPEX. The best compromise between water usage, AD and enzymatic hydrolysis yield was determined for each assumed level of contaminant degradation by the anaerobic granules. Results from the model will be used to build the first MSW based biorefinery in the USA.

Keywords: anaerobic digestion, enzymatic hydrolysis, municipal solid waste, water optimization

Procedia PDF Downloads 306
2861 Stability Analysis of a Low Power Wind Turbine for the Simultaneous Generation of Energy through Two Electric Generators

Authors: Daniel Icaza, Federico Córdova, Chiristian Castro, Fernando Icaza, Juan Portoviejo

Abstract:

In this article, the mathematical model is presented, and simulations were carried out using specialized software such as MATLAB before the construction of a 900-W wind turbine. The study was conducted with the intention of taking advantage of the rotation of the blades of the wind generator, whose speed is amplified by a gear system before being mechanically coupled to two electric generators of similar characteristics. This coupling generates a maximum voltage of 6 V DC from each generator, and by connecting the two in series 12 V DC is achieved, which is later stored in batteries and used when the user requires it. Laboratory tests were made to verify the level of power generation produced as a function of the wind speed at the entrance of the blades.

Keywords: smart grids, wind turbine, modeling, renewable energy, robust control

Procedia PDF Downloads 214
2860 Deproteinization of Moroccan Sardine (Sardina pilchardus) Scales: A Pilot-Scale Study

Authors: F. Bellali, M. Kharroubi, Y. Rady, N. Bourhim

Abstract:

In Morocco, the fish processing industry is an important source of income and generates a large amount of by-products including skins, bones, heads, guts, and scales. Those underutilized resources, particularly scales, contain a large amount of protein and calcium. Sardina pilchardus scales resulting from the transformation operation have the potential to be used as raw material for collagen production. Taking into account this strong expectation of the regional fish industry, upgrading of sardine scales is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Fish scales used as a potential source from which to isolate collagen therefore have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize the acid-solubilized collagen from the scales of the sardine, Sardina pilchardus. Experimental design methodology was adopted in the collagen processing for extraction optimization. The first stage of this work is to investigate the optimal conditions of sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with HCl solution or EDTA. And the last one is to establish the optimum conditions for the isolation of collagen from fish scales by solvent extraction. The advancement from lab scale to pilot scale is a critical stage in technological development. In this study, the optimal conditions for deproteinization, validated at laboratory scale, were employed in the pilot-scale procedure. The deproteinization of fish scales was then demonstrated at pilot scale (2 kg scales, 20 L NaOH), resulting in a protein content of 0.2 mg/ml and a hydroxyproline content of 2.11 mg/l. These results indicate that the pilot-scale process showed performance similar to that of the lab-scale one.

Keywords: deproteinization, pilot scale, scale, sardine pilchardus

Procedia PDF Downloads 432
2859 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of the decomposition process using various species of worms to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals using vermicompost, but its application is still scarce for the retention of organic compounds. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely batch and column processes, which represent laboratory and in-situ remediation. Characterization of the vermicompost and crude oil contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and atomic absorption spectrometry (AAS). The optimization of washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the responses from the laboratory experimental results. This study also investigated the application of machine learning models, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), which were evaluated using the coefficient of determination (R²) and the mean squared error (MSE). Removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for the batch process remediation. Optimization of the experimental factors, carried out using numerical optimization techniques by applying the desirability function method of response surface methodology (RSM), produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH value of 7.71, respectively. Removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for the column process remediation. The coefficient of determination (R²) for ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing the agreement between experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and projected findings. For the batch and column processes, the ANFIS coefficients of determination were 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost. Therefore, it is recommended to use machine learning models to predict the removal of crude oil from contaminated soil using vermicompost.
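For reference, the two evaluation metrics quoted above (R² and MSE) can be computed directly from observed and predicted removal efficiencies, as in the sketch below; the example values are hypothetical, not the study's data.

```python
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical removal efficiencies (%): observed vs. model-predicted.
observed  = [29.0, 45.2, 63.8, 80.5, 92.1, 98.9]
predicted = [30.5, 44.0, 65.1, 79.0, 93.0, 97.5]
print(f"R2  = {r_squared(observed, predicted):.4f}")
print(f"MSE = {mse(observed, predicted):.4f}")
```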

Keywords: ANFIS, ANN, crude-oil, contaminated soil, remediation and vermicompost

Procedia PDF Downloads 94
2858 Synthesis of ZnFe₂O₄-AC/CeMOF for Improved Photodegradation of Textile Dyes under Visible Light: Optimization and Statistical Study

Authors: Esraa Mohamed El-Fawal

Abstract:

A facile solvothermal procedure was applied to fabricate zinc ferrite nanoparticles (ZnFe₂O₄ NPs). Activated carbon (AC) derived from peanut shells was synthesized in a microwave through a chemical activation method. The ZnFe₂O₄-AC composite was then combined with a cerium-based metal-organic framework (CeMOF) by solid-state mixing to formulate the ZnFe₂O₄-AC/CeMOF composite. The synthesized photomaterials were characterized by scanning/transmission electron microscopy (SEM/TEM), photoluminescence (PL), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and ultraviolet-visible/diffuse reflectance spectroscopy (UV-Vis/DRS). The prepared ZnFe₂O₄-AC/CeMOF photomaterial shows significantly boosted efficiency for the photodegradation of methyl orange/methylene blue (MO/MB) compared with the pristine ZnFe₂O₄ and the ZnFe₂O₄-AC composite under visible-light irradiation. The favorable ZnFe₂O₄-AC/CeMOF photocatalyst displays the highest photocatalytic degradation efficiency of MB/MO (R: 91.5-88.6%, respectively) compared with the other as-prepared materials after 30 min of visible-light irradiation. The apparent reaction rate constants k: 1.94-1.31 min⁻¹ are also calculated. The boosted photocatalytic proficiency is ascribed to the heterojunction at the interface of the prepared photomaterial, which assists the separation of the charge carriers. To reach the optimum, statistical analysis using response surface methodology was applied. The effect of the independent parameters, A (pH), B (irradiation time), and C (initial pollutant concentration), on the response function (% photodegradation of the MB/MO dyes, as examples of azo dyes) was investigated using a central composite design. At the optimum conditions, the photodegradation efficiencies (%) of MB/MO are 99.8% and 97.8%, respectively. The ZnFe₂O₄-AC/CeMOF hybrid reveals good stability over four consecutive cycles.
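For context, apparent rate constants of this kind are usually obtained from a pseudo-first-order (Langmuir-Hinshelwood type) fit of dye concentration against irradiation time; a sketch of the commonly assumed relations is given below, using standard notation (C₀, Cₜ, k_app) that is not taken from the abstract.

```latex
% Assumed pseudo-first-order (Langmuir-Hinshelwood type) kinetics:
%   C_0 = initial dye concentration, C_t = concentration after irradiation time t
\ln\!\left(\frac{C_0}{C_t}\right) = k_{\mathrm{app}}\, t ,
\qquad
\text{degradation efficiency } (\%) = \frac{C_0 - C_t}{C_0}\times 100 .
```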

Keywords: azo-dyes, photo-catalysis, zinc ferrite, response surface methodology

Procedia PDF Downloads 150
2857 Optimization of Sodium Lauryl Surfactant Concentration for Nanoparticle Production

Authors: Oluwatoyin Joseph Gbadeyan, Sarp Adali, Bright Glen, Bruce Sithole

Abstract:

Sodium lauryl surfactant concentration optimization for nanoparticle production provided the platform for advanced research studies. Different concentrations (0.05%, 0.1%, and 0.2%) of sodium lauryl surfactant were added to snail shell powder during the milling processes to produce CaCO₃ at a smaller particle size. Epoxy nanocomposites prepared at a filler content of 2 wt.%, synthesized with different amounts of sodium lauryl surfactant, were fabricated using a conventional resin casting method. Mechanical properties such as tensile strength, stiffness, and hardness of the prepared nanocomposites were investigated to determine the effect of the sodium lauryl surfactant concentration on nanocomposite properties. It was observed that loading of the synthesized nano-calcium carbonate improved the mechanical properties of the neat epoxy at the lower sodium lauryl surfactant concentration of 0.05%. Notably, loading of Achatina fulica snail shell nanoparticles manufactured with a small sodium lauryl surfactant concentration of 0.05% increased the neat epoxy tensile strength by 26%, stiffness by 55%, and hardness by 38%. The homogeneous dispersion facilitated by the addition of sodium lauryl surfactant during the milling processes improved the mechanical properties. The research evidence suggests that nano-CaCO₃ synthesized from Achatina fulica snail shell possesses suitable reinforcement properties that can be used for nanocomposite fabrication. The evidence shows that adding a small concentration of sodium lauryl surfactant (0.05%) improves the dispersion of nanoparticles in the polymer matrix, which provides the improvement in mechanical properties.

Keywords: sodium lauryl surfactant, mechanical properties, Achatina fulica snail shell, calcium carbonate nanopowder

Procedia PDF Downloads 131
2856 Calculation of Electronic Structures of Nickel in Interaction with Hydrogen by Density Functional Theoretical (DFT) Method

Authors: Choukri Lekbir, Mira Mokhtari

Abstract:

Hydrogen-material interactions and mechanisms can be modeled at the nanoscale by quantum methods. In this work, the effect of hydrogen on the electronic properties of a cluster model of the material «nickel» has been studied by using the density functional theory (DFT) method. Two types of clusters were optimized: nickel and hydrogen-nickel systems. In the case of nickel clusters (n = 1-6) without the presence of hydrogen, three types of electronic structures (neutral, cationic and anionic) were optimized using three basis set calculations (B3LYP/LANL2DZ, PW91PW91/DGDZVP2, PBE/DGDZVP2). The comparison of the binding energies and bond lengths of the three structures of the nickel clusters (neutral, cationic and anionic) obtained with those basis sets shows that the results for the neutral and anionic nickel clusters are in good agreement with the experimental results. In the case of the neutral and anionic nickel clusters, comparing the energies and bond lengths obtained with the three basis sets shows that the PBE/DGDZVP2 basis set is the most consistent with the experimental results. In the case of the anionic nickel clusters (n = 1-6) with the presence of hydrogen, the optimization of the hydrogen-nickel (anionic) structures using the PBE/DGDZVP2 basis set shows that the binding energies and bond lengths increase compared to those obtained for the anionic nickel clusters without hydrogen, which reveals the armoring effect exerted by hydrogen on the electronic structure of nickel, due to the storage of hydrogen energy within the nickel cluster structures. The comparison between the bond lengths of both types of clusters shows an expansion of the cluster geometry due to the presence of hydrogen.
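The cluster binding energies compared above are conventionally defined per atom relative to the isolated-atom energies; a sketch of the standard definitions is given below (a common convention, not an expression quoted from the paper).

```latex
% Common definitions of the binding energy per atom (assumed, not quoted from the paper):
E_b(\mathrm{Ni}_n) = \frac{n\,E(\mathrm{Ni}) - E(\mathrm{Ni}_n)}{n},
\qquad
E_b(\mathrm{Ni}_n\mathrm{H}) = \frac{n\,E(\mathrm{Ni}) + E(\mathrm{H}) - E(\mathrm{Ni}_n\mathrm{H})}{n+1}.
```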

Keywords: binding energies, bond lengths, density functional theoretical, geometry optimization, hydrogen energy, nickel cluster

Procedia PDF Downloads 408
2855 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of 3 Python scripts that can all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our goal is now to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
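A minimal sketch of the kind of OpenCV processing described above (grayscale frames, binary thresholding, contour detection, and per-contour mean-fluorescence extraction) is shown below; the threshold value, minimum contour area, and data-loading step are illustrative assumptions rather than the authors' parameters.

```python
import cv2
import numpy as np

def extract_cell_traces(frames, threshold=50, min_area=30):
    """frames: sequence of 2D grayscale calcium-imaging frames.
    Detects cell-body contours on the first frame and returns, for each
    contour, the mean fluorescence over time (a rough ROI time series)."""
    first = np.asarray(frames[0]).astype(np.uint8)
    _, binary = cv2.threshold(first, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]

    traces = []
    for c in contours:
        mask = np.zeros_like(first)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)   # filled ROI mask
        trace = [float(cv2.mean(np.asarray(f), mask=mask)[0])  # mean F per frame
                 for f in frames]
        traces.append(np.array(trace))
    return contours, traces

# Usage (hypothetical): frames loaded from a TIFF stack with tifffile/imageio.
# contours, traces = extract_cell_traces(list_of_frames)
```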

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 70
2854 Use Cases Analysis of Free Space Optical Communication System

Authors: Kassem Saab, Fritzen Bart, Yves-Marie Seveque

Abstract:

The deployment of Free Space Optical Communications (FSOC) systems requires the development of robust and reliable Optical Ground Stations (OGS) that can be easily installed and operated. To this end, the Engineering Department of Airbus Defence and Space is actively working on the development of innovative and compact OGS solutions that can be deployed in various environments and provide high-quality connectivity under different atmospheric conditions. This article presents an overview of our recent developments in this field, including an evaluation study of different use cases of the FSOC with respect to different atmospheric conditions. The goal is to provide OGS solutions that are both simple and highly effective, allowing for the deployment of high-speed communication networks in a wide range of scenarios.

Keywords: end to end optical communication, laser propagation, optical ground station, turbulence

Procedia PDF Downloads 79
2853 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses infrequently fit a given parametric form. On the other hand, robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed which are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived based on constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient. To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates and also the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of a patient's allocation to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The link function based design achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs have been highlighted and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered as suitable alternatives to the traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, thus making the designs more ethically attractive to the patients. An existing clinical trial has been redesigned using these methods.
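As an illustration of the biased-coin idea behind the DBCD, the sketch below implements a commonly used doubly-adaptive allocation function (the Hu-Zhang form), in which the probability of assigning the next patient to an arm pulls the observed allocation proportion towards the current estimated target. This is a generic sketch, not the authors' CARA design; the tuning parameter gamma, the fixed target of 0.6, and the burn-in are assumptions.

```python
import numpy as np

def dbcd_probability(current_prop, target_prop, gamma=2.0):
    """Hu-Zhang doubly-adaptive biased coin: probability of allocating the
    next patient to arm 1, given the observed allocation proportion on arm 1
    (current_prop) and the estimated target proportion (target_prop)."""
    x, y = current_prop, target_prop
    if x in (0.0, 1.0):               # degenerate start: fall back to the target
        return y
    num = y * (y / x) ** gamma
    den = num + (1 - y) * ((1 - y) / (1 - x)) ** gamma
    return num / den

# Small simulation: the allocation proportion drifts toward a 0.6 target.
rng = np.random.default_rng(1)
assignments = [1, 0]                  # burn-in: one patient per arm
for _ in range(500):
    p = dbcd_probability(np.mean(assignments), 0.6)
    assignments.append(int(rng.random() < p))
print(f"final allocation proportion on arm 1: {np.mean(assignments):.3f}")
```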

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 152
2852 Modeling and Analysis of Drilling Operation in Shale Reservoirs with Introduction of an Optimization Approach

Authors: Sina Kazemi, Farshid Torabi, Todd Peterson

Abstract:

Drilling in shale formations is frequently time-consuming, challenging, and fraught with mechanical failures such as stuck pipes or the hole packing off when the cuttings removal rate is not sufficient to clean the bottom hole. Crossing heavy oil shale and sand reservoirs with active shale and microfractures is generally associated with severe fluid losses, causing a reduction in the rate of cuttings removal. These circumstances compromise a well's integrity and result in a lower rate of penetration (ROP). This study presents the collective results of field studies and theoretical analysis conducted on data from South Pars and North Dome in an Iran-Qatar offshore field. Solutions to complications related to drilling in shale formations are proposed through systematically analyzing and applying modeling techniques to selected field mud logging data. Field data measurements during actual drilling operations indicate that in a shale formation where the return flow of polymer mud was almost lost in the upper dolomite layer, the hole cleaning performance and ROP progressively change when higher string rotations are initiated. Likewise, it was observed that this effect minimized the rotational torque and improved well integrity in the subsequent casing running. Given similar geologic conditions and drilling operations in reservoirs targeting shale as the producing zone, like the Bakken formation within the Williston Basin and Lloydminster, Saskatchewan, a drill bench dynamic modeling simulation was used to simulate borehole cleaning efficiency and mud optimization. The results obtained by altering the RPM (string revolutions per minute) at the same pump rate and with optimized mud properties exhibit a positive correlation with the field measurements. The field investigation and the developed model in this report show that increasing the speed of string revolution, as far as geomechanics and drill bit conditions permit, can minimize the risk of mechanically stuck pipes while reaching a higher than expected ROP in shale formations. Based on the data obtained from modeling and field data analysis, optimized drilling parameters and hole cleaning procedures are suggested for minimizing the risk of the hole packing off and enhancing well integrity in shale reservoirs. Whereas optimization of the ROP at a lower pump rate maintains wellbore stability, it also saves time for the operator while reducing carbon emissions and the fatigue of mud motors and power supply engines.

Keywords: ROP, circulating density, drilling parameters, return flow, shale reservoir, well integrity

Procedia PDF Downloads 70
2851 The Vision Based Parallel Robot Control

Authors: Sun Lim, Kyun Jung

Abstract:

In this paper, we describe the control strategy of a high speed parallel robot system with an EtherCAT network. This work deals with a parallel robot system under centralized control on a real-time operating system such as Windows TwinCAT3. Most of the control scheme and algorithm is implemented on the master platform on the PC, while the input and output interfaces are ported to the slave side. The data, up to 1000 bytes, are transferred within a maximum of 20 microseconds. EtherCAT is a very high speed and stable industrial network. The control strategy with EtherCAT is very useful and robust in an Ethernet network environment. The developed parallel robot is controlled by a pre-designed nonlinear controller for pick and place motion tracking with a 6G/0.43 cycle time. The experiment shows the good design and validation of the controller.

Keywords: parallel robot control, EtherCAT, nonlinear control, parallel robot inverse kinematics

Procedia PDF Downloads 554
2850 Artificial Neural Networks and Geographic Information Systems for Coastal Erosion Prediction

Authors: Angeliki Peponi, Paulo Morgado, Jorge Trindade

Abstract:

Artificial Neural Networks (ANNs) and Geographic Information Systems (GIS) are applied as a robust tool for modeling and forecasting erosion changes in Costa Caparica, Lisbon, Portugal, for 2021. ANNs present noteworthy advantages compared with other methods used for prediction and decision making in urban coastal areas. A multilayer perceptron type of ANN was used. A sensitivity analysis was conducted on the natural and social forces and dynamic relations in the dune-beach system of the study area. Variations in the network's parameters were performed in order to select the optimum topology of the network. The developed methodology appears fitted to reality; however, further steps would make it better suited.
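A minimal sketch of a multilayer perceptron of the type mentioned above, trained on hypothetical forcing variables to predict shoreline change, is shown below; scikit-learn is assumed, and the variables, their relationship, and the network topology are illustrative rather than the study's data or final architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Hypothetical forcing variables per shoreline segment (e.g. wave height, wave
# period, beach slope, urban pressure index) and observed shoreline retreat (m).
rng = np.random.default_rng(42)
X = rng.random((200, 4))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] - 2.0 * X[:, 2] + 0.5 * rng.standard_normal(200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"held-out R2 = {model.score(X_te, y_te):.3f}")
```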

Keywords: artificial neural networks, backpropagation, coastal urban zones, erosion prediction

Procedia PDF Downloads 373
2849 Multivariate Analysis on Water Quality Attributes Using Master-Slave Neural Network Model

Authors: A. Clementking, C. Jothi Venkateswaran

Abstract:

Mathematical and computational functionalities such as descriptive mining, optimization, and prediction are employed to support natural resource planning. Optimization techniques are adopted for water quality prediction and for determining the influence of its attributes. Water properties are tainted when one water resource is merged with another. This work aimed to predict the connectivity of water resource distribution in accordance with water quality and sediment using an innovative proposed master-slave back-propagation neural network model. The experimental results were arrived at through collecting water quality attributes, computing a water quality index, designing and developing a neural network model to determine water quality and sediment, applying the master-slave back-propagation neural network model to determine variations in water quality and sediment attributes between the water resources, and making recommendations for connectivity. Homogeneous and parallel biochemical reactions influence water quality and sediment while water is distributed from one location to another. Therefore, an innovative master-slave neural network model [M(9:9:2)::S(9:9:2)] was designed and developed to predict the attribute variations. The result of the training dataset is given as input to the master model, and its maximum weights are assigned as input to the slave model to predict the water quality. The developed master-slave model predicted physicochemical attribute weight variations for 85% to 90% of the water quality target values. The sediment level variations were also predicted, at 0.01% to 0.05% of each water quality percentage. The model produced significant variations in the physicochemical attribute weights. According to the predicted experimental weight variations on the training dataset, effective recommendations are made to connect the different resources.

Keywords: master-slave back propagation neural network model (MSBPNNM), water quality analysis, multivariate analysis, environmental mining

Procedia PDF Downloads 460
2848 Grey Relational Analysis Coupled with Taguchi Method for Process Parameter Optimization of Friction Stir Welding on 6061 AA

Authors: Eyob Messele Sefene, Atinkut Atinafu Yilma

Abstract:

The demand for the highest strength-to-weight ratio has attracted increasing interest in virtually all areas where weight reduction is indispensable. One of the recent advances in manufacturing to achieve this goal is friction stir welding (FSW). The process is widely used for joining similar and dissimilar non-ferrous materials. In FSW, the mechanical properties of the weld joints are governed by properly selected process parameters. This paper presents the findings on optimum process parameters in an attempt to attain enhanced mechanical properties of the weld joint. The experiment was conducted on a 5 mm 6061 aluminum alloy sheet. A butt joint configuration was employed. The process parameters rotational speed, traverse speed (feed rate), axial force, dwell time, tool material and tool profile were utilized. The process parameters were optimized making use of a mixed L18 orthogonal array and the grey relational analysis method with the larger-the-better quality characteristic. The mechanical properties of the weld joint were examined through the tensile test, hardness test and liquid penetrant test at ambient temperature. ANOVA was conducted in order to investigate the significant process parameters. This research shows that dwell time, rotational speed, tool shape, and traverse speed are significant, with a joint efficiency of about 82.58%. Nine confirmatory tests were conducted, and the results indicate that the average values of the grey relational grade fall within the 99% confidence interval. Hence the experiment is proven reliable.
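A minimal sketch of the grey relational analysis step described above (larger-the-better normalisation, grey relational coefficients, and the grey relational grade) is shown below; the response values, equal response weights, and the distinguishing coefficient of 0.5 are illustrative assumptions.

```python
import numpy as np

def grey_relational_grade(responses, zeta=0.5):
    """responses: (n_runs, n_responses) array of larger-the-better quality
    characteristics (e.g. tensile strength, hardness). Returns the grey
    relational grade per experimental run (equal response weights assumed)."""
    r = np.asarray(responses, dtype=float)
    # Larger-the-better normalisation to [0, 1]
    norm = (r - r.min(axis=0)) / (r.max(axis=0) - r.min(axis=0))
    delta = 1.0 - norm                       # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

# Hypothetical responses for four FSW runs: [tensile strength MPa, hardness HV]
runs = [[185, 58], [210, 62], [172, 55], [230, 66]]
grades = grey_relational_grade(runs)
print("grey relational grades:", np.round(grades, 3))
print("best run:", int(np.argmax(grades)) + 1)
```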

Keywords: friction stir welding, optimization, 6061 AA, Taguchi

Procedia PDF Downloads 78
2847 Molecular Modeling of Structurally Diverse Compounds as Potential Therapeutics for Transmissible Spongiform Encephalopathy

Authors: Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević, Lidija R. Jevrić

Abstract:

A prion is a protein, a certain form of which is considered an infectious agent. It is presumed to be the cause of the transmissible spongiform encephalopathies (TSEs). The protein it is composed of, called PrP, can fold in structurally distinct ways. At least one of those 3D structures is transmissible to other prion proteins. Prions can be found in the brain tissue of healthy people and have a certain biological role. The structure of prions naturally occurring in healthy organisms is denoted PrPc, and the structure of the infectious prion is labeled PrPSc. PrPc may play a role in synaptic plasticity and neuronal development. Also, it may be required for neuronal myelin sheath maintenance, including a role in iron uptake and iron homeostasis. PrPSc can be considered an environmental pollutant. The main aim of this study was to carry out the molecular modeling and calculation of molecular descriptors (lipophilicity, physico-chemical and topological descriptors) of structurally diverse compounds which can be considered as anti-prion agents. Molecular modeling was conducted applying ChemBio3D Ultra version 12.0 software. The obtained 3D models were subjected to energy minimization using the molecular mechanics force field method (MM2). The cutoff for structure optimization was set at a gradient of 0.1 kcal/(Å mol). The Austin Model 1 (AM1) was used for full geometry optimization of all structures. The obtained set of molecular descriptors is applied in the analysis of similarities and dissimilarities among the tested compounds. This study is an important step in the further development of quantitative structure-activity relationship (QSAR) models, which can be used for the prediction of the anti-prion activity of newly synthesized compounds.

Keywords: chemometrics, molecular modeling, molecular descriptors, prions, QSAR

Procedia PDF Downloads 312
2846 3D Numerical Studies and Design Optimization of a Swallowtail Butterfly with Twin Tail

Authors: Arunkumar Balamurugan, G. Soundharya Lakshmi, V. Thenmozhi, M. Jegannath, V. R. Sanal Kumar

Abstract:

The aerodynamics of insects is of topical interest in the aeronautical industries due to its wide applications in various types of Micro Air Vehicles (MAVs). Note that MAVs have small geometric dimensions, operate at significantly lower speeds on the order of 10 m/s, and their Reynolds numbers are approximately 150,000 or lower. In this paper, a numerical study has been carried out to capture the flow physics of a biologically inspired swallowtail butterfly with a fixed wing having a twin tail at a flight speed of 10 m/s. Comprehensive numerical simulations have been carried out on the swallowtail butterfly with twin tail flying at a speed of 10 m/s with uniform upper and lower angles of attack in both lateral and longitudinal positions for identifying the best wing orientation with better aerodynamic efficiency. The grid system in the computational domain was selected after detailed grid refinement exercises. Parametric analytical studies have been carried out with different lateral and longitudinal angles of attack to find the better aerodynamic efficiency at the same flight speed. The results reveal that the lift coefficient significantly increases with marginal changes in the longitudinal angle and vice versa. But in the case of the drag coefficient, the conventional changes have been noticed, viz., drag increases at high longitudinal angles. We observed that the change of the twin tail section has a significant impact on the formation of vortices and the aerodynamic efficiency of the MAVs. We concluded that for every lateral angle there is an exact longitudinal orientation for the existence of an aerodynamically efficient flying condition of any MAV. This numerical study is a pointer towards the design optimization of twin-tail MAVs with flapping wings.

Keywords: aerodynamics of insects, MAV, swallowtail butterfly, twin tail MAV design

Procedia PDF Downloads 382
2845 Robust Design of Electroosmosis Driven Self-Circulating Micromixer for Biological Applications

Authors: Bahram Talebjedi, Emily Earl, Mina Hoorfar

Abstract:

One of the issues that arises with microscale lab-on-a-chip technology is that the laminar flow within the microchannels limits the mixing of fluids. To combat this, micromixers have been introduced as a means to try and incorporate turbulence into the flow to better aid the mixing process. This study presents an electroosmotic micromixer that balances vortex generation and degeneration with the inlet flow velocity to greatly increase the mixing efficiency. A comprehensive parametric study was performed to evaluate the role of the relevant parameters on the mixing efficiency. It was observed that the suggested micromixer is perfectly suited for biological applications due to its low pressure drop (below 10 Pa) and low shear rate. The proposed micromixer with optimized working parameters is able to attain a mixing efficiency of 95% in a span of 0.5 seconds using a frequency of 10 Hz, a voltage of 0.7 V, and an inlet velocity of 0.366 mm/s.

Keywords: microfluidics, active mixer, pulsed AC electroosmosis flow, micromixer

Procedia PDF Downloads 118
2844 Characteristic Study on Conventional and Soliton Based Transmission System

Authors: Bhupeshwaran Mani, S. Radha, A. Jawahar, A. Sivasubramanian

Abstract:

Here, we study the characteristic features of conventional (on-off keying) and soliton based transmission systems. We consider a 20 Gbps transmission system implemented with Conventional Single Mode Fiber (C-SMF) to examine the role of the Gaussian pulse, which is characteristic of conventional propagation, and the hyperbolic-secant pulse, which is characteristic of soliton propagation. We note the influence of these pulses with respect to different dispersion lengths and soliton periods in the conventional and soliton systems, respectively, and evaluate the system performance in terms of the quality factor. From the analysis, we could show that the soliton pulse has more consistent performance, even for long distances without dispersion compensation, than the conventional system, as it is robust to dispersion. For a transmission length of 200 km, the soliton system yielded a Q of 33.958 while the conventional system was totally exhausted with Q = 0.
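For reference, the pulse shapes and the two length scales varied in this comparison follow from standard fiber-optics definitions; a sketch of these textbook relations is given below (standard notation, not expressions quoted from the paper).

```latex
% Standard pulse and length-scale definitions (assumed, Agrawal-style notation):
%   T_0 = input pulse width, \beta_2 = group-velocity dispersion of the fiber
A_{\mathrm{Gauss}}(0,t) = A_0 \exp\!\left(-\frac{t^2}{2T_0^{2}}\right),
\qquad
A_{\mathrm{sol}}(0,t) = A_0\,\operatorname{sech}\!\left(\frac{t}{T_0}\right),
\qquad
L_D = \frac{T_0^{2}}{\lvert\beta_2\rvert}\ \text{(dispersion length)},
\qquad
z_0 = \frac{\pi}{2}\,L_D\ \text{(soliton period)}.
```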

Keywords: dispersion length, return-to-zero (RZ), soliton, soliton period, Q-factor

Procedia PDF Downloads 331
2843 Comparative Analysis between Wired and Wireless Technologies in Communications: A Review

Authors: Jafaru Ibrahim, Tonga Agadi Danladi, Haruna Sani

Abstract:

Many players in the telecommunications industry are looking for new ways to maximize their investment in communication networks while ensuring reliable and secure information transmission. There is a variety of communications medium solutions; the two most popular in use are wireless technology and wired options, such as copper and fiber-optic cable. Wired networks proved their potential in earlier days, but nowadays wireless communication has emerged as a robust, intelligent and preferred communication technique. Each of these types of communication medium has its advantages and disadvantages according to its technological characteristics. Wired and wireless networking have different hardware requirements, ranges, mobility, reliability and benefits. The aim of this paper is to compare the wired and wireless media on the basis of various parameters such as usability, cost, efficiency, flexibility, coverage, reliability, mobility, speed and security.

Keywords: cost, mobility, reliability, speed, security, wired, wireless

Procedia PDF Downloads 452
2842 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography

Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner

Abstract:

Nowadays, three-dimensional Cone Beam CT (CBCT) has become a widespread routine clinical imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as daily pretreatment patient alignment for radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the amount of radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections which could therefore lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations chosen to maximize the performance of the reconstructed image at particular regions of interest. To achieve this, we developed a box phantom consisting of several small polytetrafluoroethylene target spheres at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D Point Spread Function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the Full Width at Half Maximum (FWHM) of the local PSFs, each related to a particular target. A lower value of the FWHM indicates a better spatial resolution of the reconstruction results at the target area. One important feature of interventional radiology is that we have very well-known imaging targets, as prior knowledge of the patient anatomy (e.g. a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and consider it as the digital phantom in our simulations to find the optimal trajectory for a specific target. Based on the simulation phase, we obtain the optimal trajectory, which can then be applied on the device in the real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and real data acquisition. Our experimental results based on both simulation and real data show that our proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections in order to localize the targets. Our results show that the proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory while using just one third of the number of projections. Conclusion: We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.

Keywords: CBCT, C-arm, reconstruction, trajectory optimization

Procedia PDF Downloads 124
2841 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects

Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town

Abstract:

The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to achieve better decision-making processes in the development of mining production and maintaining safety. This paper highlights the advantages of Power BI, a powerful intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques to improve decision-making. Leveraging some of the most complex techniques in data science, advanced analytics is used to do everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may have limitations. This paper studies the capability to use Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. This dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data metrics, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations like borehole logs and Stereonet were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rock fall analyses, rock mass characterization, and drone data. This further enhances the dashboard's usefulness in future projects, including operation, development, closure, and rehabilitation phases. Additionally, this helps in minimizing the necessity of utilizing multiple software programs in projects. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding in informed decision-making and efficient project management throughout various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.

Keywords: geotechnical data analysis, power BI, visualization, decision-making, mining industry

Procedia PDF Downloads 74
2840 Towards a Non-Cohesive Self: Metamodernist Literature as a Case Study

Authors: Ali Oublal

Abstract:

If any period in history seems appropriate for the study of identity, it is the period of greatest mobility: the 21st century. Margaret Wetherill (2009) is thus right in asking who we can be in this age. New biographies of people, their trajectories and new locations appear on the ground; how people make sense of the self becomes the central question not only for social scientists and cultural theorists but also for literary critics. New-fangled technologies have resulted in the substitution of stable identities by multiple, fragmented and more uncertain identities. A liquid sense of the self, as well as unstable and dynamic forms of life, does not fail to inspire novelists, who have given a robust sense of identity to their characters. The following account sketches the features of identity as presented in metamodernist novels: The Sympathizer, Sisters, and A Girl Is a Half-formed Thing. It is a stance that refutes the claim of Elliott, who still adheres to a stable state of identity in the metamodernist age while reconciling the two paradigms of modernity and postmodernity.

Keywords: identity, metamodernism, fragmentation, stability, literature

Procedia PDF Downloads 82
2839 A User Interface for Easiest Way Image Encryption with Chaos

Authors: D. López-Mancilla, J. M. Roblero-Villa

Abstract:

Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 90's as a new approach for signal encoding that differs from the conventional methods that use numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need for secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of a low level of efficiency when the image is large. Encryption based on chaos offers a new and efficient way to obtain fast and highly secure image encryption. In this work, a user interface for image encryption and a novel and very simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector can be hidden within the chaotic vector. Once this is done, an array with the original dimensions of the image is formed again. An analysis of the encryption security of the images using statistical analysis is made, and an optimization stage is used to improve the image encryption security while, at the same time, the image can be accurately recovered. The user interface uses the algorithms designed for the encryption of images, allowing the user to read an image from the hard drive or another external device. The user interface encrypts the image, allowing three modes of encryption. These modes are given by three different chaotic systems that the user can choose. Once the image is encrypted, it is possible to view the security analysis and save the result on the hard disk. The main results of this study show that this simple method of encryption, using the optimization stage, allows an encryption security competitive with the complicated encryption methods used in other works. In addition, the user interface allows encrypting an image with chaos and submitting it through any public communication channel, including the internet.
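A minimal sketch of the reshape-and-combine idea described above, using a logistic map as the chaotic source and XOR as the combination step, is shown below; the map, its parameters, and the XOR masking are illustrative choices and do not correspond to the three chaotic modes offered by the interface.

```python
import numpy as np

def logistic_keystream(length, x0=0.3141592, r=3.99):
    """Generate a byte keystream from the logistic map x -> r*x*(1-x).
    (x0, r) play the role of the secret key in this toy sketch."""
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(image, key=(0.3141592, 3.99)):
    """Reshape the image into a 1-D vector, XOR it with the chaotic vector,
    and reshape back to the original dimensions. Decryption is identical."""
    flat = image.reshape(-1)
    stream = logistic_keystream(flat.size, *key)
    return np.bitwise_xor(flat, stream).reshape(image.shape)

# Round-trip check on a random 8-bit "image".
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(encrypt(cipher), img)   # XOR masking is its own inverse
```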

Keywords: image encryption, chaos, secure communications, user interface

Procedia PDF Downloads 473
2838 Experimental Optimization in Diamond Lapping of Plasma Sprayed Ceramic Coatings

Authors: S. Gowri, K. Narayanasamy, R. Krishnamurthy

Abstract:

Plasma spraying, from the point of view of value engineering, is considered a cost-effective technique to deposit high performance ceramic coatings on ferrous substrates for use in the aero, automobile, electronics and semiconductor industries. High-performance ceramics such as alumina, zirconia, and titania-based ceramics have become a key part of turbine blades, automotive cylinder liners, and microelectronic and semiconductor components due to their ability to insulate and distribute heat. However, as these industries continue to advance, improved methods are needed to increase both the flexibility and speed of ceramic processing in these applications. The ceramics mentioned were individually coated on a structural steel substrate with a NiCr bond coat of 50-70 micron thickness, with the final thickness in the range of 150 to 200 microns. Optimal spray parameters were selected based on bond strength and porosity. The optimally processed specimens were superfinished by lapping using diamond and green SiC abrasives. Interesting results could be observed as follows: the green SiC could improve the surface finish of the lapped surfaces almost as much as diamond in the case of the alumina and titania based ceramics, but the diamond abrasives could improve the surface finish of PSZ better than green SiC. The conventional random scratches were absent in the alumina and titania ceramics, and in PSZ those marks were found to be fewer. The flatness accuracy could be improved by up to 60 to 85%. The surface finish and geometrical accuracy were measured and modeled. The abrasives in the midrange of their particle size could improve the surface quality faster and better than particles in the low and high size ranges. From the experimental investigations after the lapping process, the optimal lapping time, abrasive size, lapping pressure, etc. could be evaluated.

Keywords: atmospheric plasma spraying, ceramics, lapping, surface quality, optimization

Procedia PDF Downloads 403
2837 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate

Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim

Abstract:

Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of the connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for the quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks in a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for the quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. This method was optimized in terms of peak symmetry using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as an indicator of the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on the QbD concepts. The DS was created with the TF (in a range between 0.98 and 1.2) in order to establish the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP: Methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), in the flow rate (0.8, 1.0 and 1.2 mL.min⁻¹) and in the oven temperature (30, 35, and 40 ºC). The USP method allowed the quantification of the drug only after a long time (40-50 minutes). In addition, the method uses a high flow rate (1.5 mL.min⁻¹), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable if the drug were not a racemic mixture, since the co-elution of the isomers can lead to unreliable peak integration. Therefore, the optimization was suggested in order to reduce the analysis time, aiming for a better peak resolution and TF. For the optimized method, the analysis of the surface-response plot made it possible to confirm the ideal analytical conditions: 45 ºC, 0.8 mL.min⁻¹ and 80:20 USP-MP: Methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a peak of high resolution, showing a TF value of 1.17. This promotes good co-elution of the isomers of HCQ, ensuring an accurate quantification of the raw material as a racemic mixture. This method also proved to be approximately 18 times faster than the reference method, using a lower flow rate, reducing even more the consumption of solvents and, consequently, the cost of analysis. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding the retention time and, especially, the peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for the quantification of the drug as a racemic mixture, not requiring the separation of the isomers.

Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic

Procedia PDF Downloads 624
2836 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to the calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e. existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number plays the role of an amplifier of uncertainties on the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where using interior point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on a preconditioning of the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to any complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding the numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
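For context, the Kaczmarz scheme mentioned above as a point of comparison is a classical row-projection method; a minimal randomized Kaczmarz sketch for Ax = b is given below (the classical algorithm, not the authors' stochastic-matrix preconditioner; the test system is an assumed consistent random one).

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Classical randomized Kaczmarz iteration for Ax = b: at each step,
    project the current iterate onto the hyperplane of one row, chosen with
    probability proportional to its squared norm."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    m, n = A.shape
    row_norms = np.einsum("ij,ij->i", A, A)
    probs = row_norms / row_norms.sum()
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Consistent overdetermined test system (assumed data).
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 10))
x_true = rng.standard_normal(10)
x_est = randomized_kaczmarz(A, A @ x_true)
print("relative error:", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))
```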

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 120