Search results for: energy performance gap
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18887

10397 Count of Trees in East Africa with Deep Learning

Authors: Nubwimana Rachel, Mugabowindekwe Maurice

Abstract:

Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques; deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting, in both forest and non-forest areas, using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and U-Net, along with Generative Adversarial Networks to generate synthetic training samples and other augmentation techniques, including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, U-Net demonstrated the best performance with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the U-Net model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
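
As an illustration of the augmentation chain named in the abstract, the sketch below assembles Random Resized Crop, AutoAugment, and a simple linear contrast enhancement using torchvision. It is a minimal example under assumed parameters (crop size, contrast factor), not the authors' actual pipeline.

```python
# Hedged sketch of an augmentation chain like the one described; the crop
# size and contrast factor are assumptions, not values from the paper.
from torchvision import transforms

def linear_contrast(img, alpha=1.2):
    # Simple linear contrast enhancement: scale pixel values about the mean.
    mean = img.mean()
    return ((img - mean) * alpha + mean).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomResizedCrop(256),   # random scale/crop to 256x256 (assumed size)
    transforms.AutoAugment(),            # learned augmentation policy (ImageNet default)
    transforms.ToTensor(),               # PIL image -> float tensor in [0, 1]
    transforms.Lambda(lambda t: linear_contrast(t)),
])
# Usage: aug_tensor = augment(pil_image), e.g. inside a Dataset's __getitem__.
```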

Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization

Procedia PDF Downloads 59
10396 Evidence on the Nature and Extent of Fall in Oil Prices on the Financial Performance of Listed Companies: A Ratio Analysis Case Study of the Insurance Sector in the UAE

Authors: Pallavi Kishore, Mariam Aslam

Abstract:

The sharp decline in oil prices that started in 2014 affected most economies in the world either positively or negatively. In some economies, particularly the oil-exporting countries, the effects were felt immediately. The Gulf Cooperation Council (GCC henceforth) countries are oil- and gas-dependent, with the largest oil reserves in the world. The UAE (United Arab Emirates) has been striving to diversify away from oil and expects higher non-oil growth in 2018. These two factors, falling oil prices and the economy strategizing away from oil dependence, make a compelling case to study the financial performance of various sectors in the economy. Among other sectors, the insurance sector is widely recognized as an important indicator of the health of the economy. An expanding population, a surge in construction and infrastructure, increased life expectancy, and greater expenditure on automobiles and other luxury goods translate to a booming insurance sector. A slow-down of the insurance sector, on the other hand, may indicate a general slow-down in the economy. Therefore, a study on the insurance sector will help understand the general nature of the current economy. This study involves calculations and comparisons of ratios pre and post the fall in oil prices in the insurance sector in the UAE. Data for a sample of 33 companies listed on the official stock exchanges of the UAE, the Dubai Financial Market and the Abu Dhabi Stock Exchange, were collected, and empirical analysis was employed to study the financial performance pre and post the fall in oil prices. Ratios were calculated in 5 categories: profitability, liquidity, leverage, efficiency, and investment. The means pre- and post-fall are compared to conclude that the profitability ratios, including ROSF (Return on Shareholder Funds), ROCE (Return on Capital Employed) and NPM (Net Profit Margin), have all taken a hit. Parametric tests, including a paired t-test, conclude that while the fall in profitability ratios is statistically significant, the other ratios have been quite stable over the period. The efficiency, liquidity, gearing and investment ratios have not been severely affected by the fall in oil prices. This may be due to the implementation of stronger regulatory policies and is a testimony to the diversification into the non-oil economy. The regulatory authorities can use the findings of this study to ensure transparency in revealing financial information to the public and employ policies that will help further the health of the economy. The study will also help understand which areas within the sector could benefit from more regulations.
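
A minimal sketch (with made-up numbers, not the study's data) of the pre/post comparison described: compute a profitability ratio for each company in both periods and run a paired t-test, as in the paper.

```python
# Hypothetical net profit margins (NPM) per company, pre- and post-fall.
from scipy import stats

npm_pre  = [0.18, 0.22, 0.15, 0.25, 0.19, 0.21]
npm_post = [0.12, 0.16, 0.11, 0.20, 0.13, 0.15]

t_stat, p_value = stats.ttest_rel(npm_pre, npm_post)  # paired t-test
print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}")
# p < 0.05 would indicate a statistically significant fall in NPM.
```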

Keywords: UAE, insurance sector, ratio analysis, oil price, profitability, liquidity, gearing, investment, efficiency

Procedia PDF Downloads 240
10395 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The compared control systems' performance is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected while ensuring reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantage that the state-space formulation provides for modelling MIMO systems, such controllers are expected to be easy to tune for disturbance rejection, assuming that their designer is experienced. An in-depth multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
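
A minimal sketch of a discrete position-form PID, the 1 DOF structure compared in the article, follows; the gains and sample time are illustrative, not the authors' tuned values.

```python
# Discrete position-form PID: rectangular integration, backward-difference
# derivative. Gains and sample time below are assumptions for illustration.
class DiscretePID:
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts                   # accumulate integral term
        derivative = (error - self.prev_error) / self.ts   # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = DiscretePID(kp=2.0, ki=0.5, kd=0.05, ts=0.001)  # 1 kHz loop (assumed)
u = pid.update(setpoint=1.0, measurement=0.0)         # control signal for the motor drive
```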

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 262
10394 Exploring the Contribution of Dynamic Capabilities to a Firm's Value Creation: The Role of Competitive Strategy

Authors: Mona Rashidirad, Hamid Salimian

Abstract:

Dynamic capabilities, as the most considerable capabilities of firms in the current fast-moving economy, may not be sufficient for performance improvement, but their contribution to performance is undeniable. While much of the extant literature investigates the impact of dynamic capabilities on organisational performance, little attention has been devoted to understanding whether and how dynamic capabilities create value. Dynamic capabilities, as the mirror of competitive strategies, should enable firms to search for and seize new ideas and to integrate and coordinate the firm's resources and capabilities in order to create value. A careful review of the existing knowledge base leaves us puzzled regarding the relationship among competitive strategies, dynamic capabilities and value creation. This study thus attempts to fill this gap by empirically investigating the impact of dynamic capabilities on value creation and the mediating impact of competitive strategy on this relationship. We aim to contribute to the dynamic capability view (DCV), in both theoretical and empirical senses, by exploring the impact of dynamic capabilities on firms' value creation and whether competitive strategy can play any role in strengthening or weakening this relationship. Using a sample of 491 firms in the UK telecommunications market, the results demonstrate that dynamic sensing, learning, integrating and coordinating capabilities play a significant role in a firm's value creation, and that competitive strategy mediates the impact of dynamic capabilities on value creation. Adopting the DCV, this study investigates whether the value generated from dynamic capabilities depends on a firm's competitive strategy. This study argues that a firm's competitive strategy can mediate its ability to derive value from its dynamic capabilities, and it explains the extent to which a firm's competitive strategy may influence its value generation. The results of the dynamic capabilities-value relationships support our expectations and justify the non-financial value added of the four dynamic capability processes in a highly turbulent market, such as UK telecommunications. Our analytical findings of the relationship among dynamic capabilities, competitive strategy and value creation provide further evidence of the undeniable role of competitive strategy in deriving value from dynamic capabilities. The results reinforce the argument for the need to consider the mediating impact of organisational contextual factors, such as a firm's competitive strategy, to examine how they interact with dynamic capabilities to deliver value. The findings of this study provide significant contributions to theory. Unlike some previous studies which conceptualise dynamic capabilities as a unidimensional construct, this study demonstrates the benefits of understanding the details of the link among the four types of dynamic capabilities, competitive strategy and value creation. In terms of contributions to managerial practice, this research draws attention to the importance of competitive strategy in conjunction with the development and deployment of dynamic capabilities to create value. Managers are now equipped with solid empirical evidence which explains why the DCV has become essential to firms in today's business world.

Keywords: dynamic capabilities, resource based theory, value creation, competitive strategy

Procedia PDF Downloads 238
10393 Study of Clutch Cable Architecture and Its Influence in Efficiency of Mechanical Cable Release System

Authors: M. Devamanalan, K. Pothiraj, M. Sudhan

Abstract:

In a competitive market like India, there is high demand for equal contributions from the performance and durability aspects of any system. In general, a vehicle has multiple sub-systems such as powertrain, BIW (body-in-white), brakes, actuation, suspension and seats. To withstand the market challenges, the contribution of each sub-system is vital; the malfunction of any one sub-system will directly impact the performance of the overall system and lead to dissatisfaction for the end user. The powertrain system consists of several sub-systems, of which the clutch is one of the prime sub-systems in MT vehicles, assisting smoother gear shifts through proper clutch disengagement and engagement. In general, most vehicles have a mechanical, semi-hydraulic or fully hydraulic clutch release system, whereas in small commercial vehicles (SCV) the most widely used clutch release system is the mechanical cable release system, owing to its lower cost and functional requirements. The major bottlenecks in the cable-type clutch release system are an increase in pedal effort due to rising hysteresis, and hard gear shifting due to efficiency loss and cable slackness as the vehicle accumulates mileage. This study focuses on how the efficiency and hysteresis change over the mileage of the vehicle as a consequence of the design architecture of the outer and inner cable. The study involves several cable design validation results from vehicle level and rig level, obtained through defined cable routings and test procedures. Results are compared to identify the most suitable cable design architecture based on higher efficiency and lower hysteresis at the start and end of the validation.

Keywords: clutch, clutch cable, efficiency, architecture, cable routing

Procedia PDF Downloads 112
10392 Automated Building Internal Layout Design Incorporating Post-Earthquake Evacuation Considerations

Authors: Sajjad Hassanpour, Vicente A. González, Yang Zou, Jiamou Liu

Abstract:

Earthquakes pose a significant threat to both structural and non-structural elements in buildings, putting human lives at risk. Effective post-earthquake evacuation is critical for ensuring the safety of building occupants. However, current design practices often neglect the integration of post-earthquake evacuation considerations into the early-stage architectural design process. To address this gap, this paper presents a novel automated internal architectural layout generation tool that optimizes post-earthquake evacuation performance. The tool takes an initial plain floor plan as input, along with specific requirements from the user/architect, such as minimum room dimensions, corridor width, and exit lengths. Based on these inputs, the tool first randomly generates different architectural layouts. Second, the human post-earthquake evacuation behaviour is thoroughly assessed for each generated layout using the advanced Agent-Based Building Earthquake Evacuation Simulation (AB2E2S) model. The AB2E2S prototype is a post-earthquake evacuation simulation tool that incorporates variables related to earthquake intensity, architectural layout, and human factors. It leverages a hierarchical agent-based simulation approach, incorporating reinforcement learning to mimic human behaviour during evacuation. The model evaluates different layout options and provides feedback on evacuation flow, time, and possible casualties due to earthquake non-structural damage. By integrating the AB2E2S model into the automated layout generation tool, architects and designers can obtain optimized architectural layouts that prioritize post-earthquake evacuation performance. Through the use of the tool, architects and designers can explore various design alternatives, considering different minimum room requirements, corridor widths, and exit lengths. This approach ensures that evacuation considerations are embedded in the early stages of the design process. In conclusion, this research presents an innovative automated internal architectural layout generation tool that integrates post-earthquake evacuation simulation. By incorporating evacuation considerations into the early-stage design process, architects and designers can optimize building layouts for improved post-earthquake evacuation performance. This tool empowers professionals to create resilient designs that prioritize the safety of building occupants in the face of seismic events.
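
The generate-and-evaluate loop described above can be sketched schematically as follows; the layout representation, the generator, and the evacuation-scoring function are hypothetical stand-ins for the tool's actual components and for the AB2E2S model.

```python
# Schematic sketch only: generate candidate layouts, score each with an
# evacuation simulation, keep the best. Both functions are stand-ins.
import random

def generate_layout(min_room_dim, corridor_width, exit_length):
    # Stand-in: would return a random partition of the floor plate that
    # satisfies the user's minimum dimensions (details omitted here).
    return {"rooms": random.randint(4, 10),
            "corridor_width": corridor_width,
            "exit_length": exit_length}

def evacuation_score(layout):
    # Stand-in for the AB2E2S agent-based simulation; lower is better
    # (e.g. mean evacuation time in seconds).
    return random.uniform(60, 300)

candidates = [generate_layout(3.0, 1.8, 2.4) for _ in range(100)]
best_layout = min(candidates, key=evacuation_score)
```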

Keywords: agent-based simulation, automation in design, architectural layout, post-earthquake evacuation behavior

Procedia PDF Downloads 94
10391 High-Performance Thin-layer Chromatography (HPTLC) Analysis of Multi-Ingredient Traditional Chinese Medicine Supplement

Authors: Martin Cai, Khadijah B. Hashim, Leng Leo, Edmund F. Tian

Abstract:

Analysis of traditional Chinese medicine (TCM) supplements has always been a laborious task, particularly in the case of multi-ingredient formulations. Traditionally, herbal extracts are analysed using one or a few marker compounds. In recent years, however, pharmaceutical companies have been introducing health supplements of TCM active ingredients to cater to the needs of consumers in today's fast-paced society. As such, new problems arise in the areas of composition identification and quality analysis. In most products or supplements formulated with multiple TCM herbs, the chemical composition and nature of each raw material differ greatly from the others in the formulation. This results in a requirement for individual analytical processes in order to identify the marker compounds in the various botanicals. Thin-layer chromatography (TLC) is a simple, cost-effective, yet well-regarded method for the analysis of natural products, both as a Pharmacopeia-approved method for identification and authentication of herbs and as a great analytical tool for the discovery of chemical compositions in herbal extracts. Recent technical advances introduced High-Performance TLC (HPTLC), where, with the help of automated equipment and improvements in the chromatographic materials, both quality and reproducibility are greatly improved, allowing for highly standardised analysis with greater detail. Here we report an industrial consultancy project with ONI Global Pte Ltd for the analysis of LAC Liver Protector, a TCM formulation aimed at improving liver health. The aim of this study was to identify 4 key components of the supplement using HPTLC, following protocols derived from Chinese Pharmacopeia standards. By comparing the TLC profiles of the supplement to the extracts of the herbs listed on the label, this project proposes a simple and cost-effective analysis of the presence of the 4 marker compounds in the multi-ingredient formulation using 4 different HPTLC methods. With the increasing trend of small and medium-sized enterprises (SMEs) bringing natural products and health supplements to the market, it is crucial that the quality of both raw materials and end products be well assured for the protection of consumers. With the technology of HPTLC, science can be incorporated to help SMEs with their quality control, thereby ensuring product quality.

Keywords: traditional Chinese medicine supplement, high performance thin layer chromatography, active ingredients, product quality

Procedia PDF Downloads 274
10390 Energy Loss Reduction in Oil Refineries through Flare Gas Recovery Approaches

Authors: Majid Amidpour, Parisa Karimi, Marzieh Joda

Abstract:

For the last few years, the release of burned undesirable by-products has become a challenging issue in the oil industries. Flaring, as one of the main sources of air contamination, has detrimental and long-lasting effects on human health and is considered a substantial cause of energy losses worldwide. This research studies the implications of two main flare gas recovery methods at three oil refineries, all in Iran, designated case I, case II, and case III in order of increasing production capacity. In the proposed methods, flare gases are converted into more valuable products before combustion in the flare networks. The first approach involves collecting, compressing and converting the flare gas into smokeless fuel which can be used in the fuel gas system of the refineries. The other scenario includes utilizing the flare gas as a feed to the liquefied petroleum gas (LPG) production unit already established in the refineries. The processes of these scenarios are simulated, and the capital investment is calculated for each procedure. The cumulative profits of the scenarios are evaluated using the Net Present Value (NPV) method. Furthermore, a sensitivity analysis based on the total propane and butane mole fraction is carried out to make a rational comparison for the LPG production approach, and the results are illustrated for different mole fractions of propane and butane. As the mole fractions of propane and butane contained in LPG differ between the summer and winter seasons, the results corresponding to the LPG scenario are demonstrated for each season. The results of the simulations show that the cumulative profit in the fuel gas production scenario and the LPG production rate increase with the capacity of the refineries. Moreover, the investment return time in the LPG production method declines and then rises with increasing C3 and C4 content. The minimum return time occurs at combined propane and butane mole fractions of 0.7, 0.6, and 0.7 in cases I, II, and III, respectively. Based on a comparison of the investment return time and cumulative profit, fuel gas production is the superior scenario for the three case studies.
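
The NPV comparison described can be sketched as below; the capital cost, annual profit and discount rate are illustrative placeholders, not the study's values.

```python
# Hedged sketch of an NPV and simple payback calculation for one recovery
# scenario; all cash-flow numbers below are assumptions for illustration.
def npv(rate, cashflows):
    # cashflows[0] is the (negative) capital investment at year 0.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

capex = -12e6                  # capital investment, USD (assumed)
annual_profit = 3.5e6          # yearly profit from recovered gas, USD (assumed)
flows = [capex] + [annual_profit] * 10   # 10-year horizon (assumed)

print(f"NPV = {npv(0.10, flows):,.0f} USD")             # 10% discount rate (assumed)
print(f"payback = {-capex / annual_profit:.1f} years")  # simple, undiscounted
```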

Keywords: flare gas reduction, liquefied petroleum gas, fuel gas, net present value method, sensitivity analysis

Procedia PDF Downloads 151
10389 A Holistic Approach for Technical Product Optimization

Authors: Harald Lang, Michael Bader, A. Buchroithner

Abstract:

Holistic methods covering the development process as a whole, e.g. systems engineering, have established themselves in product design. However, technical product optimization, representing improvements in efficiency and/or minimization of losses, usually applies to single components of a system. A holistic approach is therefore defined, based on a hierarchical view of systems engineering, and subsequently presented using the example of an electromechanical flywheel energy storage system for automotive applications.

Keywords: design, product development, product optimization, systems engineering

Procedia PDF Downloads 620
10388 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach

Authors: Eric Mxolisi Mkhondo

Abstract:

Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly due to imperfections of components which are invisible to the naked eye. They also tend to cause unsatisfactory performance during operation due to deformation caused by environmental conditions. One of the effective tools to manage the deviations and interaction of parts in a system is tolerance analysis, a quantitative tool for predicting the tolerance variations which are defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that the deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts due to the effects of environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system's specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower) that vary the components' geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to the effects of operating temperatures. The method is used to evaluate the nominal condition, and the worst-case conditions in the maximum and minimum dimensions of assembled components. These three conditions are evaluated under specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system is used to illustrate the effectiveness of the methodology.
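
A minimal sketch of the worst-case evaluation described, with each dimension taken at its tolerance limits and scaled by thermal expansion at the listed operating temperatures, follows; the dimensions, tolerances and expansion coefficient are assumed for illustration.

```python
# Worst-case tolerance stack-up with thermal expansion; part dimensions,
# tolerances and the expansion coefficient are hypothetical placeholders.
ALPHA = 23e-6    # linear expansion coefficient, 1/degC (aluminium, assumed)
T_REF = 20.0     # reference temperature of the nominal dimensions, degC

parts = [        # (nominal mm, lower tolerance mm, upper tolerance mm)
    (12.00, -0.02, +0.02),
    (30.00, -0.05, +0.05),
    ( 8.50, -0.01, +0.03),
]

for temp in (-40, -18, 4, 26, 48, 70):   # operating temperatures from the paper
    dT = temp - T_REF
    # Worst-case stack: every dimension at its extreme limit, each scaled
    # by thermal expansion at this temperature.
    lo = sum((n + tl) * (1 + ALPHA * dT) for n, tl, _ in parts)
    hi = sum((n + tu) * (1 + ALPHA * dT) for n, _, tu in parts)
    print(f"{temp:>4} degC: stack = {lo:.3f} .. {hi:.3f} mm")
```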

Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism

Procedia PDF Downloads 162
10387 Biosorption Kinetics, Isotherms, and Thermodynamic Studies of Copper (II) on Spirogyra sp.

Authors: Diwan Singh

Abstract:

The ability of non-living Spirogyra sp. biomass to biosorb copper(II) ions from aqueous solutions was explored. The effects of contact time, pH, initial copper ion concentration, biosorbent dosage and temperature were investigated in batch experiments. Both the Freundlich and Langmuir isotherms were found applicable to the experimental data (R2>0.98). Qmax obtained from the Langmuir isotherm was found to be 28.7 mg/g of biomass. The values of Gibbs free energy (ΔGº) and enthalpy change (ΔHº) suggest that the sorption is spontaneous and endothermic at 20ºC-40ºC.
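
The Langmuir fit behind the reported Qmax = 28.7 mg/g can be reproduced with a short script like the sketch below; the equilibrium data points here are hypothetical, not the study's measurements.

```python
# Fit the Langmuir isotherm q = Qmax*b*Ce / (1 + b*Ce) to equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, b):
    return qmax * b * ce / (1 + b * ce)

ce = np.array([5, 10, 20, 40, 80], dtype=float)   # equilibrium conc., mg/L (hypothetical)
qe = np.array([8.1, 13.5, 19.2, 24.0, 26.8])      # uptake, mg/g (hypothetical)

(qmax, b), _ = curve_fit(langmuir, ce, qe, p0=(30.0, 0.05))
print(f"Qmax = {qmax:.1f} mg/g, b = {b:.3f} L/mg")
```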

Keywords: biosorption, Spirogyra sp., contact time, pH, dose

Procedia PDF Downloads 418
10386 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As for the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with massive antenna array sensors. To alleviate this difficulty, a generalized sidelobe canceller (GSC) with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
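
As background to the robust variant proposed here, the baseline GSC structure (quiescent branch, blocking matrix, LMS-adapted lower branch) can be sketched in numpy as follows; the array geometry, data and step size are synthetic assumptions, and the sketch omits the paper's iterative steering-vector estimation.

```python
# Baseline GSC sketch: y = wq^H x - wa^H (B^H x), with wa adapted by LMS
# to minimize output power. All signals below are synthetic.
import numpy as np

N = 8                                    # sensors, half-wavelength spacing (assumed)
theta = np.deg2rad(10.0)                 # presumed DOA of the desired signal
a = np.exp(1j * np.pi * np.arange(N) * np.sin(theta))   # steering vector

wq = a / N                               # quiescent weight (distortionless toward a)
# Blocking matrix: orthonormal basis of the complement of a (via SVD).
B = np.linalg.svd(a.reshape(-1, 1), full_matrices=True)[0][:, 1:]

wa = np.zeros(N - 1, dtype=complex)      # adaptive weights of the lower branch
mu = 1e-3                                # LMS step size (assumed)

rng = np.random.default_rng(0)
for _ in range(5000):                    # snapshots (noise-only data for the sketch)
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    d = wq.conj() @ x                    # upper (quiescent) branch output
    z = B.conj().T @ x                   # blocked data (desired signal removed)
    y = d - wa.conj() @ z                # GSC output
    wa = wa + mu * z * np.conj(y)        # LMS update minimizing |y|^2
```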

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 106
10385 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples which can be utilized for preclinical studies. Method: Reverse phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of this method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all 3 QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast high-throughput sample analysis with low analysis cost for analyzing aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
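
As an illustration of the linearity assessment reported (R² > 0.9967 over 0.5-40.0 µg/mL), a minimal calibration-curve sketch follows; the peak-area ratios are hypothetical.

```python
# Linear calibration of analyte/internal-standard peak-area ratio versus
# concentration; the ratios below are made-up illustration values.
import numpy as np

conc  = np.array([0.5, 1.0, 5.0, 10.0, 20.0, 40.0])      # ug/mL levels
ratio = np.array([0.051, 0.10, 0.52, 1.03, 2.08, 4.11])  # hypothetical

slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
print(f"y = {slope:.4f}x + {intercept:.4f}, R^2 = {r2:.4f}")  # paper: R^2 > 0.9967
```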

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 214
10384 Empirical Modeling and Optimization of Laser Welding of AISI 304 Stainless Steel

Authors: Nikhil Kumar, Asish Bandyopadhyay

Abstract:

The laser welding process is a capable technology for forming automobile, microelectronics, marine and aerospace parts, among others. In the present work, a mathematical and statistical approach is adopted to study the laser welding of AISI 304 stainless steel. A robotically controlled 500 W pulsed Nd:YAG laser source with 1064 nm wavelength has been used for welding, and butt joints are made. The effects of the welding parameters, namely laser power, scanning speed and pulse width, on the seam width and depth of penetration have been investigated using empirical models developed by response surface methodology (RSM). Weld quality is directly correlated with the weld geometry. Twenty sets of experiments have been conducted as per the central composite design (CCD) matrix, and a second-order mathematical model has been developed for predicting the desired responses. The results of ANOVA indicate that laser power has the most significant effect on the responses. Microstructural analysis as well as hardness testing of selected weld specimens has been carried out to understand the metallurgical and mechanical behaviour of the weld. The average micro-hardness of the weld is observed to be higher than that of the base metal; the higher hardness of the weld results from grain refinement and δ-ferrite formation in the weld structure. The results suggest that lower line energy generally produces a finer grain structure and better mechanical properties than high line energy. The combined effects of the input parameters on the responses have been analyzed with the help of the developed 3-D response surfaces and contour plots. Finally, multi-objective optimization has been conducted to produce a weld joint with complete penetration, minimum seam width and an acceptable welding profile. Confirmatory tests have been conducted at the optimum parametric conditions to validate the applied optimization technique.
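
The second-order RSM model fit described reduces to an ordinary least-squares problem over linear, quadratic and interaction terms, as sketched below; the design points and responses are placeholders, not the paper's twenty CCD runs.

```python
# Fit a full second-order response surface y = b0 + sum(bi xi) + sum(bii xi^2)
# + sum(bij xi xj) by least squares; all data below are hypothetical.
import numpy as np

# Columns: laser power (W), scanning speed (mm/s), pulse width (ms)
X = np.array([[300, 2, 4], [400, 2, 4], [300, 4, 4], [400, 4, 4],
              [300, 2, 6], [400, 2, 6], [300, 4, 6], [400, 4, 6],
              [350, 3, 5], [350, 3, 5], [250, 3, 5], [450, 3, 5]], float)
y = np.array([0.74, 0.88, 0.70, 0.84, 0.80, 0.95,
              0.76, 0.90, 0.85, 0.84, 0.72, 0.99])   # seam width, mm (hypothetical)

def quad_terms(X):
    n = X.shape[1]
    cols = [np.ones(len(X))]                       # intercept
    cols += [X[:, i] for i in range(n)]            # linear terms
    cols += [X[:, i] ** 2 for i in range(n)]       # quadratic terms
    cols += [X[:, i] * X[:, j]                     # two-way interactions
             for i in range(n) for j in range(i + 1, n)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)   # model coefficients
```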

Keywords: ANOVA, laser welding, modeling and optimization, response surface methodology

Procedia PDF Downloads 291
10383 Avoiding Gas Hydrate Problems in Qatar Oil and Gas Industry: Environmentally Friendly Solvents for Gas Hydrate Inhibition

Authors: Nabila Mohamed, Santiago Aparicio, Bahman Tohidi, Mert Atilhan

Abstract:

One of Qatar's biggest problems in processing its natural resource, natural gas, is the frequently occurring blockage in pipelines caused by uncontrolled gas hydrate formation. Several millions of dollars are spent at the process site to clear the blockages safely by using chemical inhibitors. We aim to establish a national database which addresses the physical conditions that promote Qatari natural gas to form gas hydrates in the pipelines. Moreover, we aim to design and test novel hydrate inhibitors that are suitable for Qatari natural gas and its processing facilities. From these perspectives, we aim to provide more effective and sustainable reservoir utilization and processing of Qatari natural gas. In this work, we present the initial findings of a QNRF-funded project which deals with the natural gas hydrate formation characteristics of Qatari-type gas using both experimental (PVTx) and computational (molecular simulation) methods. We present data from two fully automated apparatus: a gas hydrate autoclave and a rocking cell. Hydrate equilibrium curves, including growth/dissociation conditions for multi-component systems, are reported for several gas mixtures that represent Qatari-type natural gas, with and without the presence of well-known kinetic and thermodynamic hydrate inhibitors. Ionic liquids were designed and used to test their inhibition performance, and their DFT and molecular modeling simulation results were obtained and compared with the experimental results. Results showed significant performance of ionic liquids at up to 0.5% by volume, with up to 2 to 4 °C inhibition at high pressures.

Keywords: gas hydrates, natural gas, ionic liquids, inhibition, thermodynamic inhibitors, kinetic inhibitors

Procedia PDF Downloads 1310
10382 Kinetics Study for the Recombinant Cellulosome to the Degradation of Chlorella Cell Residuals

Authors: C. C. Lin, S. C. Kan, C. W. Yeh, C. I Chen, C. J. Shieh, Y. C. Liu

Abstract:

In this study, lipid-deprived residuals (LDRs) of microalgae were hydrolyzed for the production of reducing sugars by using a recombinant Bacillus cellulosome carrying eight genes from Clostridium thermocellum ATCC 27405. The obtained cellulosome was found to exist mostly in the broth supernatant, with a cellulosome activity of 2.4 U/mL. Furthermore, the Michaelis-Menten constant (Km) and Vmax of the cellulosome were found to be 14.832 g/L and 3.522 U/mL, respectively. The activation energy of the cellulosome for hydrolyzing the microalgae LDRs was calculated as 32.804 kJ/mol.
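
The two estimates reported (Km and Vmax from a Michaelis-Menten fit, Ea from an Arrhenius plot) can be sketched as follows; the substrate, rate and temperature data are hypothetical stand-ins, while the paper's values are Km = 14.832 g/L, Vmax = 3.522 U/mL and Ea = 32.804 kJ/mol.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([2, 5, 10, 20, 40, 80], float)         # substrate, g/L (hypothetical)
v = np.array([0.42, 0.89, 1.42, 2.02, 2.57, 2.97])  # rate, U/mL (hypothetical)
(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(3.5, 15.0))

# Arrhenius: ln k = ln A - Ea/(R*T); the slope of ln k vs 1/T is -Ea/R.
R = 8.314                                           # J/(mol K)
T = np.array([303.15, 313.15, 323.15, 333.15])      # K (hypothetical)
k = np.array([0.8, 1.2, 1.7, 2.4])                  # rate constants (hypothetical)
slope, _ = np.polyfit(1.0 / T, np.log(k), 1)
print(f"Km = {km:.2f} g/L, Vmax = {vmax:.2f} U/mL, Ea = {-slope * R / 1e3:.1f} kJ/mol")
```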

Keywords: lipid-deprived residuals of microalgae, cellulosome, cellulose, reducing sugars, kinetics

Procedia PDF Downloads 397
10381 Investigation on the Effect of Titanium (Ti) Plus Boron (B) Addition to the Mg-AZ31 Alloy in the as Cast and After Extrusion on Its Metallurgical and Mechanical Characteristics

Authors: Adnan I. O. Zaid, Raghad S. Hemeimat

Abstract:

Magnesium-aluminum alloys are versatile materials used in manufacturing a number of engineering and industrial parts in the automobile and aircraft industries due to their high strength-to-weight ratio. Against these preferable characteristics, magnesium is difficult to deform at room temperature; it is therefore alloyed with other elements, mainly aluminum and zinc, to add the required properties, particularly a high strength-to-weight ratio. Mg and its alloys oxidize rapidly, so care should be taken during melting or machining, although they are not a fire hazard. Grain refinement is an important technology to improve the mechanical properties and microstructural uniformity of alloys. Grain refinement was introduced in the early fifties, when Cibula showed that the presence of Ti, and of Ti+B, produced a great refining effect in Al; since then it has become industrial practice to grain refine Al. Most of the published work on grain refinement has been directed toward grain refining Al and zinc alloys; however, the effect of the addition of rare earth material on the grain size or the mechanical behavior of Mg alloys has not been previously investigated. This forms the main objective of the present work, in which the effect of Ti addition on the grain size, mechanical behavior, ductility, and the extrusion force and energy consumed in forward extrusion of the Mg-AZ31 alloy is investigated and discussed in two conditions: first in the as-cast condition, and second after extrusion. It was found that the addition of Ti to the Mg-AZ31 alloy resulted in a 14% reduction of its grain size; the reduction in grain size after extrusion was much higher. The increase in Vickers hardness was 3% after the addition of Ti in the as-cast condition, and higher hardness values were achieved after extrusion. Furthermore, an increase in the strength coefficient by 36% was achieved with the addition of Ti to the Mg-AZ31 alloy in the as-cast condition. Similarly, the work hardening index was also increased, indicating an enhancement of ductility and formability. As for the extrusion process, it was found that the force and energy required for extrusion were reduced by 57% and 59%, respectively, with the addition of Ti.

Keywords: cast condition, direct extrusion, ductility, Mg-AZ31 alloy, super-plasticity

Procedia PDF Downloads 452
10380 Design and Development of Bioactive a-Hydroxy Carboxylate Group Modified MnFe₂O₄ Nanoparticle: Comparative Fluorescence Study, Magnetism and DNA Nuclease Activity

Authors: Indranil Chakraborty, Kalyan Mandal

Abstract:

Three new α-hydroxy carboxylate group functionalized MnFe₂O₄ nanoparticles (NPs) have been developed to explore the microscopic origin of the ligand-modified fluorescence and magnetic properties of nearly monodispersed MnFe₂O₄ NPs. The surface functionalization has been carried out with three small organic ligands (tartrate, malate, and citrate) having different numbers of α-hydroxy carboxylate functional groups along with different steric effects. A detailed study unveils that the α-hydroxy carboxylate moiety of the ligands plays a key role in generating intrinsic fluorescence in functionalized MnFe₂O₄ NPs through the activation of ligand-to-metal charge transfer transitions, associated with ligand-Mn²⁺/Fe³⁺ interactions, along with the d-d transition corresponding to d-orbital energy level splitting of Fe³⁺ ions on the NP surface. Further, the MnFe₂O₄ NPs show a maximum 140.88% increase in coercivity and a 97.95% decrease in magnetization upon functionalization, compared to the bare NPs. The ligands that induce the smallest crystal field splitting of the d-orbital energy levels of the transition metal ions are found to result in the strongest ferromagnetic activation of the NPs. Finally, our tartrate-functionalized MnFe₂O₄ (T-MnFe₂O₄) NPs have been utilized for studying DNA binding interactions and nuclease activity, with a view to stimulating their beneficial activities toward diverse biomedical applications. The spectroscopic measurements indicate that the T-MnFe₂O₄ NPs bind calf thymus DNA by an intercalative mode. The ability of the T-MnFe₂O₄ NPs to induce DNA cleavage was studied by the gel electrophoresis technique, where the complex is found to promote the cleavage of pBR322 plasmid DNA from the supercoiled form I to the linear form II and the nicked form III with good efficiency. This may be taken into account for designing new biomolecular detection agents and anti-cancer drugs, which can open a new door toward diverse non-invasive biomedical applications.

Keywords: MnFe₂O₄ nanoparticle, α-hydroxy carboxylic acid, comparative fluorescence, magnetism study, DNA interaction, nuclease activity

Procedia PDF Downloads 132
10379 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection

Authors: Devadrita Dey Sarkar

Abstract:

Despite the many technological advances of the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. A computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms is presented in this abstract, employing features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammographs, radiologists' accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for breast CAD may include the computerized detection of breast nodules as well as the computerized classification of benign and malignant nodules. In order to assist in differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.

Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)

Procedia PDF Downloads 453
10378 Discriminant Analysis of Pacing Behavior on Mass Start Speed Skating

Authors: Feng Li, Qian Peng

Abstract:

The mass start speed skating (MSSS) is a new event for the 2018 PyeongChang Winter Olympics and will be an official race at the 2022 Beijing Winter Olympics. Considering that event rankings are based on points gained on laps, it is worthwhile to investigate the pacing behavior on each lap, which directly influences the ranking of the race. The aim of this study was to describe pacing behavior and performance in MSSS with regard to skaters' level (SL), competition stage (semi-final/final) (CS), and gender (G). All men's and women's races in the World Cup and World Championships in the 2018-2019 and 2019-2020 seasons were analyzed. As a result, a total of 601 skaters from 36 races were observed. ANOVA for repeated measures was applied to compare the pacing behavior on each lap, and three-way ANOVA for repeated measures was used to identify the influence of SL, CS, and G on pacing behavior and total time spent. In general, the results showed that the pacing behavior from fast to slow was cluster 1—laps 4, 8, 12, 15, 16; cluster 2—laps 5, 9, 13, 14; cluster 3—laps 3, 6, 7, 10, 11; and cluster 4—laps 1 and 2 (p=0.000). For CS, the total time spent in the final was less than in the semi-final (p=0.000). For SL, top-level skaters spent less total time than middle-level and low-level skaters (p≤0.002), while there was no significant difference between middle-level and low-level (p=0.214). For G, men's skaters spent less total time than women on all laps (p≤0.048). This study could help coaching staff better understand pacing behavior with regard to SL, CS, and G, further providing references for improving pacing strategy and decision making before and during the race.
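
The repeated-measures comparison of lap times can be sketched with statsmodels as below; the long-format data (one time per skater per lap, for three representative laps) are hypothetical.

```python
# One-way repeated-measures ANOVA for a lap effect on pace; the data
# frame below is made up for illustration, not the study's data set.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "skater": [s for s in range(1, 7) for _ in range(3)],
    "lap":    ["lap1", "lap8", "lap16"] * 6,
    "time":   [20.1, 17.9, 16.8,  20.4, 18.2, 17.0,  19.8, 17.7, 16.5,
               20.9, 18.5, 17.3,  20.2, 18.0, 16.9,  20.6, 18.3, 17.1],
})

res = AnovaRM(data, depvar="time", subject="skater", within=["lap"]).fit()
print(res)   # F-test for a lap effect (the paper reports p = 0.000)
```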

Keywords: performance analysis, pacing strategy, winning strategy, winter Olympics

Procedia PDF Downloads 188
10377 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Earlier regarded as a support function in an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, which was unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in its day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity, security, and utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is supported by real-time data and an evaluation of the data for a given oil production asset in an application tool, SAS. The reason for using SAS as the application for our analysis is that SAS provides an analytics-based framework to improve uptimes, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, we can predict maintenance problems before they happen and determine root causes in order to update processes for future prevention.

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 353
10376 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries they are adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
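
For context on the WoE estimate discussed above, the classical per-bin computation from good/bad counts is sketched below with made-up counts; in the Hybrid Model these counts are effectively replaced by matching the ML model's score distribution.

```python
# Classical Weight of Evidence per score bin: WoE = ln(%good / %bad).
import math

bins = [            # (bin label, goods, bads) - hypothetical counts
    ("score 0-20",   100, 400),
    ("score 20-40",  300, 300),
    ("score 40-60",  600, 200),
    ("score 60-100", 900, 100),
]

total_good = sum(g for _, g, _ in bins)
total_bad  = sum(b for _, _, b in bins)

for label, good, bad in bins:
    woe = math.log((good / total_good) / (bad / total_bad))
    print(f"{label}: WoE = {woe:+.3f}")   # positive = fewer bads than average
```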

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 129
10375 Bracing Applications for Improving the Earthquake Performance of Reinforced Concrete Structures

Authors: Diyar Yousif Ali

Abstract:

Braced frames, alongside other structural systems such as shear walls or moment-resisting frames, have been a valuable and effective technique for strengthening structures against seismic loads. Under wind or seismic excitation, diagonal members act as truss web elements that carry tension or compression stresses. This study considers the effect of the bracing diagonal configuration on the base shear and displacement of a building. Two models were created, and nonlinear pushover analysis was implemented. Results show that bracing members enhance the lateral load performance of the Concentric Braced Frame (CBF) considerably. The purpose of this article is to study the nonlinear response of reinforced concrete structures which contain hollow-pipe steel braces as the major structural elements against earthquake loads. A five-storey reinforced concrete structure was selected in this study, and two different reinforced concrete frames were considered. The first system was an un-braced frame, while the second was a braced frame with diagonal bracing. Analytical models of the bare frame and braced frame were built in SAP 2000. The performance of all structures was evaluated using nonlinear static analyses, from which the base shear and displacements were compared. Results are plotted in diagrams and discussed extensively; the analyses showed that the braced frame was capable of carrying more lateral load, with higher stiffness and lower roof displacement, in comparison with the bare frame.

Keywords: reinforced concrete structures, pushover analysis, base shear, steel bracing

Procedia PDF Downloads 86
10374 Wireless Backhauling for 5G Small Cell Networks

Authors: Abdullah A. Al Orainy

Abstract:

Small cell backhaul solutions need to be cost-effective, scalable, and easy to install. This paper presents an overview of small cell backhaul technologies. Wireless solutions including TV white space, satellite, sub-6 GHz radio wave, microwave and mmWave with their backhaul characteristics are discussed. Recent research on issues like beamforming, backhaul architecture, precoding and large antenna arrays, and energy efficiency for dense small cell backhaul with mmWave communications is reviewed. Recent trials of 5G technologies are summarized.

Keywords: backhaul, small cells, wireless, 5G

Procedia PDF Downloads 499
10373 Physical and Physiological Characteristics of Young Soccer Players in Republic of Macedonia

Authors: Sanja Manchevska, Vaska Antevska, Lidija Todorovska, Beti Dejanova, Sunchica Petrovska, Ivanka Karagjozova, Elizabeta Sivevska, Jasmina Pluncevic Gligoroska

Abstract:

Introduction: A number of positive effects on a player's physical status, including body mass components, are attributed to the training process. As young soccer players grow up, qualitative and quantitative changes appear and contribute to better performance. Players' anthropometric and physiological characteristics are recognized as important determinants of performance. Material: A sample of 52 soccer players with an age span from 9 to 14 years was divided into two groups differentiated by age. The younger group consisted of 25 boys under 11 years (mean age 10.2), and the second group consisted of 27 boys with a mean age of 12.64. Method: The set of basic anthropometric parameters was analyzed: height, weight, BMI (Body Mass Index) and body mass components. Maximal oxygen uptake was tested using the treadmill protocol by Brus. Results: The group aged under 11 years showed the following anthropometric and physiological features: average height = 143.39 cm, average weight = 44.27 kg, BMI = 18.77, Err = 5.04, Hb = 13.78 g/l, VO2 = 37.72 mlO2/kg. For the participants aged 12 to 14 years, the average values were: height = 163.7 cm, weight = 56.3 kg, BMI = 19.6, VO2 = 39.52 ml/kg, Err = 5.01, Hb = 14.3 g/l. Conclusion: The physiological parameters (maximal oxygen uptake, erythrocytes and Hb) were insignificantly higher in the older group compared to the younger group. There were no statistically significant differences between the analyzed anthropometric parameters of the two groups except for the basic measurements (height and weight).

Keywords: body composition, young soccer players, BMI, physical status

Procedia PDF Downloads 394
10372 Improving the Safety Performance of Workers by Assessing the Impact of Safety Culture on Workers’ Safety Behaviour in Nigeria Oil and Gas Industry: A Pilot Study in the Niger Delta Region

Authors: Efua Ehiaguina, Haruna Moda

Abstract:

Interest in the development of an appropriate safety culture in the oil and gas industry has taken centre stage among stakeholders in the industry. Human behaviour has been identified as a major contributor to occupational accidents, where abnormal activities associated with safety management are taken as normal behaviour. Poor safety culture is one of the major factors that influence employees' safety behaviour at work, which may consequently result in injuries and accidents, and strengthening such a culture can improve workers' safety performance. The Nigerian oil and gas industry has contributed to the growth and development of the country in diverse ways. However, in terms of the safety and health of workers, this industry is a dangerous place to work, as workers are often exposed to occupational safety and health hazards. To ascertain the state of employees' safety culture and how it impacts health and safety compliance within the local industry, an online safety culture survey targeting frontline workers within the industry was administered, covering major subjects that include: perception of management commitment and style of leadership; safety communication methods and their resultant impact on employees' behaviour; and employee safety commitment and training needs. The preliminary results revealed that 54% of the participants feel that there is a lack of motivation from management to work safely. In addition, 55% of participants revealed that employers place more emphasis on work delivery than on employees' safety on the installation. It is expected that the study outcome will provide measures aimed at strengthening and sustaining safety culture in the Nigerian oil and gas industry.

Keywords: oil and gas safety, safety behaviour, safety culture, safety compliance

Procedia PDF Downloads 133
10371 The Feasibility of Anaerobic Digestion at 45°C

Authors: Nuruol S. Mohd, Safia Ahmed, Rumana Riffat, Baoqiang Li

Abstract:

Anaerobic digestion at mesophilic and thermophilic temperatures has been widely studied and evaluated by numerous researchers, but little extensive research has been conducted on digestion in the intermediate zone of 45°C, mainly due to the notion that limited microbial activity occurs within this zone. The objectives of this research were to evaluate the performance of anaerobic digestion at 45°C and its capability of producing Class A biosolids, in comparison to mesophilic and thermophilic systems operated at 35°C and 55°C, respectively, and to investigate possible inhibition factors affecting performance at this temperature. The 45°C systems were not able to achieve methane yield and effluent quality comparable to the mesophilic system, even though they produced biogas with about 62-67% methane. The 45°C digesters suffered from high acetate accumulation, but sufficient buffering capacity was observed, as the pH, alkalinity, and volatile fatty acids (VFA)-to-alkalinity ratio remained within recommended values. The acetate accumulation observed in the 45°C systems was presumably due to the high temperature, which raised the hydrolysis rate and consequently produced a large amount of toxic salts that bound the substrate, making it not readily available for consumption by methanogens. Although it contributed to a 52-71% reduction in acetate degradation, the accumulation could not be considered completely inhibitory. Additionally, no ammonia inhibition was observed at 45°C, and the digesters achieved a volatile solids (VS) reduction of 47.94±4.17%. Pathogen counts were less than 1,000 MPN/g total solids, thus meeting the Class A biosolids requirement.
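
For context, a VS reduction figure like the one quoted above can be computed from influent and effluent volatile solids fractions. The abstract does not state the calculation method; the sketch below assumes the Van Kleeck equation (which presumes fixed solids are conserved) and uses illustrative values, not the study's measurements.

```python
def vs_reduction_van_kleeck(vs_in: float, vs_out: float) -> float:
    """VS in/out as decimal fractions of total solids; returns % reduction.

    Van Kleeck equation: assumes the fixed (inert) solids mass is conserved
    through the digester, so no flow measurements are needed.
    """
    return 100.0 * (vs_in - vs_out) / (vs_in - vs_in * vs_out)

# e.g. feed at 78% VS digested down to 65% VS (illustrative values):
print(f"{vs_reduction_van_kleeck(0.78, 0.65):.1f}% VS reduction")  # ~47.6%
```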

Keywords: 45°C anaerobic digestion, acetate accumulation, class A biosolids, salt toxicity

Procedia PDF Downloads 300
10370 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of semi-active devices relative to passive viscous damping in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time-history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is evaluated against passive linear viscous damping and an optimal non-causal semi-active control strategy. The optimal strategy requires an optimization procedure in which Euler-Lagrange equations are solved numerically. The optimal closed-loop performance is evaluated for an idealized controllable dashpot, using a simplified single-degree-of-freedom model of an isolated bridge as the numerical example. Two cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity, and displacement response transmissibility ratios Ta(ω), Tv(ω), and Td(ω) are defined, and different damping levels are considered to fully characterize the behaviour of the structure under sinusoidal and pulse-type excitations. Numerical results showed that, under external excitation, the semi-actively controlled bridge deck exhibits better structural performance than the passive case.
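
To make the setup concrete, here is a minimal single-degree-of-freedom simulation in the spirit described above. The abstract does not specify the semi-active policy or any parameter values, so the common on-off skyhook law and all numbers below are illustrative assumptions; the transmissibility at one frequency is estimated from the steady-state time history, echoing the paper's time-domain approach.

```python
import numpy as np

# SDOF isolated deck under sinusoidal ground motion with an on-off
# semi-active damper. All parameters are assumed, not the authors' model.
m = 1.0e6                  # deck mass [kg] (assumed)
k = 4.0e7                  # isolation stiffness [N/m] (assumed)
c_lo, c_hi = 2.0e5, 2.0e6  # damper low/high states [N*s/m] (assumed)

w = 2.0 * np.pi            # excitation frequency [rad/s]
X0 = 0.05                  # ground displacement amplitude [m]
xg = lambda t: X0 * np.sin(w * t)
vg = lambda t: X0 * w * np.cos(w * t)

dt, T = 1.0e-3, 30.0
x, v, peak = 0.0, 0.0, 0.0
for i in range(int(T / dt)):
    t = i * dt
    vr = v - vg(t)                 # velocity relative to the ground
    # On-off skyhook: switch to the high state when the damper force
    # opposes the absolute deck velocity, otherwise stay low.
    c = c_hi if v * vr > 0.0 else c_lo
    a = (-k * (x - xg(t)) - c * vr) / m
    v += a * dt                    # semi-implicit Euler step
    x += v * dt
    if t > T / 2:                  # skip the start-up transient
        peak = max(peak, abs(x))

print(f"Td(w) ~ {peak / X0:.2f}")  # displacement transmissibility at w
```

Sweeping `w` over a range and repeating the run would trace out the full Td(ω) curve the abstract refers to, and the same loop yields Ta(ω) and Tv(ω) from the peak acceleration and velocity.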

Keywords: bridge structures, passive control, seismic, semi-active control, viscous damping

Procedia PDF Downloads 237
10369 The Nexus between Manpower Training and Corporate Compliance

Authors: Timothy Wale Olaosebikan

Abstract:

The most active resource in any organization is its manpower; every other resource remains inactive unless competent manpower handles it. Manpower training is needed to enhance productivity and the overall performance of organizations, reflecting the recognized role of training in the attainment of organizational goals. Corporate compliance conjures visions of an incomprehensible matrix of laws and regulations that defy logic and control by even the most seasoned manpower training professionals. It can likewise be viewed as one of the most significant problems faced in the manpower training process of any organization, and it therefore commands attention and comprehension. Consequently, this study investigated the nexus between manpower training and corporate compliance. Data were collected through a questionnaire administered to a sample of 265 respondents drawn by stratified random sampling and were analyzed using descriptive and inferential statistics. The findings show that about 75% of the respondents agree that there is a strong relationship between manpower training and corporate compliance, which underpins the organizational gains from any training process. They further show that most organizations do not fully comply with the rules guiding the manpower training process, making the process less effective on organizational performance, which may affect overall profitability. The study concludes that formulating and complying with adequate rules and guidelines for manpower training will produce effective results for both employees and the organization at large. It recommends that leaders of organizations, industries, and institutions ensure total compliance with manpower training rules by both employees and the organization, and that organizations and stakeholders make strict policies on corporate compliance with manpower training central to their mission.
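
As one way to picture the sampling design, the sketch below draws a proportionally allocated stratified random sample with pandas. The stratum labels and sampling frame are hypothetical; only the sample size of 265 is taken from the abstract.

```python
import pandas as pd

# Hypothetical sampling frame; the 'department' strata and frame size are
# illustrative assumptions -- the abstract does not describe the strata.
frame = pd.DataFrame({
    "employee_id": range(1000),
    "department": ["operations", "hr", "it", "finance"] * 250,
})

n_target = 265  # sample size reported in the abstract
sample = frame.groupby("department").sample(
    frac=n_target / len(frame), random_state=42
)
print(len(sample))  # ~265 respondents, allocated proportionally per stratum
```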

Keywords: corporate compliance, manpower training, nexus, rules and guidelines

Procedia PDF Downloads 134
10368 Design and Implementation of Low-Code Model-Building Methods

Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu

Abstract:

This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. Through an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. Once training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency, but also enhances the flexibility of algorithm integration and simplifies model deployment. Its core strengths are ease of use and efficiency: users need no deep programming background and can design and implement complex models through simple drag-and-drop operations. This greatly expands the reach of AI technology, allowing more non-technical people to participate in developing AI models. The method also performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the resulting models. In the experimental part, we ran several performance tests on the method. The results show that, compared with traditional model construction methods, this approach uses computing resources more efficiently and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability, and it can adapt to the needs of different application scenarios.
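
The paper does not publish its implementation, but the core compile-a-graph-into-a-model idea can be sketched as follows: a declarative spec (what the visual editor would emit) is mapped through a component registry into a runnable pipeline. All names, the spec format, and the scikit-learn backing are illustrative assumptions, not the authors' system.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# What a drag-and-drop canvas might serialize when components are connected:
model_spec = [
    {"type": "scaler", "params": {}},
    {"type": "logreg", "params": {"C": 1.0}},
]

# Component registry mapping canvas blocks to concrete implementations.
REGISTRY = {
    "scaler": StandardScaler,
    "logreg": LogisticRegression,
    "forest": RandomForestClassifier,
}

def build_pipeline(spec):
    """Compile the declarative component list into a runnable pipeline."""
    steps = [(f"step{i}_{c['type']}", REGISTRY[c["type"]](**c["params"]))
             for i, c in enumerate(spec)]
    return Pipeline(steps)

pipe = build_pipeline(model_spec)
# After pipe.fit(X, y), a thin service layer could expose pipe.predict via an
# auto-generated HTTP endpoint -- the "callable model service API" above.
```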

Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment

Procedia PDF Downloads 15