Search results for: structure-dependent integration method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21036

19356 Investigation of a New Approach, "AGM", for Solving Complicated Nonlinear Partial Differential Equations in All Engineering Fields and Basic Sciences

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji

Abstract:

In this work, our aim is to demonstrate the accuracy, capability, and power of a new approach for solving complicated nonlinear partial differential equations, and to enhance the ability to solve such equations in basic science, engineering, and similar fields with a simple and innovative technique. Most engineering systems behave nonlinearly in practice, and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; fluid and gas wave problems, for instance, cannot be solved numerically when no boundary conditions are available. Accordingly, we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs and PDEs) with high accuracy and a simple solution procedure; this is demonstrated by comparing the obtained solutions with those of a numerical method (fourth-order Runge-Kutta). Thanks to its coding system, AGM should prove valuable to researchers, professors, and students worldwide, since the software allows complicated linear and nonlinear partial differential equations to be solved analytically. The advantages and abilities of AGM are as follows: (a) nonlinear differential equations (ODEs, PDEs) are directly solvable by this method; (b) most of the time, equations can be solved for any number of boundary or initial conditions without any nondimensionalization procedure; (c) AGM always converges for the given boundary or initial conditions; (d) exponential, trigonometric, and logarithmic terms in the nonlinear differential equation require no Taylor expansion, which yields high solution precision; (e) AGM is very flexible in its coding system and can easily solve a wide variety of nonlinear differential equations with acceptably high accuracy; (f) an important advantage is the analytical solution, with high accuracy, of partial differential equations such as vibration in solids and waves in water and gas, using a minimum of initial and boundary conditions; (g) the method offers a general and simple approach for solving most highly nonlinear differential equation problems in the engineering sciences, especially civil engineering, with output compared against a numerical method (fourth-order Runge-Kutta) and exact solutions.

Keywords: new approach, AGM, sets of coupled nonlinear differential equations, exact solutions, numerical method

Procedia PDF Downloads 463
19355 Strength Analysis of RCC Dams Subject to the Layer-by-Layer Construction Method

Authors: Archil Motsonelidze, Vitaly Dvalishvili

Abstract:

Existing roller-compacted concrete (RCC) dams indicate that the layer-by-layer construction method yields considerable economies compared with conventional methods. RCC dams have also gained acceptance in regions of high seismic activity. An earthquake resistance analysis of RCC gravity dams based on a nonlinear finite element technique is presented. An elastic-plastic approach is used to describe the material of the dam while it is under static conditions (the construction period). The seismic force, an acceleration equivalent to that produced by a real earthquake, is assumed to act once the dam is completed. The materials of the dam and foundation may be nonhomogeneous and anisotropic. The dam-foundation system is idealized as a plane strain problem.

Keywords: finite element method, layer-by-layer construction, RCC dams, strength analysis

Procedia PDF Downloads 549
19354 Formulation of a Rapid Earthquake Risk Ranking Criteria for National Bridges in the National Capital Region Affected by the West Valley Fault Using GIS Data Integration

Authors: George Mariano Soriano

Abstract:

In this study, a Rapid Earthquake Risk Ranking Criteria was formulated by integrating various existing maps and databases from the Department of Public Works and Highways (DPWH) and the Philippine Institute of Volcanology and Seismology (PHIVOLCS). Utilizing Geographic Information System (GIS) software, the above-mentioned maps and databases were used to extract seismic hazard parameters and bridge vulnerability characteristics in order to rank the seismic damage risk of bridges in the National Capital Region.

Keywords: bridge, earthquake, GIS, hazard, risk, vulnerability

Procedia PDF Downloads 409
19353 Investigation on the Properties of Particulate Reinforced AA2014 Metal Matrix Composite Materials Produced by Vacuum Infiltration Method

Authors: Isil Kerti, Onur Okur, Sibel Daglilar, Recep Calin

Abstract:

Particulate-reinforced aluminium matrix composites have gained importance in the automotive, aeronautical, and defense industries due to their specific properties, such as low density, high strength and stiffness, good fatigue strength, dimensional stability at high temperature, and acceptable tribological properties. In this study, AA2014 aluminium alloy was used as the matrix material, and B₄C and SiC were selected as the reinforcement components. The composites were produced by the vacuum infiltration method. In the experimental studies, the total reinforcement volume ratio was set at 10% by mixing B₄C and SiC. An aging treatment (T6) was applied to the specimens. The effect of the T6 treatment on hardness was determined using the Brinell hardness test, and its effects on microstructure and chemical structure were analysed by XRD, SEM, and EDS analyses of the specimens.

Keywords: metal matrix composite, vacuum infiltration method, aluminium metal matrix, mechanical properties

Procedia PDF Downloads 315
19352 Unconventional Calculus Spreadsheet Functions

Authors: Chahid K. Ghaddar

Abstract:

The spreadsheet engine is exploited via an unconventional mechanism to enable novel worksheet solver functions for computational calculus. The solver functions bypass inherent restrictions on built-in math and user-defined functions by taking variable formulas as a new type of argument while retaining purity and recursion properties. The enabling mechanism permits the integration of numerical algorithms into worksheet functions for solving virtually any computational problem that can be modelled by formulas and variables. Several examples are presented for computing integrals, derivatives, and systems of differential-algebraic equations. Incorporating the worksheet solver functions into the ubiquitous spreadsheet extends the latter's utility as a powerful tool for computational mathematics.
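
As a minimal sketch of the solver-function idea outside the spreadsheet (the function name and the use of SciPy's quad are assumptions, not the paper's implementation), a Python analogue might look like:

```python
# Minimal Python analogue of the worksheet-solver idea described above:
# a solver function that takes a formula (here a callable standing in
# for a worksheet formula) plus integration limits and returns the
# numerical integral.
import math
from scipy.integrate import quad

def worksheet_integral(formula, lower, upper):
    """Numerically integrate `formula` over [lower, upper]."""
    value, _abs_err = quad(formula, lower, upper)
    return value

# Usage: integrate x^2 * exp(-x) over [0, 5]
print(worksheet_integral(lambda x: x**2 * math.exp(-x), 0.0, 5.0))
```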

Keywords: calculus, differential algebraic equations, solvers, spreadsheet

Procedia PDF Downloads 360
19351 Sonochemically Prepared Non-Noble Metal Oxide Catalysts for Methane Catalytic Combustion

Authors: Przemyslaw J. Jodlowski, Roman J. Jedrzejczyk, Damian K. Chlebda, Anna Dziedzicka, Lukasz Kuterasinski, Anna Gancarczyk, Maciej Sitarz

Abstract:

The aim of this study was to obtain highly active catalysts based on non-noble metal oxides supported on zirconia and prepared via a sonochemical method. The influence of stabilizer addition during the preparation step was also examined. The final catalysts were characterized by X-ray diffraction (XRD), nitrogen adsorption, X-ray fluorescence (XRF), scanning electron microscopy (SEM) equipped with an energy-dispersive X-ray spectrometer (EDS), transmission electron microscopy (TEM), and µRaman spectroscopy. The proposed preparation method yielded uniformly dispersed metal-oxide nanoparticles on the support's surface. The catalytic activity of the prepared samples was measured in a methane combustion reaction; the catalysts prepared by the sonochemical method were considerably more active than their counterparts prepared by the incipient wetness method.

Keywords: methane catalytic combustion, nanoparticles, non-noble metals, sonochemistry

Procedia PDF Downloads 217
19350 New Method for Determining the Distribution of Birefringence and Linear Dichroism in Polymer Materials Based on Polarization-Holographic Grating

Authors: Barbara Kilosanidze, George Kakauridze, Levan Nadareishvili, Yuri Mshvenieradze

Abstract:

A new method for determining the distribution of birefringence and linear dichroism in optical polymer materials is presented. The method is based on a polarization-holographic diffraction grating that forms an orthogonal circular basis during diffraction of the probing laser beam. The intensity ratio of the diffraction orders of this grating enables the birefringence and linear dichroism of the sample to be determined. The distribution of birefringence in the sample is determined by scanning with a circularly polarized beam at a wavelength far from the absorption band of the material; if the scanning is instead carried out with a probing beam whose wavelength is near the maximum of the absorption band of the chromophore, the distribution of linear dichroism can be determined. An appropriate theoretical model and an optical scheme of the laboratory setup created for the method are presented, and measurement results for polymer films with a two-dimensional gradient distribution of birefringence and linear dichroism are discussed.

Keywords: birefringence, linear dichroism, graded oriented polymers, optical polymers, optical anisotropy, polarization-holographic grating

Procedia PDF Downloads 432
19349 Estimating Destinations of Bus Passengers Using Smart Card Data

Authors: Hasik Lee, Seung-Young Kho

Abstract:

Nowadays, automatic fare collection (AFC) systems are widely used in many countries. However, smart card data from many cities do not contain the alighting information necessary to build origin-destination (OD) matrices; in order to utilize such data, the destinations of passengers must be estimated. In this paper, kernel density estimation was used to forecast the probabilities of the alighting stations of bus passengers, and the approach was applied to smart card data from Seoul, Korea, which contains both boarding and alighting information, allowing validation against actual data. In some cases, the stochastic method was more accurate than a deterministic method, and overall it is sufficiently accurate to be used to build OD matrices.
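
A minimal sketch of the kernel-density idea, assuming alighting positions are expressed as distance along the route (synthetic numbers, not the Seoul data):

```python
# Estimate a probability density over candidate alighting stops from
# previously observed alighting positions (here: km along the route).
import numpy as np
from scipy.stats import gaussian_kde

observed_alightings_km = np.array([1.2, 1.3, 2.8, 3.0, 3.1, 5.6, 5.9])
kde = gaussian_kde(observed_alightings_km)

candidate_stops_km = np.array([1.25, 3.0, 5.75, 8.0])
scores = kde(candidate_stops_km)
probs = scores / scores.sum()          # normalize over candidate stops
best_guess = candidate_stops_km[np.argmax(probs)]
print(dict(zip(candidate_stops_km, probs.round(3))), best_guess)
```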

Keywords: destination estimation, kernel density estimation, smart card data, validation

Procedia PDF Downloads 352
19348 Chemometric Estimation of Phytochemicals Affecting the Antioxidant Potential of Lettuce

Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Aleksandra Tepic-Horecki, Zdravko Sumic

Abstract:

In this paper, the influence of six different phytochemical contents (phenols, carotenoids, chlorophyll a, chlorophyll b, chlorophyll a + b, and vitamin C) on the antioxidant potential of the Murai and Levistro lettuce varieties was evaluated. Variable selection was made by the generalized pair correlation method (GPCM), a novel ranking method used to discriminate between two variables that correlate almost equally with a dependent variable. Fisher's conditional exact test and McNemar's test were carried out, and the established multiple linear regression (MLR) models were statistically evaluated. Chlorophyll a, chlorophyll a + b, and total carotenoid content stand out as the best phytochemicals for predicting antioxidant potential. This was confirmed through both GPCM and MLR, and the predictive ability of the obtained MLR models can be used to estimate the antioxidant potential of similar lettuce samples. This article is based upon work from the project of the Provincial Secretariat for Science and Technological Development of Vojvodina (No. 114-451-347/2015-02).

Keywords: antioxidant activity, generalized pair correlation method, lettuce, regression analysis

Procedia PDF Downloads 387
19347 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
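
One stage of such a pipeline, RANSAC plane segmentation for extracting wall or floor candidates, can be sketched with the Open3D library as below; the file name and thresholds are illustrative assumptions, not the authors' implementation:

```python
# Pull a planar candidate (e.g., a wall or floor) out of a point cloud
# via RANSAC plane fitting, after thinning the cloud.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")          # hypothetical input file
pcd = pcd.voxel_down_sample(voxel_size=0.02)       # clean/thin the cloud

plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.01,  # max point-to-plane distance (m)
    ransac_n=3,               # points sampled per RANSAC hypothesis
    num_iterations=1000)
a, b, c, d = plane_model                           # plane: ax + by + cz + d = 0
wall_candidate = pcd.select_by_index(inlier_idx)
print(f"plane {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
      f"{len(inlier_idx)} inliers")
```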

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 63
19346 Digital Watermarking Using Fractional Transform and (k,n) Halftone Visual Cryptography (HVC)

Authors: R. Rama Kishore, Sunesh Malik

Abstract:

The growing use of the internet for different purposes in recent times creates a great threat to the copyright protection of digital images, and digital watermarking is the best way to address this problem. This paper presents a detailed review of the different watermarking techniques and the latest trends in the field, categorized as spatial- and transform-domain, blind and non-blind, and visible and invisible techniques. It also discusses the optimization techniques used in watermarking to improve the robustness and imperceptibility of a method, and the measures used to evaluate a watermarking algorithm's performance. Finally, this paper proposes a watermarking algorithm using (k,n) shares of halftone visual cryptography (HVC) instead of (2,2) share cryptography; (k,n) shares improve the security of the watermark, and since halftoning is a reprographic method, it helps improve the visual quality of the watermark image. The proposed method uses a fractional transformation to improve the robustness of the method's copyright protection.

Keywords: digital watermarking, fractional transform, halftone, visual cryptography

Procedia PDF Downloads 355
19345 Holy Quran’s Hermeneutics from Self-Referentiality to the Quran by Quran’s Interpretation

Authors: Mohammad Ba’azm

Abstract:

The self-referentiality method, as the missing link in the Qur'an-by-Qur'an interpretation, applies precisely at the level of Quranic vocabulary, but it reveals its defects once it enters the domain of verses, chapters, and the whole Qur'an. Self-referentiality cannot show the clear concept of the Quranic scriptures, unlike the Qur'an-by-Qur'an interpretation method, which guides us to comprehension and exact hermeneutics. The Qur'an-by-Qur'an interpretation is a solid way of comprehending the verses of the Qur'an and does not use external resources to provide implications and meanings; it rests on both theoretical and practical supports. Its theoretical supports are based on the principles and modalities that validate the legitimacy of the interpretive method discussed, while its practical supports relate to the practice of the religious elite. The combination of these two methods yields an exact understanding of the Qur'an at the level of verses, chapters, and the whole Qur'an. By examining the word 'book' in the Qur'an, this study shows the difference between the two methods and the necessity of combining them in order to attain a desirable comprehension of the Qur'an's meaning. We argue that, given the many aspects of meaning of Quranic words, no word can be said to have a single exact meaning.

Keywords: Qur’an’s hermeneutic, self-referentiality, The Qur’an by Qur’an’s Interpretation, polysemy

Procedia PDF Downloads 188
19344 Gesture-Controlled Interface Using Computer Vision and Python

Authors: Vedant Vardhan Rathour, Anant Agrawal

Abstract:

The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques, using the MediaPipe framework and OpenCV, to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the SpeechRecognition library allows for seamless execution of tasks like web searches, location navigation, and gesture control of the system through voice commands.
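
A minimal sketch of the hand-tracking loop described above, using MediaPipe Hands and OpenCV; the gesture-to-mouse mapping is omitted, and the camera index, confidence threshold, and fingertip landmark choice are illustrative assumptions, not the authors' code:

```python
# Track the index fingertip per frame as a stand-in for cursor control.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        tip = lm[8]                      # landmark 8 = index fingertip
        print(f"cursor target: x={tip.x:.2f}, y={tip.y:.2f}")  # normalized
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
```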

Keywords: gesture recognition, hand tracking, machine learning, convolutional neural networks

Procedia PDF Downloads 12
19343 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for evaluating the uncertainty of steel sample content measured using the X-ray fluorescence (XRF) method. The considered method of analysis is a comparative technique based on XRF; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov Chain Monte Carlo (MCMC) for uncertainty estimation in steel content analysis. The Kalman filter algorithm is extended to identify a model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states reduce to the model parameters. MCMC is a stochastic method that computes statistical properties of the considered states, such as the probability distribution function (PDF), from the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; an uncertainty budget is established for steel Mn (wt%), Cr (wt%), Ni (wt%), and Mo (wt%) content, respectively, and a comparative study between the conventional procedure and the proposed method is given. This kind of approach can be applied to construct an accurate computing procedure for uncertainty measurement.
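
The MCMC ingredient can be sketched as a minimal Metropolis-Hastings sampler estimating the posterior PDF of a single content parameter from noisy readings; the data, prior, and noise level below are synthetic assumptions, not the paper's calibration model:

```python
# Sample the posterior of one content parameter (e.g., Mn wt%) given
# noisy XRF-style readings, then report mean and standard uncertainty.
import numpy as np

rng = np.random.default_rng(0)
readings = np.array([1.02, 0.97, 1.05, 0.99])   # synthetic Mn wt% readings
sigma = 0.04                                    # assumed measurement std

def log_post(theta):
    # flat prior on (0, 2) plus Gaussian likelihood
    if not 0.0 < theta < 2.0:
        return -np.inf
    return -0.5 * np.sum((readings - theta) ** 2) / sigma**2

theta, chain = 1.0, []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.02)       # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
chain = np.array(chain[5000:])                  # drop burn-in
print(f"Mn = {chain.mean():.3f} wt%, u = {chain.std(ddof=1):.3f} wt%")
```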

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 283
19342 Integration of Magnetoresistance Sensor in Microfluidic Chip for Magnetic Particles Detection

Authors: Chao-Ming Su, Pei-Sheng Wu, Yu-Chi Kuo, Yin-Chou Huang, Tan-Yueh Chen, Jefunnie Matahum, Tzong-Rong Ger

Abstract:

Magnetic particles (MPs) have been applied in the biomedical field for many years, and this mediator offers many advantages, including high biocompatibility and a wide diversity of bio-applications. However, techniques for quantifying magnetically labeled samples in assays are rare. In this paper, a Wheatstone bridge giant magnetoresistance (GMR) sensor integrated with a homemade detecting system was fabricated and used to quantify the concentration of MPs. The homemade detecting system showed a high detection sensitivity of 10 μg/μl of MPs with the optimized parameters: a vertical magnetic field of 100 G, a horizontal magnetic field of 2 G, and a flow rate of 0.4 ml/min.

Keywords: magnetic particles, magnetoresistive sensors, microfluidics, biosensor

Procedia PDF Downloads 399
19341 Bioeconomic Modeling for the Sustainable Exploitation of Three Key Marine Species in Morocco

Authors: I. Ait El Harch, K. Outaaoui, Y. El Foutayeni

Abstract:

This study aims to deepen the understanding of, and to optimize, fishing activity in Morocco by holistically integrating biological and economic aspects. We develop a biological equilibrium model in which the natural growth of the three competing species follows logistic equations that take into account density effects and the competition between them; a standard formulation of this kind is sketched below. The integration of human intervention adds a realistic dimension to the model: a company specifically targets the three species, influencing the population dynamics through its fishing activities. The aim of this work is to determine the fishing effort that maximizes the company's profit, taking into account the constraints associated with conserving the ecosystem's equilibrium.
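
A standard formulation consistent with this description (generic symbols, not taken verbatim from the paper) couples logistic growth, interspecific competition, and harvesting:

```latex
\frac{dx_i}{dt} = r_i x_i\left(1-\frac{x_i}{K_i}\right)
  - \sum_{j\neq i}\alpha_{ij}\,x_i x_j - q_i E_i x_i, \qquad i = 1,2,3,
```

with intrinsic growth rates r_i, carrying capacities K_i, competition coefficients α_ij, catchabilities q_i, and fishing efforts E_i; the profit-maximization problem then chooses the E_i subject to the biological equilibrium dx_i/dt = 0.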

Keywords: bioeconomic modeling, optimization techniques, linear complementarity problem (LCP), biological equilibrium, maximizing profits

Procedia PDF Downloads 24
19340 Fast Prototyping of Precise, Flexible, Multiplexed, Printed Electrochemical Enzyme-Linked Immunosorbent Assay System for Point-of-Care Biomarker Quantification

Authors: Zahrasadat Hosseini, Jie Yuan

Abstract:

Point-of-care (POC) diagnostic devices based on lab-on-a-chip (LOC) technology have the potential to revolutionize medical diagnostics. However, the development of an ideal microfluidic system based on LOC technology for diagnostics purposes requires overcoming several obstacles, such as improving sensitivity, selectivity, portability, cost-effectiveness, and prototyping methods. While numerous studies have introduced technologies and systems that advance these criteria, existing systems still have limitations. Electrochemical enzyme-linked immunosorbent assay (e-ELISA) in a LOC device offers numerous advantages, including enhanced sensitivity, decreased turnaround time, minimized sample and analyte consumption, reduced cost, disposability, and suitability for miniaturization, integration, and multiplexing. In this study, we present a novel design and fabrication method for a microfluidic diagnostic platform that integrates screen-printed electrochemical carbon/silver chloride electrodes on flexible printed circuit boards with flexible, multilayer, polydimethylsiloxane (PDMS) microfluidic networks to accurately manipulate and pre-immobilize analytes for performing electrochemical enzyme-linked immunosorbent assay (e-ELISA) for multiplexed quantification of blood serum biomarkers. We further demonstrate fast, cost-effective prototyping, as well as accurate and reliable detection performance of this device for quantification of interleukin-6-spiked samples through electrochemical analytics methods. We anticipate that our invention represents a significant step towards the development of user-friendly, portable, medical-grade, POC diagnostic devices.

Keywords: lab-on-a-chip, point-of-care diagnostics, electrochemical ELISA, biomarker quantification, fast prototyping

Procedia PDF Downloads 83
19338 The Solution of Nonlinear Partial Differential Equation for The Phenomenon of Instability in Homogeneous Porous Media by Homotopy Analysis Method

Authors: Kajal K. Patel, M. N. Mehta, T. R. Singh

Abstract:

When water is injected into an oil-formation area during the secondary oil recovery process, instability occurs near the common interface due to the viscosity difference between the injected water and the native oil. The governing equation is a nonlinear partial differential equation, and its solution has been obtained by the homotopy analysis method with an appropriate initial guess, together with some conditions and standard relations. The solution gives the average cross-sectional area occupied by the schematic fingers during the instability phenomenon. Numerical and graphical presentations have been developed using Maple software.
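
For reference, the homotopy analysis method embeds the nonlinear problem N[u] = 0 in a family of problems through the standard zeroth-order deformation equation (generic HAM notation, not necessarily the paper's symbols):

```latex
(1-q)\,\mathcal{L}\big[\phi(x,t;q)-u_0(x,t)\big]
  = q\,\hbar\,H(x,t)\,\mathcal{N}\big[\phi(x,t;q)\big],
```

where q ∈ [0, 1] is the embedding parameter, L an auxiliary linear operator, u₀ the initial guess, ħ the convergence-control parameter, H an auxiliary function, and N[·] the nonlinear operator of the governing equation; the solution φ deforms continuously from u₀ at q = 0 to the exact solution at q = 1.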

Keywords: capillary pressure, homotopy analysis method, instability phenomenon, viscosity

Procedia PDF Downloads 496
19337 Numerical Solutions of an Option Pricing Rainfall Derivatives Model

Authors: Clarinda Vitorino Nhangumbe, Ercília Sousa

Abstract:

Weather derivatives are financial products used to cover non-catastrophic weather events, with a weather index as the underlying asset. The rainfall derivative pricing model here assumes that the rainfall dynamics follow an Ornstein-Uhlenbeck process, and a partial differential equation approach is used to derive a two-dimensional, time-dependent convection-diffusion equation whose spatial variables are the rainfall index and the rainfall depth. To compute approximate solutions of the partial differential equation, appropriate boundary conditions are suggested, and an explicit numerical method is proposed in order to deal efficiently with the different choices of the coefficients involved in the equation. Being explicit, the method is only conditionally stable, so the stability region of the numerical method and its order of convergence are discussed. The model is tested on real precipitation data.
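
The Ornstein-Uhlenbeck assumption mentioned above has the standard form (generic symbols, not necessarily the paper's):

```latex
dR_t = \kappa\,(\theta - R_t)\,dt + \sigma\,dW_t,
```

with mean-reversion speed κ, long-run level θ, volatility σ, and Brownian motion W_t; standard no-arbitrage arguments applied to a payoff on the rainfall index and depth then lead to the two-dimensional convection-diffusion pricing equation described above.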

Keywords: finite difference method, Ornstein-Uhlenbeck process, partial differential equations approach, rainfall derivatives

Procedia PDF Downloads 105
19336 Error Amount in Viscoelasticity Analysis Depending on Time Step Size and Method Used in ANSYS

Authors: A. Fettahoglu

Abstract:

The theory of viscoelasticity is used by many researchers to represent the behavior of materials such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the finite element method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws; as a result, structures of unusual geometry and domain, such as bridge pavements, can be analyzed. In the scope of this study, the rheological models embedded in ANSYS, namely generalized Maxwell elements and Prony series, the two means by which ANSYS represents viscoelastic material behavior, are presented explicitly. Subsequently, a practical problem with an analytical solution given in the literature is used to verify the applicability of the viscoelasticity tool embedded in ANSYS. Finally, the amount of error in the ANSYS results is compared against the analytical results to indicate the influence of the method used and the time step size.
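
For reference, the Prony series representation of the relaxation modulus realized by a generalized Maxwell model has the standard form (generic symbols):

```latex
G(t) = G_\infty + \sum_{i=1}^{n} G_i\, e^{-t/\tau_i},
```

where G_∞ is the long-term modulus and each Maxwell branch contributes a stiffness G_i with relaxation time τ_i; in finite element codes the viscoelastic input is typically supplied as such pairs, often normalized by the instantaneous modulus.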

Keywords: generalized Maxwell model, finite element method, Prony series, time step size, viscoelasticity

Procedia PDF Downloads 369
19335 Companies’ Internationalization: Multi-Criteria-Based Prioritization Using Fuzzy Logic

Authors: Jorge Anibal Restrepo Morales, Sonia Martín Gómez

Abstract:

A model based on a logical framework was developed to quantify SMEs' internationalization capacity. To do so, linguistic variables such as human talent, infrastructure, innovation strategies, FTAs, marketing strategies, and finance were integrated. It is argued that a company's management of international markets depends on internal factors, especially the capabilities and resources available. This study considers internal factors the biggest business challenge because they force companies to develop an adequate set of capabilities; at this stage, importance and strategic relevance have to be defined in order to build competitive advantages. A fuzzy inference system is proposed to model the resources, skills, and capabilities that determine the success of internationalization. Data: 157 linguistic variables were used, defined by international trade entrepreneurs, experts, consultants, and researchers; using expert judgment, the variables were condensed into 18 factors that explain SMEs' export capacity. The proposed model is applied in a case study of the textile and clothing cluster in Medellin, Colombia. In the model implementation, a general index of 28.2 was obtained for internationalization capabilities, confirming that the sector's current capabilities and resources are not sufficient for a successful integration into the international market. The model specifies the factors and variables that need to be worked on in order to improve export capability; in the case of the textile companies, the lack of continuous recording of information stands out, there are very few studies directed towards developing long-term plans, and there is little consistency in export criteria. This method emerges as an innovative management tool linked to internal organizational spheres and their different abilities.
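
The fuzzy inference step can be illustrated with a small, self-contained sketch (a rough Sugeno-style toy, not the study's calibrated system; all factors, ranges, and consequent values are invented for illustration):

```python
# Triangular membership functions score a linguistic variable on 0-10,
# and a weighted-average rule base turns factor scores into a 0-100
# internationalization index.
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def factor_score(x):
    # degrees of membership in low / medium / high
    low, med, high = tri(x, -1, 0, 5), tri(x, 0, 5, 10), tri(x, 5, 10, 11)
    # Sugeno-style crisp consequents: low -> 20, medium -> 50, high -> 90
    num = 20 * low + 50 * med + 90 * high
    den = low + med + high
    return num / den if den else 0.0

factors = {"human talent": 4.0, "innovation strategy": 2.5, "finance": 3.0}
index = sum(factor_score(v) for v in factors.values()) / len(factors)
print(f"internationalization index: {index:.1f} / 100")
```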

Keywords: business strategy, exports, internationalization, fuzzy set methods

Procedia PDF Downloads 294
19334 The Forensic Analysis of Engravers' Handwriting

Authors: Olivia Rybak-Karkosz

Abstract:

The purpose of this paper is to present the results of scientific research using forensic handwriting analysis. The research was conducted to verify the stability and lability of engravers' handwriting and to check whether engravers transfer traits from their handwriting to the plates and other surfaces they rework. The methodology consisted of collecting representative samples: signatures written on paper with a ballpoint pen and signatures engraved on other surfaces. The forensic handwriting analysis was conducted using the graphic-comparative method (graphic method), and all traits were analysed. The paper concludes with a statement of the similarities and differences between the samples.

Keywords: artist’s signatures, engraving, forensic handwriting analysis, graphic-comparative method

Procedia PDF Downloads 102
19333 Integrating Artificial Intelligence (AI) into Education-Stakeholder Engagement and ICT Practices for Complex Systems: A Governance Framework for Addressing Counseling Gaps in Higher Education

Authors: Chinyere Ori Elom, Ikechukwu Ogeze Ukeje, Chukwudum Collins Umoke

Abstract:

This paper aims to stimulate scholarly interest in the trajectory of AI, ICT, and existing (complex) systems (theory, practice, and aspirations) within the African continent, and to shed fresh light on the shortcomings of the higher education sector (HEs) through the prism of AI-driven solutions for enhancing guidance and counseling and a sound governance framework (SGF) in higher education modeling. It further seeks to investigate prospects yet to be realized in Nigerian universities by probing innovation neglect in the localities, exploring practices in the global ICT space neglected by Nigerian universities' governance regimes (UGRs), and suggesting applicability, sustainability, and solution modeling in response to the peculiar 'wicked' ICT-driven problems and issues facing the continent as well as other universities in emerging societies. The study will adopt a mixed-method approach to collect both qualitative and quantitative data. This paper argues that the work will command great relevance in the local and global university system by developing ICT-relevant sustainability policy initiatives (SPIs) powered by a multi-stakeholder engagement governance model (MSEGm) that is sufficiently dynamic, eclectic, and innovative to surmount the complex and constantly rising challenges of the modern developing world. Hence, it considers diverse actors as both producers and users, victims and beneficiaries, of common concerns in the ICT world, thereby providing pathways for how AI's integration into education governance can significantly reduce counseling gaps, ensuring more students are attended to, especially when human counselors are unavailable.

Keywords: AI-counseling solution, stakeholder engagement, university governance, higher education

Procedia PDF Downloads 17
19332 Impact of Depreciation Technique on Taxable Income and Financial Performance of Quoted Consumer Goods Company in Nigeria

Authors: Ibrahim Ali, Adamu Danlami Ahmed

Abstract:

This study examines the impact of depreciation on the taxable income and financial performance of consumer goods companies quoted on the Nigerian stock exchange. The study adopts an ex-post facto research design, with data collected from secondary sources. The findings suggest that the method of depreciation adopted in an organization influences its taxable profit. Depreciation techniques can be depressive, accelerated, or linear; a worked comparison of linear and accelerated charges follows below. It is recommended that consumer goods companies review their method of depreciation to make sure an appropriate method is adopted, which will go a long way toward revitalizing their taxable profit.
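
As a hedged illustration of the effect described above (figures invented, not from the sampled companies), compare straight-line and double-declining-balance charges on the same asset:

```python
# Straight-line (linear) vs. double-declining-balance (accelerated)
# depreciation on one asset: accelerated charges front-load expense,
# so early-year taxable profit falls.
cost, salvage, life_years = 1_000_000, 100_000, 5

straight_line = [(cost - salvage) / life_years] * life_years

ddb, book = [], cost
rate = 2 / life_years                           # double-declining rate
for _ in range(life_years):
    charge = min(book * rate, book - salvage)   # never depreciate below salvage
    ddb.append(charge)
    book -= charge

for yr, (sl, ac) in enumerate(zip(straight_line, ddb), 1):
    print(f"year {yr}: linear {sl:>9,.0f} | accelerated {ac:>9,.0f}")
```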

Keywords: accelerated, linear, depressive, depreciation

Procedia PDF Downloads 285
19331 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil

Authors: P. S. Jain, K. D. Bobade, S. J. Surana

Abstract:

A simple, selective, precise, and stability-indicating high-performance thin-layer chromatographic (HPTLC) method for the analysis of naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase, with a solvent system of hexane:ethyl acetate:glacial acetic acid (4:4:2 v/v). The system was found to give a compact spot for naftopidil (Rf value of 0.43 ± 0.02). Densitometric analysis of naftopidil was carried out in absorbance mode at 253 nm. The linear regression analysis data for the calibration plots showed a good linear relationship, with r² = 0.999 ± 0.0001 with respect to peak area, in the concentration range of 200-1200 ng per spot. The method was validated for precision, recovery, and robustness; the limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation, and thermal degradation, and was found to degrade under all of these conditions, indicating susceptibility to acid, base, oxidation, and heat. The degraded product was well resolved from the pure drug, with a significantly different Rf value. Statistical analysis proves that the method is repeatable, selective, and accurate for the estimation of the investigated drug, and the proposed HPTLC method can be applied for the identification and quantitative determination of naftopidil in bulk drug and pharmaceutical formulation.
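
For reference, detection and quantification limits in such validations are commonly computed from the calibration data with the standard ICH formulas (the abstract does not state which estimator was used, so this is an assumption):

```latex
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S},
```

where σ is the standard deviation of the response (e.g., of the blank or of the regression intercept) and S is the slope of the calibration curve.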

Keywords: naftopidil, HPTLC, validation, stability, degradation

Procedia PDF Downloads 400
19330 Calibration of Discrete Element Method Parameters for Modelling DRI Pellets Flow

Authors: A. Hossein Madadi-Najafabadi, Masoud Nasiri

Abstract:

The discrete element method (DEM) is a powerful technique for numerically modeling the flow of granular materials such as direct reduced iron (DRI); it enables the study of processes and equipment related to the production and handling of the material. However, the characteristics and properties of the granules have to be adjusted precisely to achieve reliable results in a DEM simulation. The main properties for DEM simulation are size distribution, density, Young's modulus, Poisson's ratio, and the contact coefficients of restitution, rolling friction, and sliding friction. In the present paper, these properties are determined for DEM simulation of DRI pellets; a reliable DEM simulation would contribute to optimizing the handling system for DRI in an iron-making plant. Among the mentioned properties, Young's modulus is the most important parameter and is usually hard to obtain for particulate solids, so a special method is utilized to determine this parameter precisely for DRI.

Keywords: discrete element method, direct reduced iron, simulation parameters, granular material

Procedia PDF Downloads 180
19329 Developing Digital Twins of Steel Hull Processes

Authors: V. Ložar, N. Hadžić, T. Opetuk, R. Keser

Abstract:

The development of digital twins strongly depends on efficient algorithms and their capability to mirror real-life processes. Such efforts are required nowadays to establish the factories of the future, which face the new demands of custom-made production, and ship hull processes face these challenges too. It is therefore important to implement design and evaluation approaches based on production system engineering. In this study, the recently developed finite state method is employed to describe the steel hull process as a platform for the implementation of digital twinning technology; the application is justified by comparing the finite state method with the analytical approach. The method is employed to rebuild a model of a real shipyard ship hull process using a combination of serial and splitting lines. Key performance indicators such as the production rate, work in process, and the probabilities of starvation and blockage are calculated and compared to the corresponding results obtained through a simulation approach using the software tool Enterprise Dynamics. This study confirms that the finite state method is a suitable tool for digital twinning applications, and the conclusion highlights the advantages and disadvantages of the methods employed in this context.
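
For intuition about the KPIs named above, the toy simulation below estimates production rate, average WIP, and starvation/blockage probabilities for a generic two-machine line with one buffer; it is a stand-in illustration, not the finite state method or the shipyard model itself:

```python
# Slotted simulation of a two-machine serial line with a finite buffer.
import random

random.seed(1)
P1, P2, BUF = 0.9, 0.85, 3        # machine reliabilities, buffer capacity
buf = made = starved = blocked = wip = 0
T = 100_000
for _ in range(T):
    up1, up2 = random.random() < P1, random.random() < P2
    if up2 and buf > 0:           # machine 2 pulls a part from the buffer
        buf -= 1
        made += 1
    elif up2:
        starved += 1              # machine 2 is up but the buffer is empty
    if up1 and buf < BUF:         # machine 1 pushes a part into the buffer
        buf += 1
    elif up1:
        blocked += 1              # machine 1 is up but the buffer is full
    wip += buf
print(f"production rate {made/T:.3f}/slot, WIP {wip/T:.2f}, "
      f"P(starve) {starved/T:.3f}, P(block) {blocked/T:.3f}")
```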

Keywords: digital twin, finite state method, production system engineering, shipyard

Procedia PDF Downloads 99
19328 Development and Validation Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test

Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati

Abstract:

Rifampicin is a semisynthetic antibiotic derivative of rifamycin B produced by Streptomyces mediterranei and has been used worldwide as a first-line drug throughout tuberculosis therapy. This study aims to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma, and to apply it in a bioequivalence study. Chromatographic separation was achieved on an RP-C18 column (LaChrom, Hitachi, 250 x 4.6 mm, 5 μm), utilizing a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min, with detection at 337 nm using a spectrophotometer. The developed method was statistically validated for linearity, accuracy, limits of detection and quantitation, precision, and specificity. Specificity was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limits of detection and quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The standard regression curve was linear (r > 0.999) over the concentration range of 20.0-100.0 µg/mL, and the mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed reproducibility (R.S.D. 2.98% and 1.13%, respectively). The method can therefore be used for routine analysis of rifampicin in human plasma and in bioequivalence studies, and it was successfully applied in a pharmacokinetic and bioequivalence study of a rifampicin tablet in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24), and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 h, 29.16 ± 4.05 µg/mL.h, and 29.41 ± 4.07 µg/mL.h, respectively; for the reference formulation, the values were 5.04 ± 0.54 µg/mL, 1.31 h, 27.20 ± 3.98 µg/mL.h, and 27.49 ± 4.01 µg/mL.h. In the bioequivalence study, the 90% CIs for the test/reference ratios of the log-transformed Cmax and AUC(0-24) were 97.96-129.48% and 99.13-120.02%, respectively. According to the bioequivalence test guidelines of the European Commission-European Medicines Agency, it can be concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
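
The 90% confidence intervals quoted above come from the standard log-transformed analysis; the sketch below reproduces the arithmetic on synthetic paired data (a simplification of the usual crossover ANOVA, with invented values, not the study's data):

```python
# 90% CI for the test/reference geometric-mean ratio of Cmax from
# paired log differences; bioequivalence is typically concluded if
# the CI lies within 80-125%.
import numpy as np
from scipy import stats

test = np.array([5.2, 6.1, 5.8, 6.5, 5.4, 6.0])   # synthetic Cmax, test
ref  = np.array([4.8, 5.5, 5.1, 5.9, 5.0, 5.3])   # synthetic Cmax, reference

d = np.log(test) - np.log(ref)                     # paired log differences
n = len(d)
se = d.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)                  # two-sided 90% CI
lo, hi = d.mean() - t90 * se, d.mean() + t90 * se
print(f"GMR {np.exp(d.mean())*100:.1f}%, "
      f"90% CI {np.exp(lo)*100:.1f}-{np.exp(hi)*100:.1f}%")
```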

Keywords: validation, HPLC, plasma, bioequivalence

Procedia PDF Downloads 290
19327 Multimodal Integration of EEG, fMRI and Positron Emission Tomography Data Using Principal Component Analysis for Prognosis in Coma Patients

Authors: Denis Jordan, Daniel Golkowski, Mathias Lukas, Katharina Merz, Caroline Mlynarcik, Max Maurer, Valentin Riedl, Stefan Foerster, Eberhard F. Kochs, Andreas Bender, Ruediger Ilg

Abstract:

Introduction: So far, clinical assessments that rely on behavioral responses to differentiate coma states, or even to predict outcome in coma patients, are unreliable, e.g., because of some patients' motor disabilities. The present study aimed to provide prognoses in coma patients using markers from the electroencephalogram (EEG), blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI), and [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET), with unsupervised principal component analysis (PCA) used for the multimodal integration of the markers. Methods: With the approval of the local ethics committee of the Technical University of Munich (Germany), 20 patients (aged 18-89) with severe brain damage were recruited through the intensive care units of the Klinikum rechts der Isar in Munich and the Therapiezentrum Burgau (Germany). On the day of the EEG/fMRI/PET measurement (date I), patients (<3.5 months in coma) were grouped into minimally conscious state (MCS) or vegetative state (VS) on the basis of their clinical presentation (coma recovery scale-revised, CRS-R). The follow-up assessment (date II) was also based on the CRS-R, 8 to 24 months after date I. At date I, 63-channel EEG (Brain Products, Gilching, Germany) was recorded outside the scanner, and simultaneous FDG-PET/fMRI was subsequently acquired on an integrated Siemens Biograph mMR 3T scanner (Siemens Healthineers, Erlangen, Germany). Power spectral densities, permutation entropy (PE), and symbolic transfer entropy (STE) were calculated in and between frontal, temporal, parietal, and occipital EEG channels. PE and STE are based on symbolic time series analysis and have already been introduced as robust markers separating wakefulness from unconsciousness in EEG during general anesthesia: PE quantifies the regularity structure of the neighboring order of signal values (a surrogate of cortical information processing), while STE reflects the information transfer between two signals (a surrogate of directed connectivity in cortical networks). fMRI analysis was carried out using SPM12 (Wellcome Trust Centre for Neuroimaging, London, UK); functional images were realigned, segmented, normalized, and smoothed. PET was acquired for 45 minutes in list mode, and for absolute quantification of the brain's glucose consumption rate, kinetic modelling was performed with Patlak's plot method. BOLD signal intensity in fMRI and glucose uptake in PET were calculated in 8 distinct cortical areas, and PCA was performed over all markers from EEG/fMRI/PET. Prognosis (persistent VS and deceased patients vs. recovery to MCS/awake from date I to date II) was evaluated using the area under the curve (AUC), including bootstrap confidence intervals (CI, *: p<0.05). Results: Prognosis was reliably indicated by the first component of the PCA (AUC=0.99*, CI=0.92-1.00), which showed a higher AUC than the best single markers (EEG: AUC<0.96*, fMRI: AUC<0.86*, PET: AUC<0.60); the CRS-R did not show prediction (AUC=0.51, CI=0.29-0.78). Conclusion: In a multimodal analysis of EEG/fMRI/PET in coma patients, PCA led to a reliable prognosis. The impact of this result is evident, as clinical estimates of prognosis are at times inapt and could be supported by quantitative biomarkers from EEG, fMRI, and PET. Due to the small sample size, further investigations are required, in particular ones allowing supervised learning instead of the basic approach of unsupervised PCA.
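
A rough sketch of the analysis pipeline described above, on synthetic placeholder data (the marker matrix, scaling step, and synthetic outcome are assumptions, not the study's data):

```python
# Stack multimodal markers into one feature matrix, take the first
# principal component, and score it against outcome via ROC AUC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_markers = 20, 12            # e.g., PE, STE, BOLD, FDG values
outcome = rng.integers(0, 2, n_patients)  # 1 = recovery, 0 = VS/deceased
X = rng.normal(size=(n_patients, n_markers)) + outcome[:, None]

X = StandardScaler().fit_transform(X)     # put markers on a common scale
pc1 = PCA(n_components=1).fit_transform(X).ravel()
auc = roc_auc_score(outcome, pc1)
print(f"AUC of first principal component: {max(auc, 1 - auc):.2f}")
# max() handles PCA's arbitrary sign of the first component.
```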

Keywords: coma states and prognosis, electroencephalogram, entropy, functional magnetic resonance imaging, machine learning, positron emission tomography, principal component analysis

Procedia PDF Downloads 339