Search results for: modified Navier method
18633 A Comparative Study between FEM and Meshless Methods
Authors: Jay N. Vyas, Sachin Daxini
Abstract:
Numerical simulation techniques are now widely used in product development and testing in place of expensive, time-consuming and sometimes dangerous laboratory experiments. Numerous numerical methods are available for simulating physical problems in different engineering fields. Grid-based methods, like the Finite Element Method, are extensively used for various kinds of static, dynamic, structural and non-structural analysis during the product development phase. Drawbacks of grid-based methods, in terms of discontinuous secondary field variables and difficulty in dealing with fracture mechanics and large deformation problems, have led to the development of a relatively new class of numerical simulation techniques in the last few years, popularly known as Meshless or Meshfree Methods. Meshless Methods are expected to be more adaptive and flexible than the Finite Element Method because domain discretization in Meshless Methods requires only nodes. The present paper introduces Meshless Methods and differentiates them from the Finite Element Method in terms of the following aspects: shape functions used, role of the weight function, techniques to impose essential boundary conditions, integration techniques for discrete system equations, convergence rate, accuracy of solution and computational effort. Capabilities, benefits and limitations of Meshless Methods are discussed and summarized at the end of the paper.
Keywords: numerical simulation, grid-based methods, Finite Element Method, Meshless Methods
Procedia PDF Downloads 389
18632 Biodegradability and Thermal Properties of Polycaprolactone/Starch Nanocomposite as a Biopolymer
Authors: Emad A. Jaffar Al-Mulla
Abstract:
In this study, a biopolymer-based nanocomposite was successfully prepared through a melt blending technique. Two biodegradable polymers, polycaprolactone and starch, were chosen because they are environmentally friendly and obtained from renewable, easily available raw materials. Fatty hydrazide, synthesized from palm oil, was used as a surfactant to modify montmorillonite (a natural clay) for the preparation of the polycaprolactone/starch nanocomposite. X-ray diffraction and transmission electron microscopy were used to characterize nanocomposite formation. Compatibility of the blend was improved by adding 3 wt% modified clay. Higher biodegradability and thermal stability of the nanocomposite were also observed compared to those of the polycaprolactone/starch blend. This product will help address the problem of plastic waste, especially disposable packaging, and reduce the dependence on petroleum-based polymers and surfactants.
Keywords: polycaprolactone, starch, biodegradable, nanocomposite
Procedia PDF Downloads 358
18631 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing
Authors: S. Bouhouche, R. Drai, J. Bast
Abstract:
This paper is concerned with a method for uncertainty evaluation of steel sample content using the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov Chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to the model identification of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), according to the initial state and the target distribution, using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach can be applied to construct an accurate computing procedure for uncertainty measurement.
Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement
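As an illustration of how a Kalman-filter parameter estimate can be combined with MCMC sampling, the minimal Python sketch below fits a linear calibration model recursively and then samples its parameters with a Metropolis random walk. The calibration data, noise levels and model form are hypothetical and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: instrument reading x vs. certified content y (wt%)
x = np.linspace(0.1, 2.0, 20)
y = 1.8 * x + 0.05 + rng.normal(0.0, 0.02, size=x.size)   # assumed measurement noise

# --- Kalman filter used as a recursive parameter estimator (state = [a, b]) ---
theta = np.zeros(2)                 # initial parameter estimate for y = a*x + b
P = np.eye(2) * 10.0                # initial covariance
R = 0.02 ** 2                       # measurement noise variance (assumed)
for xi, yi in zip(x, y):
    H = np.array([[xi, 1.0]])       # observation model
    S = H @ P @ H.T + R
    K = P @ H.T / S                 # Kalman gain
    theta = theta + (K * (yi - H @ theta)).ravel()
    P = (np.eye(2) - K @ H) @ P

# --- Metropolis MCMC around the filtered estimate to characterize uncertainty ---
def log_post(t):                    # flat prior + Gaussian likelihood
    return -0.5 * np.sum((y - (t[0] * x + t[1])) ** 2) / R

samples, current = [], theta.copy()
for _ in range(20000):
    prop = current + rng.normal(0.0, 0.01, size=2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(current):
        current = prop
    samples.append(current.copy())
samples = np.array(samples[5000:])  # discard burn-in

print("KF estimate :", theta)
print("MCMC mean   :", samples.mean(axis=0), "std:", samples.std(axis=0))
```

The MCMC standard deviations play the role of the uncertainty budget for each estimated parameter in this toy setting.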
Procedia PDF Downloads 283
18630 Overcoming Obstacles in UHT High-Protein Whey Beverages by Microparticulation Process: Scientific and Technological Aspects
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh, Seyed Jalal Razavi Zahedkolaei
Abstract:
Herein, a shelf-stable (no refrigeration required), UHT-processed, aseptically packaged whey protein drink was formulated by using a new strategy in the microparticulation process. Applying thermal and two-dimensional mechanical treatments simultaneously, a modified protein (MWPC-80) was produced. The physical, thermal and thermodynamic properties of MWPC-80 were then assessed using particle size analysis, dynamic temperature sweep (DTS) and differential scanning calorimetry (DSC) tests. Finally, using MWPC-80, a new RTD beverage was formulated, and shelf stability was assessed for three months at ambient temperature (25 °C). A non-isothermal dynamic temperature sweep was performed, and the results were analyzed by a combination of the classic rate equation, the Arrhenius equation and the time-temperature relationship. Generally, the results showed that the temperature dependency of the modified sample was significantly (P < 0.05) lower than that of the control sample containing WPC-80. The changes in elastic modulus of the MWPC did not show any critical point at any of the processing stages, whereas the control sample showed two critical points during the heating (82.5 °C) and cooling (71.10 °C) stages. Thermal properties of the samples (WPC-80 and MWPC-80) were assessed using DSC at a heating rate of 4 °C/min over a 20-90 °C range. The results did not show any thermal peak in the MWPC DSC curve, which suggests high thermal resistance. On the other hand, the WPC-80 sample showed a significant thermal peak with thermodynamic properties of ∆G: 942.52 kJ/mol, ∆H: 857.04 kJ/mol and ∆S: -1.22 kJ/mol·K. Dynamic light scattering was performed, and the results showed average particle sizes of 0.7 µm and 15 nm for the MWPC-80 and WPC-80 samples, respectively. Moreover, the particle size distributions of MWPC-80 and WPC-80 were Gaussian-Lorentzian and normal, respectively. After verification of the microparticulation process by DTS, PSD and DSC analyses, a 10% whey protein beverage (10% w/w MWPC-80, 0.6% w/w vanilla flavoring agent, 0.1% masking flavor, 0.05% stevia natural sweetener and 0.25% citrate buffer) was formulated, and UHT treatment was performed at 137 °C for 4 s. The shelf-life study did not show any gelation or precipitation of the MWPC-80-containing beverage during three months of storage at ambient temperature, whereas the WPC-80-containing beverage showed significant precipitation and gelation after thermal processing, even at 3% w/w concentration. Consumer awareness of the nutritional advantages of whey protein has increased demand for this protein in different food systems, especially RTD beverages. These results could make a considerable difference in this industry.
Keywords: high protein whey beverage, microparticulation, two-dimensional mechanical treatments, thermodynamic properties
Procedia PDF Downloads 74
18629 Estimation of Population Mean under Random Non-Response in Two-Phase Successive Sampling
Authors: M. Khalid, G. N. Singh
Abstract:
In this paper, we consider the problem of estimating the population mean on the current (second) occasion in the presence of random non-response in two-occasion successive sampling under a two-phase set-up. Modified exponential-type estimators have been proposed, and their properties are studied under the assumption that the number of sampling units follows a distribution arising from random non-response situations. The performances of the proposed estimators are compared with linear combinations of two estimators, (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response situations. Results are demonstrated through empirical studies, which show the effectiveness of the proposed estimators. Suitable recommendations have been made to survey practitioners.
Keywords: successive sampling, random non-response, auxiliary variable, bias, mean square error
Procedia PDF Downloads 522
18628 Application of Mesenchymal Stem Cells in Diabetic Therapy
Authors: K. J. Keerthi, Vasundhara Kamineni, A. Ravi Shanker, T. Rammurthy, A. Vijaya Lakshmi, Q. Hasan
Abstract:
Pancreatic β-cells are the predominant insulin-producing cell type within the Islets of Langerhans, and insulin is the primary hormone which regulates carbohydrate and fat metabolism. Apoptosis of β-cells or insufficient insulin production leads to Diabetes Mellitus (DM). Current therapy for diabetes includes either medical management or insulin replacement and regular monitoring. Replacement of β-cells is an attractive treatment option for both Type-1 and Type-2 DM in view of recent work indicating that β-cell apoptosis is the common underlying cause of both types of DM. With the development of the Edmonton protocol, pancreatic β-cell allo-transplantation became possible, but this is still not considered standard of care due to the subsequent requirement of lifelong immunosuppression and the scarcity of suitable healthy organs from which to retrieve pancreatic β-cells. Fetal pancreatic cells from abortuses were developed as a possible therapeutic option for diabetes; however, this posed several ethical issues. Hence, in the present study, mesenchymal stem cells (MSCs) isolated from human umbilical cord (HUC) tissue were differentiated into insulin-producing cells. MSCs have already made their mark in the growing field of regenerative medicine, and their therapeutic worth has already been validated for a number of conditions. HUC samples were collected with prior informed consent as approved by the institutional ethical committee. HUC samples (n=26) were processed using a combination of mechanical and enzymatic (collagenase-II, 100 U/ml, Gibco) methods to obtain MSCs, which were cultured in vitro in L-DMEM (low-glucose Dulbecco's Modified Eagle's Medium, Sigma, 4.5 mM glucose/L) with 10% FBS in a 5% CO2 incubator at 37 °C. After reaching 80-90% confluency, MSCs were characterized by flow cytometry and immunocytochemistry for specific cell surface antigens. Cells expressed CD90+, CD73+, CD105+, CD34-, CD45-, HLA-DR-/low and Vimentin+. These cells were differentiated to β-cells by using H-DMEM (high-glucose Dulbecco's Modified Eagle's Medium, 25 mM glucose/L, Gibco), β-mercaptoethanol (0.1 mM, Hi-Media), basic fibroblast growth factor (10 µg/L, Gibco) and nicotinamide (10 mmol/L, Hi-Media). Pancreatic β-cells were confirmed by positive dithizone staining and were found to be functionally active, as they released 8 IU/ml insulin on glucose stimulation. Isolating MSCs from usually discarded, abundantly available HUC tissue and expanding and differentiating them to β-cells may be the most feasible cell therapy option for the millions of people suffering from DM globally.
Keywords: diabetes mellitus, human umbilical cord, mesenchymal stem cells, differentiation
Procedia PDF Downloads 260
18627 Detection of Intentional Attacks in Images Based on Watermarking
Authors: Hazem Munawer Al-Otum
Abstract:
In this work, an efficient watermarking technique is proposed that can be used for detecting intentional attacks in RGB color images. The proposed technique can be implemented for image authentication and exhibits high robustness against unintentional, common image processing attacks. It deploys two measures to discern between intentional and unintentional attacks, based on a quantization-based technique in a modified 2D multi-pyramidal DWT transform. Simulations have shown high accuracy in detecting intentionally attacked regions while exhibiting high robustness under moderate to severe common image processing attacks.
Keywords: image authentication, copyright protection, semi-fragile watermarking, tamper detection
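The abstract does not give implementation details of the quantization-based embedding, but the general idea of quantizing DWT coefficients to encode watermark bits can be sketched as follows. This is a minimal, single-level Haar-DWT illustration in Python using PyWavelets, not the authors' modified multi-pyramidal scheme; the step size and band choice are assumptions.

```python
import numpy as np
import pywt

def embed_bits(channel, bits, step=8.0):
    """Quantization-index-modulation embedding of bits into the
    approximation band of a single-level 2D Haar DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(float), "haar")
    flat = cA.ravel()
    for i, b in enumerate(bits):
        q = np.round(flat[i] / step)
        if int(q) % 2 != b:          # force quantizer parity to match the bit
            q += 1
        flat[i] = q * step
    return pywt.idwt2((flat.reshape(cA.shape), (cH, cV, cD)), "haar")

def extract_bits(channel, n_bits, step=8.0):
    cA, _ = pywt.dwt2(channel.astype(float), "haar")
    q = np.round(cA.ravel()[:n_bits] / step).astype(int)
    return (q % 2).tolist()

# Toy example on a random 64x64 "channel"
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(img, bits)
print(extract_bits(marked, len(bits)))   # recovers the embedded bits when unattacked
```

In a tamper-detection setting, bits that fail to extract correctly from a block would flag that block as a possibly attacked region.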
Procedia PDF Downloads 255
18626 Solving Operating Room Scheduling Problem by Using Dispatching Rule
Authors: Yang-Kuei Lin, Yin-Yi Chou
Abstract:
In this research, we consider an operating room scheduling problem. The objective is to minimize the total operating cost, which includes idle cost and overtime cost. We propose a dispatching rule that is guaranteed to find feasible solutions for the studied problem efficiently. We compared the proposed dispatching rule with the optimal solutions found by solving an Integer Programming model, and with other solutions found by using modified existing dispatching rules. The computational results indicate that the proposed heuristic can find near-optimal solutions efficiently.
Keywords: assignment, dispatching rule, operating rooms, scheduling
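The paper does not spell out its rule, but the flavor of a dispatching heuristic for this kind of problem can be illustrated with a simple longest-processing-time rule that assigns each surgery to the room that currently finishes earliest, then prices idle and overtime against a regular shift length. All durations, costs and the rule itself are assumptions for illustration, not the authors' method.

```python
# Illustrative dispatching heuristic: sort cases by duration (longest first),
# always assign to the earliest-available room, then evaluate idle and
# overtime cost against a fixed shift length.
SHIFT = 480            # regular shift length per room, minutes (assumed)
IDLE_COST = 1.0        # cost per idle minute (assumed)
OT_COST = 3.0          # cost per overtime minute (assumed)

def schedule(durations, n_rooms):
    finish = [0] * n_rooms                      # current finish time of each room
    assignment = [[] for _ in range(n_rooms)]
    for d in sorted(durations, reverse=True):
        r = min(range(n_rooms), key=lambda i: finish[i])
        assignment[r].append(d)
        finish[r] += d
    idle = sum(max(SHIFT - f, 0) for f in finish)
    overtime = sum(max(f - SHIFT, 0) for f in finish)
    return assignment, IDLE_COST * idle + OT_COST * overtime

cases = [120, 90, 200, 45, 150, 60, 300, 75]    # surgery durations in minutes
plan, total_cost = schedule(cases, n_rooms=2)
print(plan, total_cost)
```

A rule like this runs in a fraction of a second, which is why dispatching heuristics are attractive when Integer Programming becomes too slow for large instances.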
Procedia PDF Downloads 233
18625 The Solution of Nonlinear Partial Differential Equation for The Phenomenon of Instability in Homogeneous Porous Media by Homotopy Analysis Method
Authors: Kajal K. Patel, M. N. Mehta, T. R. Singh
Abstract:
When water is injected into an oil formation area in the secondary oil recovery process, instability occurs near the common interface due to the viscosity difference between the injected water and the native oil. The governing equation gives rise to a non-linear partial differential equation, and its solution has been obtained by the Homotopy Analysis Method with an appropriate initial guess of the solution, together with some conditions and standard relations. The solution gives the average cross-sectional area occupied by the schematic fingers during the occurrence of the instability phenomenon. The numerical and graphical presentation has been developed using Maple software.
Keywords: capillary pressure, homotopy analysis method, instability phenomenon, viscosity
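For readers unfamiliar with the method, the zeroth-order deformation equation of the Homotopy Analysis Method (written here generically, not for the authors' specific porous-media equation) is

\[ (1-q)\,\mathcal{L}\big[\phi(x,t;q)-u_0(x,t)\big] \;=\; q\,\hbar\,\mathcal{N}\big[\phi(x,t;q)\big], \qquad q\in[0,1], \]

where $\mathcal{L}$ is an auxiliary linear operator, $\mathcal{N}$ is the nonlinear operator defined by the governing equation, $u_0$ is the initial guess and $\hbar$ is the convergence-control parameter; expanding $\phi$ in a Taylor series in $q$ and setting $q=1$ yields the series solution $u = u_0 + \sum_{m\ge 1} u_m$.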
Procedia PDF Downloads 496
18624 Numerical Solutions of an Option Pricing Rainfall Derivatives Model
Authors: Clarinda Vitorino Nhangumbe, Ercília Sousa
Abstract:
Weather derivatives are financial products used to cover non-catastrophic weather events, with a weather index as the underlying asset. The rainfall weather derivative pricing model is built on the assumption that the rainfall dynamics follow an Ornstein-Uhlenbeck process, and the partial differential equation approach is used to derive a two-dimensional, time-dependent convection-diffusion partial differential equation, where the spatial variables are the rainfall index and the rainfall depth. To compute approximate solutions of the partial differential equation, appropriate boundary conditions are suggested, and an explicit numerical method is proposed in order to deal efficiently with the different choices of the coefficients involved in the equation. Being an explicit numerical method, it is conditionally stable; the stability region of the numerical method and the order of convergence are therefore discussed. The model is tested on real precipitation data.
Keywords: finite differences method, Ornstein-Uhlenbeck process, partial differential equations approach, rainfall derivatives
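To give a concrete sense of what an explicit scheme for this kind of equation looks like, the sketch below advances a generic two-dimensional convection-diffusion equation u_t = a u_xx + b u_yy - c u_x - d u_y with forward Euler in time and central differences in space. The coefficients, grid, initial condition and boundary treatment are placeholders, not the pricing model of the paper.

```python
import numpy as np

# Generic explicit FTCS step for u_t = a*u_xx + b*u_yy - c*u_x - d*u_y
a, b, c, d = 0.5, 0.5, 0.1, 0.1        # assumed constant coefficients
nx, ny = 50, 50
dx = dy = 1.0 / (nx - 1)
dt = 0.2 * min(dx**2 / (2 * a), dy**2 / (2 * b))   # conservative stability bound

x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
u = np.exp(-50 * ((x[:, None] - 0.5) ** 2 + (y[None, :] - 0.5) ** 2))  # initial condition

for _ in range(200):
    uxx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
    uyy = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dy**2
    ux = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dx)
    uy = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dy)
    u[1:-1, 1:-1] += dt * (a * uxx + b * uyy - c * ux - d * uy)
    # boundaries kept at zero (homogeneous Dirichlet, for illustration only)

print("max value after time stepping:", u.max())
```

The explicit time-step restriction visible in the dt formula is exactly the kind of conditional stability the abstract refers to.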
Procedia PDF Downloads 107
18623 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has long been used as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to address this problem, there is a need to reduce carbon and nitrogen oxides through lean burning, modified combustors and fuel dilution. A numerical investigation has been carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air blends, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate the 1D premixed flame based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape and velocity and on the concentrations of radicals and emissions were observed. It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained at lean conditions (0.5-0.9 equivalence ratio). The addition of hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow and varying equivalence ratio with hydrogen addition. The production of NO is reduced because the combustion happens in a leaner state, which helps in addressing environmental problems.
Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames
Procedia PDF Downloads 114
18622 Error Amount in Viscoelasticity Analysis Depending on Time Step Size and Method Used in ANSYS
Authors: A. Fettahoglu
Abstract:
The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of unusual geometry and domain, like the pavements of bridges, can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely generalized Maxwell elements and Prony series, which are the two methods used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Subsequently, a practical problem, which has an analytical solution given in the literature, is used to verify the applicability of the viscoelasticity tool embedded in ANSYS. Finally, the amount of error in the results of ANSYS is compared with the analytical results to indicate the influence of the method used and the time step size.
Keywords: generalized Maxwell model, finite element method, Prony series, time step size, viscoelasticity
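For reference, the Prony-series representation that underlies the generalized Maxwell model expresses the relaxation modulus as a sum of decaying exponentials (standard form, not specific to the problem analyzed here):

\[ G(t) \;=\; G_\infty \;+\; \sum_{i=1}^{n} G_i \, e^{-t/\tau_i}, \]

where $G_\infty$ is the long-term modulus and each pair $(G_i, \tau_i)$ corresponds to one spring-dashpot branch of the generalized Maxwell model with relaxation time $\tau_i$; the time step size matters because the numerical integration of each exponential term loses accuracy when the step approaches or exceeds the smallest $\tau_i$.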
Procedia PDF Downloads 369
18621 Survivable IP over WDM Network Design Based on 1 ⊕ 1 Network Coding
Authors: Nihed Bahria El Asghar, Imen Jouili, Mounir Frikha
Abstract:
Inter-datacenter transport networks are very bandwidth- and delay-demanding. The data transferred over such a network is also highly QoS-demanding, mostly because a huge volume of data should be transported transparently with regard to the application user. To avoid data transfer failure, a backup path should be reserved, and no re-routing delay should be observed. Dedicated 1+1 protection is, however, not applicable in inter-datacenter transport networks because of the huge spare capacity it requires. In this context, we propose a survivable virtual network with minimal backup based on network coding (1 ⊕ 1) and solve it using a modified Dijkstra-based heuristic.
Keywords: network coding, dedicated protection, spare capacity, inter-datacenters transport network
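The modified heuristic itself is not detailed in the abstract, but its Dijkstra core can be sketched as follows. This is a plain shortest-path routine in Python; the 1 ⊕ 1 coding-aware modification is not reproduced here, and the example topology and link costs are hypothetical.

```python
import heapq

def dijkstra(graph, source):
    """Plain Dijkstra shortest-path: graph is {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Hypothetical WDM topology with link costs
g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
     "C": [("D", 1)], "D": []}
dist, prev = dijkstra(g, "A")
print(dist)   # shortest working-path costs from node A
```

A coding-aware variant would typically run such a routine repeatedly, pruning or re-weighting links so that the coded backup flow shares capacity instead of duplicating it as 1+1 protection does.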
Procedia PDF Downloads 447
18620 The Forensic Analysis of Engravers' Handwriting
Authors: Olivia Rybak-Karkosz
Abstract:
The purpose of this paper is to present the results of scientific research using forensic handwriting analysis. It was conducted to verify the stability and lability of the handwriting of engravers and to check whether engravers transfer their traits from handwriting to plates and other surfaces they rework. The research methodology consisted of compiling representative samples of engravers' signatures written on paper using a ballpoint pen and signatures engraved on other surfaces. The forensic handwriting analysis was conducted using the graphic-comparative method (graphic method), and all traits were analysed. The paper contains a concluding statement of the similarities and differences between the samples.
Keywords: artist's signatures, engraving, forensic handwriting analysis, graphic-comparative method
Procedia PDF Downloads 102
18619 Impact of Depreciation Technique on Taxable Income and Financial Performance of Quoted Consumer Goods Company in Nigeria
Authors: Ibrahim Ali, Adamu Danlami Ahmed
Abstract:
This study examines the impact of depreciation on the taxable income and financial performance of consumer goods companies quoted on the Nigerian Stock Exchange. The study adopts an ex-post facto research design. Data were collected from secondary sources. The findings of the study suggest that the method of depreciation adopted in an organization influences its taxable profit. Depreciation techniques can be depressive, accelerated or linear. It is also recommended that consumer goods companies adjust their method of depreciation to make sure an appropriate method is adopted. This will go a long way toward revitalizing their taxable profit.
Keywords: accelerated, linear, depressive, depreciation
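As a numerical illustration of how the choice of technique shifts taxable profit between years, the short sketch below compares straight-line (linear) and double-declining-balance (accelerated) schedules for a hypothetical asset; the figures are invented for illustration and are not drawn from the study's data.

```python
# Hypothetical asset: cost 1,000,000, residual value 0, useful life 5 years
COST, LIFE = 1_000_000, 5

def straight_line(cost, life):
    return [cost / life] * life

def double_declining(cost, life):
    rate, book, charges = 2 / life, cost, []
    for _ in range(life):
        charge = book * rate
        charges.append(charge)
        book -= charge
    return charges

ebitda = 400_000   # assumed earnings before depreciation, per year
for name, sched in [("linear", straight_line(COST, LIFE)),
                    ("accelerated", double_declining(COST, LIFE))]:
    taxable = [ebitda - d for d in sched]
    print(name, [round(t) for t in taxable])
# Accelerated depreciation lowers taxable profit in early years and raises it in
# later years; the totals differ only by the undepreciated residue of the
# declining-balance schedule.
```

The timing difference, rather than the total charge, is what drives the tax and reported-performance effects the study investigates.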
Procedia PDF Downloads 285
18618 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil
Authors: P. S. Jain, K. D. Bobade, S. J. Surana
Abstract:
A simple, selective, precise and stability-indicating high-performance thin-layer chromatographic method for the analysis of naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase. The solvent system consisted of hexane: ethyl acetate: glacial acetic acid (4:4:2 v/v). The system was found to give a compact spot for naftopidil (Rf value of 0.43 ± 0.02). Densitometric analysis of naftopidil was carried out in the absorbance mode at 253 nm. The linear regression analysis data for the calibration plots showed a good linear relationship, with r2 = 0.999 ± 0.0001 with respect to peak area, in the concentration range of 200-1200 ng per spot. The method was validated for precision, recovery and robustness. The limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation and thermal degradation, and was found to undergo degradation under acidic, basic, oxidative and thermal conditions. The degraded product was well resolved from the pure drug, with a significantly different Rf value. Statistical analysis proves that the method is repeatable, selective and accurate for the estimation of the investigated drug. The proposed HPTLC method can be applied for the identification and quantitative determination of naftopidil in bulk drug and pharmaceutical formulation.
Keywords: naftopidil, HPTLC, validation, stability, degradation
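The reported detection and quantification limits are consistent with the standard ICH formulas based on the standard deviation of the response ($\sigma$) and the slope of the calibration curve ($S$):

\[ \mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S}, \]

which explains why the quoted LOQ (61.68 ng per spot) is roughly $10/3.3 \approx 3$ times the quoted LOD (20.35 ng per spot).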
Procedia PDF Downloads 400
18617 Calibration of Discrete Element Method Parameters for Modelling DRI Pellets Flow
Authors: A. Hossein Madadi-Najafabadi, Masoud Nasiri
Abstract:
The discrete element method (DEM) is a powerful technique for numerically modeling the flow of granular materials such as direct reduced iron (DRI). It enables the study of processes and equipment related to the production and handling of the material. However, the characteristics and properties of the granules have to be adjusted precisely to achieve reliable results in a DEM simulation. The main properties for DEM simulation are size distribution, density, Young's modulus, Poisson's ratio and the contact coefficients of restitution, rolling friction and sliding friction. In the present paper, these properties are determined for the DEM simulation of DRI pellets. A reliable DEM simulation would contribute to optimizing the handling system of DRI in an iron-making plant. Among the mentioned properties, Young's modulus is the most important parameter, and it is usually hard to obtain for particulate solids. Here, a special method is utilized to precisely determine this parameter for DRI.
Keywords: discrete element method, direct reduced iron, simulation parameters, granular material
Procedia PDF Downloads 180
18616 Developing Digital Twins of Steel Hull Processes
Authors: V. Ložar, N. Hadžić, T. Opetuk, R. Keser
Abstract:
The development of digital twins strongly depends on efficient algorithms and their capability to mirror real-life processes. Nowadays, such efforts are required to establish the factories of the future, which face new demands of custom-made production. Steel hull processes face these challenges too. Therefore, it is important to implement design and evaluation approaches based on production system engineering. In this study, the recently developed finite state method is employed to describe the steel hull process as a platform for the implementation of digital twinning technology. The application is justified by comparing the finite state method with the analytical approach. The method is employed to rebuild a model of a real shipyard ship hull process using a combination of serial and splitting lines. Key performance indicators such as the production rate, work in process, probability of starvation and probability of blockage are calculated and compared to the corresponding results obtained through a simulation approach using the software tool Enterprise Dynamics. This study confirms that the finite state method is a suitable tool for digital twinning applications. The conclusion highlights the advantages and disadvantages of the methods employed in this context.
Keywords: digital twin, finite state method, production system engineering, shipyard
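To make the key performance indicators concrete, the sketch below runs a crude time-stepped simulation of a two-station serial line with a finite buffer and unreliable stations, and estimates the production rate, average work in process, and starvation/blockage probabilities. It is only an illustration of the KPIs, not the finite state method or the shipyard model of the study, and all parameters are invented.

```python
import random

random.seed(0)
P_UP1, P_UP2, BUF_CAP, STEPS = 0.9, 0.85, 3, 100_000   # assumed parameters

buffer_level, produced = 0, 0
starved, blocked, wip_sum = 0, 0, 0
for _ in range(STEPS):
    m1_up = random.random() < P_UP1
    m2_up = random.random() < P_UP2
    # Station 2 pulls from the buffer first
    if m2_up:
        if buffer_level > 0:
            buffer_level -= 1
            produced += 1
        else:
            starved += 1          # station 2 up but no work available
    # Station 1 pushes into the buffer
    if m1_up:
        if buffer_level < BUF_CAP:
            buffer_level += 1
        else:
            blocked += 1          # station 1 up but buffer full
    wip_sum += buffer_level

print("production rate:", produced / STEPS)
print("average WIP    :", wip_sum / STEPS)
print("P(starvation)  :", starved / STEPS)
print("P(blockage)    :", blocked / STEPS)
```

Analytical or finite-state approaches compute the same quantities from the underlying Markov chain instead of sampling them, which is what makes them attractive for fast digital-twin evaluation.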
Procedia PDF Downloads 99
18615 Development and Validation of a Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test
Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati
Abstract:
Rifampicin is a semisynthetic antibiotic derivative of rifamycin B produced by Streptomyces mediterranei. Rifampicin has been used worldwide as a first-line drug prescribed throughout tuberculosis therapy. This study aims to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma, and to apply it in a bioequivalence study. The chromatographic separation was achieved on an RP-C18 column (LaChrom Hitachi, 250 x 4.6 mm, 5 μm), utilizing a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min. Detection was carried out at 337 nm using a spectrophotometer. The developed method was statistically validated for linearity, accuracy, limit of detection, limit of quantitation, precision and specificity. The specificity of the method was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limit of detection and limit of quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The regression curve of the standard was linear (r > 0.999) over a concentration range of 20.0-100.0 µg/mL. The mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed reproducibility (R.S.D. 2.98% and 1.13%, respectively). Therefore, the method can be used for routine analysis of rifampicin in human plasma and in bioequivalence studies. The validated method was successfully applied in a pharmacokinetic and bioequivalence study of a rifampicin tablet in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24) and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 hours, 29.16 ± 4.05 µg/mL·h and 29.41 ± 4.07 µg/mL·h, respectively. Meanwhile, for the reference formulation, the values were 5.04 ± 0.54 µg/mL, 1.31 hours, 27.20 ± 3.98 µg/mL·h and 27.49 ± 4.01 µg/mL·h. From the bioequivalence study, the 90% CIs for the test/reference formulation ratio for the logarithmic transformations of Cmax and AUC(0-24) were 97.96-129.48% and 99.13-120.02%, respectively. According to the bioequivalence test guidelines of the European Commission-European Medicines Agency, it can be concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
Keywords: validation, HPLC, plasma, bioequivalence
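A simplified illustration of how such a 90% confidence interval for the geometric mean ratio is computed from log-transformed, within-subject data is sketched below. A real 2x2 crossover analysis would use an ANOVA with sequence and period effects; the subject values here are invented, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired AUC(0-24) values for 12 subjects (test vs. reference)
test = np.array([29.1, 27.5, 31.2, 28.0, 30.4, 26.9, 29.8, 28.7, 32.1, 27.3, 30.0, 28.4])
ref  = np.array([27.2, 26.1, 29.5, 27.0, 28.8, 25.9, 28.1, 27.5, 30.2, 26.4, 28.6, 27.1])

diff = np.log(test) - np.log(ref)          # within-subject log differences
n = diff.size
se = diff.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)          # two-sided 90% CI uses the 95th percentile

lo, hi = diff.mean() - t90 * se, diff.mean() + t90 * se
print("GMR 90% CI: {:.2f}% - {:.2f}%".format(100 * np.exp(lo), 100 * np.exp(hi)))
# Bioequivalence is typically concluded if this interval lies within 80.00%-125.00%.
```

Working on the log scale and exponentiating the interval back is what turns the analysis into a statement about the ratio of geometric means, as reported in the abstract.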
Procedia PDF Downloads 291
18614 Use of Cobalt Graphene in Place of Platinum in Catalytic Converter
Authors: V. Srinivasan, S. M. Sriram Nandan
Abstract:
Today, one of the most important problems faced by mankind is pollution, which is increasing at a very high rate. It affects the ecosystem of the environment and also contributes to the greenhouse effect. Exhaust gases from automobiles are a major cause of this pollution, and the number of automobiles has grown so large that it has raised the pollution of our world to an alarming rate. There are two methods of controlling pollution, namely the pre-pollution control method and the post-pollution control method. This paper is based on controlling emissions by the post-pollution control method. The ratio of the surface area of nanoparticles to their volume is inversely proportional to the radius of the nanoparticles, so decreasing the radius increases this ratio, leading to an increased rate of reaction and thus a reduction in the concentration of pollutants. To achieve this objective, the use of a cobalt-graphene catalyst is proposed. The proposed method is mainly intended to avoid the high cost of platinum, as it is expensive, and the cobalt-graphene catalyst has a longer life than platinum-based catalysts.
Keywords: automobile emissions, catalytic converter, cobalt-graphene, replacement of platinum
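The claimed scaling follows directly from the geometry of a (roughly spherical) nanoparticle:

\[ \frac{A}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r}, \]

so halving the particle radius doubles the catalytic surface area available per unit volume of catalyst material.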
Procedia PDF Downloads 390
18613 Synthesis and Characterization of Graphene Composites with Application for Sustainable Energy
Authors: Daniel F. Sava, Anton Ficai, Bogdan S. Vasile, Georgeta Voicu, Ecaterina Andronescu
Abstract:
The energy crisis and environmental contamination are very serious problems; therefore, searching for better and sustainable renewable energy is a must. It is predicted that the global energy demand will double by 2050. Solar water splitting and photocatalysis are considered to be among the solutions to these issues. The use of oxide semiconductors for solar water splitting and photocatalysis started in 1972 with the experiments of Fujishima and Honda on TiO2 electrodes. Since then, the evolution of nanoscience and characterization methods has led to better control of the size, shape and properties of materials. Although the past decade's advancements are astonishing, for these applications the properties have to be controlled at a much finer level, allowing control of charge-carrier lifetimes, energy level positions, charge trapping centers, etc. Graphene has attracted a lot of attention since its discovery in 2004 due to the excellent electrical, optical, mechanical and thermal properties that it possesses. These properties make it an ideal support for photocatalysts; thus, graphene composites with oxide semiconductors are of great interest. We present in this work the synthesis and characterization of graphene-related materials, oxide semiconductors and their different composites. These materials can be used in constructing devices for different applications (batteries, water splitting devices, solar cells, etc.), thus showing their application flexibility. The synthesized materials include TiO2, ZnO and Fe2O3 of different morphologies and sizes, obtained through hydrothermal and sol-gel methods, and graphene oxide, which is synthesized through a modified Hummers method and reduced with different agents. Graphene oxide and its reduced form could also be used as single materials for transparent conductive films. The obtained single materials and composites were characterized by several methods: XRD, SEM, TEM, IR spectroscopy, Raman, XPS and BET adsorption/desorption isotherms. The results show the variation of the properties with the synthesis parameters and with the size and morphology of the particles.
Keywords: composites, graphene, hydrothermal, renewable energy
Procedia PDF Downloads 498
18612 A Redesigned Pedagogy in Introductory Programming Reduces Failure and Withdrawal Rates by Half
Authors: Said Fares, Mary Fares
Abstract:
It is well documented that introductory computer programming courses are difficult and that failure rates are high. The aim of this project was to reduce the high failure and withdrawal rates in learning to program. This paper presents a number of changes in module organization and in the instructional delivery system in teaching CS1. Daily out-of-class help sessions and tutoring services were provided, and interactive lectures and laboratories, online resources, and timely feedback were introduced. Five years of data on 563 students in 21 sections were collected and analyzed. The primary results show that the failure and withdrawal rates were cut by more than half. Student surveys indicate a positive evaluation of the modified instructional approach, overall satisfaction with the course and, consequently, higher success and retention rates.
Keywords: failure rate, interactive learning, student engagement, CS1
Procedia PDF Downloads 308
18611 Optimizing Sustainable Graphene Production: Extraction of Graphite from Spent Primary and Secondary Batteries for Advanced Material Synthesis
Authors: Pratima Kumari, Sukha Ranjan Samadder
Abstract:
This research aims to contribute to the sustainable production of graphene materials by exploring the extraction of graphite from spent primary and secondary batteries. The increasing demand for graphene, a versatile and high-performance material, necessitates environmentally friendly methods for its synthesis. The process involves a well-planned methodology, beginning with the gathering and categorization of batteries, followed by disassembly and careful removal of graphite from the anode structures. The use of environmentally friendly solvents and mechanical techniques ensures an efficient and eco-friendly extraction of graphite. Advanced approaches such as the modified Hummers' method and a chemical reduction process are utilized for the synthesis of graphene materials, with a focus on optimizing parameters. Various analytical techniques, such as Fourier-transform infrared spectroscopy, X-ray diffraction, scanning electron microscopy, thermogravimetric analysis and Raman spectroscopy, were employed to validate the quality and structure of the produced graphene materials. The major findings of this study reveal the successful implementation of the methodology, leading to the production of high-quality graphene materials suitable for advanced material applications. Thorough characterization using various advanced techniques validates the structural integrity and purity of the graphene. The economic viability of the process is demonstrated through a comprehensive economic analysis, highlighting the potential for large-scale production. This research contributes to the field of sustainable graphene production by offering a systematic methodology that efficiently transforms spent batteries into valuable graphene resources. Furthermore, the findings not only showcase the potential for upcycling electronic waste but also address the pressing need for environmentally conscious processes in advanced material synthesis.
Keywords: spent primary batteries, spent secondary batteries, graphite extraction, advanced material synthesis, circular economy approach
Procedia PDF Downloads 54
18610 Developing Scaffolds for Tissue Regeneration using Low Temperature Plasma (LTP)
Authors: Komal Vig
Abstract:
Cardiovascular disease (CVD)-related deaths occur in 17.3 million people globally each year, accounting for 30% of all deaths worldwide, with the annual incidence of deaths predicted to reach 23.3 million globally by 2030. Autologous bypass grafts remain an important therapeutic option for the treatment of CVD, but the poor quality of the donor patient's blood vessels, the invasiveness of the resection surgery, and postoperative movement restrictions create issues. The present study aims to improve the endothelialization of the intimal surface of the graft by using low temperature plasma (LTP) to increase cell attachment and proliferation. Polytetrafluoroethylene (PTFE) was treated with LTP. Air was used as the feed gas, and the pressure in the plasma chamber was kept at 800 mTorr. Scaffolds were also modified with gelatin and collagen by a dipping method. Human umbilical vein endothelial cells (HUVEC) were plated on the developed scaffolds, and cell proliferation was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay and by microscopy. mRNA expression levels of different cell markers were investigated using quantitative real-time PCR (qPCR). XPS confirmed the introduction of oxygenated functionalities from LTP. HUVEC cells showed 80% seeding efficiency on the scaffold. Microscopic and MTT assays indicated an increase in cell viability on LTP-treated scaffolds, especially when treated with gelatin or collagen, compared to untreated scaffolds. Gene expression studies show enhanced expression of the cell adhesion marker integrin-α5 gene after LTP treatment. LTP-treated scaffolds exhibited better cell proliferation and viability compared to untreated scaffolds. Protein treatment of the scaffold increased cell proliferation. Based on our initial results, more scaffold alternatives will be developed and investigated for cell growth and vascularization studies. Acknowledgments: This work is supported by the NSF EPSCoR RII-Track-1 Cooperative Agreement OIA-2148653.
Keywords: LTP, HUVEC cells, vascular graft, endothelialization
Procedia PDF Downloads 71
18609 Development of a Turbulent Boundary Layer Wall-Pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented from the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for pressure fluctuations and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by making modifications to these early models or from physical principles. Overall, these models have had varying levels of accuracy, but, in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
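The study's model is built in R, but the core of a forward stepwise selection can be sketched in a few lines; the Python version below greedily adds the candidate predictor that most reduces the AIC of an ordinary least-squares fit, using synthetic data in place of the wind-tunnel measurements.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))                     # synthetic candidate predictors
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def forward_stepwise(X, y):
    remaining, selected = list(range(X.shape[1])), []
    best_aic = sm.OLS(y, np.ones((len(y), 1))).fit().aic   # intercept-only model
    improved = True
    while improved and remaining:
        improved = False
        scores = []
        for j in remaining:
            design = sm.add_constant(X[:, selected + [j]])
            scores.append((sm.OLS(y, design).fit().aic, j))
        aic, j = min(scores)
        if aic < best_aic:                      # keep the predictor only if AIC drops
            best_aic, improved = aic, True
            selected.append(j)
            remaining.remove(j)
    return selected, best_aic

print(forward_stepwise(X, y))   # expected to pick predictors 0 and 2
```

Because uninformative predictors never lower the information criterion, they are filtered out automatically, which is the feature-selection behavior the abstract highlights.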
Procedia PDF Downloads 135
18608 Solving Momentum and Energy Equation by Using Differential Transform Techniques
Authors: Mustafa Ekici
Abstract:
Natural convection is a basic process that is important in a wide variety of practical applications. In essence, a heated fluid expands and rises from buoyancy due to decreased density. Numerous papers have been written on natural or mixed convection in vertical ducts heated on the side. The governing momentum and energy equations have proved to be valuable tools for modelling many phenomena, such as those in fluid dynamics. Finding solutions to such equations, or systems of equations, is in general not an easy task. We propose a method, called the differential transform method, for solving non-linear equations and compare the results with those of some other techniques. Illustrative examples show that the results are in good agreement.
Keywords: differential transform method, momentum, energy equation, boundary value problem
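For context, the one-dimensional differential transform of a function $u(x)$ about $x = x_0$ and its inverse are defined (in the standard form of the method, stated here generically) as

\[ U(k) = \frac{1}{k!}\left[\frac{d^{k}u(x)}{dx^{k}}\right]_{x=x_0}, \qquad u(x) = \sum_{k=0}^{\infty} U(k)\,(x-x_0)^{k}, \]

so that differential equations are converted into recurrence relations for the coefficients $U(k)$, and the solution is recovered as a truncated power series.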
Procedia PDF Downloads 461
18607 Approximate Solution to Non-Linear Schrödinger Equation with Harmonic Oscillator by Elzaki Decomposition Method
Authors: Emad K. Jaradat, Ala’a Al-Faqih
Abstract:
Nonlinear Schrödinger equations are regularly encountered in numerous areas of science and engineering. A variety of analytical methods have been proposed for solving these equations. In this work, we construct an approximate solution for the nonlinear Schrödinger equation, with a harmonic oscillator potential, by the Elzaki Decomposition Method (EDM). To illustrate the effects of the harmonic oscillator on the behavior of the wave function, the nonlinear Schrödinger equation in one and two dimensions is considered. The results show that it is convenient and easy to apply the EDM to the one- and two-dimensional Schrödinger equation.
Keywords: non-linear Schrodinger equation, Elzaki decomposition method, harmonic oscillator, one and two-dimensional Schrodinger equation
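As background (not taken from the abstract itself), the Elzaki transform on which the decomposition is built is usually defined, for suitable $f(t)$, as

\[ E\big[f(t)\big](v) = v \int_{0}^{\infty} f(t)\, e^{-t/v}\, dt, \]

a Laplace-type integral transform; in the EDM, the transform handles the linear part of the equation, while the nonlinear term is typically expanded in Adomian polynomials and the solution is built up as a series of correction terms.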
Procedia PDF Downloads 187
18606 Automatic MC/DC Test Data Generation from Software Module Description
Authors: Sekou Kangoye, Alexis Todoskoff, Mihaela Barreau
Abstract:
Modified Condition/Decision Coverage (MC/DC) is a structural coverage criterion that is highly recommended or required for safety-critical software. Therefore, many testing standards include this criterion and require it to be satisfied at a particular level of testing (e.g., validation and unit levels). However, a significant amount of time is needed to meet those requirements. In this paper, we propose to automate MC/DC test data generation. Thus, we present an approach to automatically generate MC/DC test data from a software module description written in a dedicated language. We introduce a new merging approach that provides high MC/DC coverage for the description with only a small number of test cases.
Keywords: domain-specific language, MC/DC, test data generation, safety-critical software coverage
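To recall what the criterion demands, the brute-force sketch below searches, for a sample decision A and (B or C), pairs of test vectors that differ in exactly one condition and flip the decision outcome, i.e., that show each condition's independent effect. It is a didactic illustration of MC/DC, not the generation approach of the paper.

```python
from itertools import product

def decision(a, b, c):
    return a and (b or c)          # sample decision with three conditions

conditions = ["a", "b", "c"]
vectors = list(product([False, True], repeat=3))

# For each condition, find an "independence pair": two vectors identical except
# for that condition, whose decision outcomes differ.
pairs = {}
for i, name in enumerate(conditions):
    for v in vectors:
        w = list(v)
        w[i] = not w[i]
        if decision(*v) != decision(*w):
            pairs[name] = (v, tuple(w))
            break

test_set = {v for p in pairs.values() for v in p}
print("independence pairs:", pairs)
print("MC/DC test set size:", len(test_set))   # 4 vectors suffice for this decision
```

Exhaustive search like this explodes combinatorially for realistic modules, which is why dedicated generation and merging strategies such as the one proposed in the paper are needed.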
Procedia PDF Downloads 441
18605 Effects of E-Learning Mode of Instruction and Conventional Mode of Instruction on Student’s Achievement in English Language in Senior Secondary Schools, Ibadan Municipal, Nigeria
Authors: Ibode Osa Felix
Abstract:
The use of e-learning has intensified in the academic world following the outbreak of the COVID-19 pandemic in early 2020. E-learning made its debut in teaching and learning many years ago, when it emerged as an aspect of computer-based teaching, but never before has it become as important and widely used as it is now. Previous studies have revealed an ongoing debate among researchers on the efficacy of the e-learning mode of instruction over the traditional teaching method. Therefore, this study examined the effect of e-learning and the conventional mode of instruction on students' achievement in the English language. The study is a quasi-experimental study in which 230 students from three public secondary schools were selected through a simple random sampling technique. Three instruments were developed, namely an E-learning Instructional Guide (ELIG), a Conventional Method of Instruction Guide (CMIG), and an English Language Achievement Test (ELAT). The results revealed that students taught through the conventional method had better results than students taught online. The results also show that girls taught with the conventional method of teaching performed better than boys in the English language. The study therefore recommends that efforts be made by the educational authorities in Nigeria to provide internet facilities to enhance practice among learners and to provide electricity to power e-learning equipment in secondary schools. This will boost e-learning practices among teachers and students and, consequently, allow e-learning to overtake the conventional method of teaching in due course.
Keywords: e-learning, conventional method of teaching, achievement in english, electricity
Procedia PDF Downloads 170
18604 Seismic Resistant Mechanism of Two-by-four Wooden Frame with Vibration Control Device
Authors: Takumi Ito, Kurumi Kurokawa, Dong Hang Wu, Takashi Nagumo, Haruhiko Hirata
Abstract:
The structural system of wooden houses built by the two-by-four method is widely adopted in many countries, and various types of vibration control systems for building structures have been developed in countries with frequent earthquakes. In this study, a vibration control device called the "Scaling Frame" (SF) is suggested and applied to wooden two-by-four structures. This paper presents an experimental study investigating the restoring force characteristics of two-by-four frames with the SF device installed. The seismic resistant performance is estimated experimentally, and the applicability and effectiveness are also discussed.
Keywords: two-by-four method, seismic vibration control, horizontally loading test, restoring force characteristics
Procedia PDF Downloads 299