Search results for: operation estimation
715 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump
Authors: Ravi Verma
Abstract:
A cryosorption pump is the best solution for achieving a clean, vibration-free ultra-high vacuum. Furthermore, its operation is free from the influence of electric and magnetic fields. Owing to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel cooled with the help of a cryogen or cryocooler, (b) an adsorbent used to adsorb the gas molecules, and (c) an epoxy that holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we have attempted to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive was measured with an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model to find the thermal conductivity of the composite, in which filler particles such as graphene are randomly distributed in a base matrix of epoxy. The developed model considers the complete spatial random distribution of the filler particles, described by the binomial distribution. The results obtained from the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K.
Due to the non-empirical nature of the proposed model, it will be useful for predicting other properties of composite materials involving a filler in a base matrix. The present studies will aid the understanding of low-temperature heat transfer, which in turn will be useful for the development of a high-performance cryosorption pump.
Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity
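The abstract's idea of randomly placed filler described by a binomial distribution can be illustrated with a minimal Monte Carlo sketch. This is not the authors' analytical model: the conductivity values, cell count, and the series/parallel mixing bounds below are illustrative assumptions only.

```python
import random

def effective_conductivity(k_matrix, k_filler, fill_fraction,
                           n_cells=100_000, seed=42):
    """Sketch: each cell is filler with probability fill_fraction
    (binomial occupancy), and the composite conductivity is bracketed
    by the series (lower) and parallel (upper) mixing bounds."""
    rng = random.Random(seed)
    n_filler = sum(rng.random() < fill_fraction for _ in range(n_cells))
    phi = n_filler / n_cells  # realised filler volume fraction
    k_parallel = phi * k_filler + (1 - phi) * k_matrix        # upper bound
    k_series = 1.0 / (phi / k_filler + (1 - phi) / k_matrix)  # lower bound
    return k_series, k_parallel

# Hypothetical values: epoxy ~0.05 W/(m*K) at cryogenic temperature,
# graphene orders of magnitude higher, 5% filler fraction.
lo, hi = effective_conductivity(k_matrix=0.05, k_filler=100.0,
                                fill_fraction=0.05)
```

Any physically consistent model for the composite should predict a conductivity between these two bounds.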
Procedia PDF Downloads 89
714 Wearable System for Prolonged Cooling and Dehumidifying of PPE in Hot Environments
Abstract:
While personal protective equipment (PPE) protects healthcare personnel from exposure to harmful surroundings, it creates a barrier to the dissipation of body heat and perspiration, leading to severe heat stress during prolonged exposure, especially in hot environments. Most existing personal cooling strategies have limitations in achieving effective cooling performance with long duration and light weight. This work aimed to develop a lightweight (<1.0 kg) and inexpensive wearable air cooling and dehumidifying system (WCDS) that can be applied underneath protective clothing and provide 50 W of mean cooling power for more than 5 hours at a 35°C environmental temperature without compromising the protection of the PPE. In the WCDS, blowers will be used to activate an internal air circulation inside the clothing microclimate, which does not interfere with the protection of the PPE. An air cooling and dehumidifying chamber (ACMR) with a specific design will be developed to reduce the air temperature and humidity inside the protective clothing. The cooled and dried air will then be supplied to the upper chest and back areas through a branching tubing system for personal cooling. A detachable ice cooling unit will be applied from the outside of the PPE to extract heat from the clothing microclimate. This combination allows convenient replacement of the cooling unit to refresh the cooling effect, realizing continuous cooling without taking off the PPE or adding too much weight. A preliminary thermal manikin test showed that the WCDS was able to reduce the microclimate temperature inside the PPE by an average of about 8°C for 60 minutes at environmental temperatures of 28.0°C and 33.5°C. Replacing the ice cooling unit every hour can maintain this cooling effect, while the longest operation duration is determined by the battery of the blowers, which lasts for about 6 hours.
This design is especially helpful for PPE users, such as healthcare workers in infectious and hot environments, where continuous cooling and dehumidifying are needed but changing protective clothing may increase the risk of infection. The new WCDS will not only improve the thermal comfort of PPE users but can also extend their safe working duration.
Keywords: personal thermal management, heat stress, PPE, health care workers, wearable device
Procedia PDF Downloads 79
713 Real Energy Performance Study of Large-Scale Solar Water Heater by Using Remote Monitoring
Authors: F. Sahnoune, M. Belhamel, M. Zelmat
Abstract:
Solar thermal systems available today provide reliability, efficiency and significant environmental benefits. In housing, they can satisfy the hot water demand and reduce energy bills by 60% or more. Additionally, collective or large-scale solar thermal systems are increasingly used under different conditions for hot water applications and space heating in hotels, multi-family homes, hospitals, nursing homes and sports halls, as well as in commercial and industrial buildings. However, in-situ real performance data for collective solar water heating systems have not been extensively reported. This paper focuses on the study of the real energy performance of a collective solar water heating system using remote monitoring under Algerian climatic conditions. The aims are to ensure proper operation of the system at all times, to determine the system performance, and to check to what extent the solar performance guarantee can be achieved. The measurements were performed on an active indirect heating system with 12 m² of flat-plate collector surface installed in Algiers and equipped with various sensors. The sensors transmit measurements to a local station which controls the pumps, valves, electrical auxiliaries, etc. The simulation of the installation was developed using the software SOLO 2000. The system provides a yearly solar yield of 6277.5 kWh for an estimated annual need of 7896 kWh; the yearly average solar cover rate amounts to 79.5%. The productivity is of the order of 523.13 kWh/m²/year. Simulation results are compared to measured results and to the guaranteed solar performance. The remote monitoring shows that 90% of the expected solar results can easily be guaranteed over a long period. Furthermore, the installed remote monitoring unit was able to detect some dysfunctions.
It follows that remote monitoring is an important tool in the energy management of building equipment.
Keywords: large-scale solar water heater, real energy performance, remote monitoring, solar performance guarantee, tool to promote solar water heater
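The headline figures in the abstract are mutually consistent, which a few lines of arithmetic confirm (all values are taken directly from the text; nothing here is measured independently):

```python
# Figures quoted in the abstract.
yearly_solar_yield_kwh = 6277.5   # measured yearly solar yield
yearly_need_kwh = 7896.0          # estimated annual hot-water need
collector_area_m2 = 12.0          # flat-plate collector surface

# Solar cover rate: fraction of the annual need met by solar.
solar_cover_rate = yearly_solar_yield_kwh / yearly_need_kwh   # ~0.795

# Productivity: yield per square metre of collector per year.
productivity = yearly_solar_yield_kwh / collector_area_m2     # ~523.13 kWh/m2/yr
```

Both derived numbers match the 79.5% cover rate and 523.13 kWh/m²/year quoted in the abstract.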
Procedia PDF Downloads 243
712 Lean Production to Increase Reproducibility and Work Safety in the Laser Beam Melting Process Chain
Authors: C. Bay, A. Mahr, H. Groneberg, F. Döpper
Abstract:
Additive Manufacturing processes are becoming increasingly established in industry for the economic production of complex prototypes and functional components. Laser beam melting (LBM), the most frequently used Additive Manufacturing technology for metal parts, has been gaining industrial importance for several years. The LBM process chain – from material storage to machine set-up and component post-processing – requires many manual operations. These steps often depend on the manufactured component and are therefore not standardized; instead, they depend on the experience of the machine operator, e.g., levelling the build plate and adjusting the first powder layer in the LBM machine. This lack of standardization limits the reproducibility of component quality. When processing metal powders with inhalable and alveolar particle fractions, the machine operator is at high risk due to the high reactivity and the toxic (e.g., carcinogenic) effect of the various metal powders. Faulty execution of an operation or unintentional omission of safety-relevant steps can impair the health of the machine operator. In this paper, all steps of the LBM process chain are first analysed in terms of their influence on the two aforementioned challenges: reproducibility and work safety. Standardization to avoid errors increases the reproducibility of component quality as well as the adherence to and correct execution of safety-relevant operations. The corresponding lean method, 5S, is therefore applied in order to develop approaches in the form of recommended actions that standardize the work processes. These approaches are then evaluated in terms of ease of implementation and their potential for improving reproducibility and work safety. The analysis and evaluation showed that sorting tools and spare parts, as well as standardizing the workflow, are likely to increase reproducibility.
Organizing the operational steps and production environment decreases the hazards of material handling and consequently improves work safety.
Keywords: additive manufacturing, lean production, reproducibility, work safety
Procedia PDF Downloads 184
711 Thomas Kuhn, the Accidental Theologian: An Argument for the Similarity of Science and Religion
Authors: Dominic McGann
Abstract:
Applying Kuhn’s model of paradigm shifts in science to cases of doctrinal change in religion has been a common area of study in recent years. Few authors, however, have sought an explanation for the ease with which this model of theory change in science can be applied to cases of religious change. In order to provide such an explanation, this paper aims to answer one central question: why can a theory intended for the analysis of the history of science be applied to something as disparate as the doctrinal history of religion with little to no modification? By way of answering this question, this paper begins with an explanation of Kuhn’s model and its applications in the field of religious studies. Following this, Massa’s recently proposed explanation for this phenomenon, along with its notable flaws, is examined by way of framing the central proposal of this article: that the operative parts of scientific and religious change function on the same fundamental concept of changes in understanding. Focusing its argument on this key concept, this paper seeks to illustrate its operation in cases of religious conversion and in Kuhn’s notion of the incommensurability of different scientific paradigms. The conjecture of this paper is that just as a Pagan-turned-Christian ceases to hear Thor’s hammer when they hear a clap of thunder, so too does a Ptolemaic-turned-Copernican astronomer cease to see the Sun orbiting the Earth when they view a sunrise. In both cases, the agent in question has undergone a similar change in universal understanding, which provides us with a fundamental connection between changes in religion and changes in science. Following an exploration of this connection, this paper considers the implications that such a connection has for the concept of the division between religion and science.
This will, in turn, lead to the conclusion that religion and science are more alike than they are opposed with regard to the fundamental notion of understanding, thereby providing an answer to our central question. The major finding of this paper is that Kuhn’s model can be applied to religious cases so easily because changes in science and changes in religion operate on the same type of change in understanding. Therefore, in summary, science and religion share a crucial similarity and are not as disparate as they first appear.
Keywords: Thomas Kuhn, science and religion, paradigm shifts, incommensurability, insight and understanding, philosophy of science, philosophy of religion
Procedia PDF Downloads 171
710 Effects of Machining Parameters on the Surface Roughness and Vibration of the Milling Tool
Authors: Yung C. Lin, Kung D. Wu, Wei C. Shih, Jui P. Hung
Abstract:
High-speed and high-precision machining have become the most important technologies in the manufacturing industry. The surface roughness of high-precision components is regarded as an important characteristic of product quality. However, machining chatter can damage the machined surface and restricts process efficiency. Therefore, selection of appropriate cutting conditions is important to prevent the occurrence of chatter. In addition, vibration of the spindle tool also affects the surface quality, which implies that the surface precision can be controlled by monitoring the vibration of the spindle tool. Based on this concept, this study investigated the influence of the machining conditions on the surface roughness and the vibration of the spindle tool. To this end, a series of machining tests was conducted on aluminum alloy. In the tests, the vibration of the spindle tool was measured using acceleration sensors, and the surface roughness of the machined parts was examined using a white light interferometer. The response surface methodology (RSM) was employed to establish mathematical models for predicting surface finish and tool vibration, respectively. The correlation between the surface roughness and spindle tool vibration was also analyzed by ANOVA. According to the machining tests, machined surfaces with and without chattering were marked on the lobe diagram as verification of the machining conditions. Using multivariable regression analysis, the mathematical models for predicting the surface roughness and tool vibrations were developed from the machining parameters: cutting depth (a), feed rate (f) and spindle speed (s). The predicted roughness agrees well with the measured roughness, with an average error of 10%. The average error between the measured tool vibrations and the predictions of the mathematical model is about 7.39%.
In addition, the tool vibration under various machining conditions was found to correlate positively with the surface roughness (r = 0.78). In conclusion, mathematical models were successfully developed for predicting the surface roughness and the vibration level of the spindle tool under different cutting conditions, which can help to select appropriate cutting parameters and to monitor machining conditions to achieve high surface quality in milling operations.
Keywords: machining parameters, machining stability, regression analysis, surface roughness
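The kind of multivariable regression described above can be sketched in a few lines. The power-law model form Ra = C·aˣ·fʸ·sᶻ and the synthetic data below are assumptions for illustration (the abstract does not give the fitted model or the measurements); fitting is done by ordinary least squares in log space.

```python
import numpy as np

# Synthetic machining data generated from known exponents; NOT the
# paper's measurements. Ranges are plausible values for milling.
rng = np.random.default_rng(0)
a = rng.uniform(0.2, 2.0, 50)      # cutting depth (mm)
f = rng.uniform(0.05, 0.3, 50)     # feed rate (mm/tooth)
s = rng.uniform(4000, 12000, 50)   # spindle speed (rpm)
Ra = 0.8 * a**0.3 * f**0.9 * s**-0.2   # "true" roughness (um), noise-free

# Taking logs turns the power law into a linear model,
# log Ra = log C + x*log a + y*log f + z*log s, solvable by least squares.
X = np.column_stack([np.ones_like(a), np.log(a), np.log(f), np.log(s)])
coef, *_ = np.linalg.lstsq(X, np.log(Ra), rcond=None)
C, x, y, z = np.exp(coef[0]), coef[1], coef[2], coef[3]
```

With noise-free synthetic data the fit recovers the generating exponents exactly; on real measurements the residuals would give the average prediction error the abstract reports.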
Procedia PDF Downloads 231
709 Slowness in Architecture: The Pace of Human Engagement with the Built Environment
Authors: Jaidev Tripathy
Abstract:
A human generation’s lifestyle, behaviors, habits, and actions are governed heavily by homogeneous mindsets. The current scenario, however, is witnessing a rapid gap in this homogeneity as a result of an intervention, or rather the dominance, of the digital revolution in human lifestyle. The current mindset of mass production, employment, multi-tasking, rapid involvement, and stiff competition to stay above the rest has led to a major shift in human consciousness. Architecture, as an entity, is being perceived differently. The screens are replacing the skies. The pace at which operation and evolution take place has increased. It is paradoxical that time seems to be moving faster despite the intention to save time. In parallel, there is an evident shift in architectural typologies spanning different generations. The architecture of today seems influenced heavily from here and there. Mass production of buildings and over-exploitation of resources give shape to uninspiring algorithmic designs, ambiguously catering to multiple user groups, and this has become a prevalent theme. Borrow-and-steal replaces influence, and the diminishing depth in today’s designs reflects a lack of understanding and connection. The digitally dominated world, perceived as an aid to connect and network, is making humans less capable of real-life interaction and understanding. It is not wrong, but it does not seem right either. The level of engagement between human beings and the built environment is a concern which surfaces. This leads to a question: does human engagement drive architecture, or does architecture drive human engagement? This paper attempts to re-examine architecture's capacity, and its relation to pace, to influence the conscious decisions of a human being. Secondary research, supported by case examples, helps in understanding the translation of human engagement with the built environment through the physicality of architecture.
The underlying theme is pace and the role of slowness in the context of human behaviors, with the aim of bridging the widening gap between the human race and the architecture it gives shape to, and thereby avoiding a possible future dystopian world.
Keywords: junkspace, pace, perception, slowness
Procedia PDF Downloads 109
708 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
The industrial application of deep machine learning to classify gamma-ray and neutron events is investigated in this study. Identification using convolutional and recurrent neural networks has shown significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. The nonlinear feature extraction performed by a neural network involves a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information is preprocessed by finding the frequency components over time and using them as a training dataset. The Fourier transform used to extract the frequency components is optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4. The readout electronic noise was simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, further improved the classification accuracy of the neural networks. The single-prediction approach to discriminating gamma and neutron events has shown high accuracy using deep learning. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes.
Tuning the deep learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
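The spectrogram preprocessing step (a windowed short-time Fourier transform of the detector signal) can be sketched as below. The window length, hop size, and Hann window are illustrative assumptions, not the paper's optimized parameters.

```python
import numpy as np

def spectrogram(signal, win_len=256, hop=128):
    """Minimal STFT-based spectrogram sketch: a Hann window reduces
    spectral leakage before each FFT frame (the 'suitable windowing
    function' step the abstract mentions). Hypothetical parameters."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))  # one-sided magnitude
    return np.array(frames)   # shape: (n_frames, win_len // 2 + 1)

# Sanity check on a pure 200 Hz tone sampled at 4096 Hz.
t = np.arange(4096) / 4096.0
spec = spectrogram(np.sin(2 * np.pi * 200.0 * t))
```

Each row of the result is one time slice of the frequency content; stacks of such rows form the training inputs for the classifier.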
Procedia PDF Downloads 150
707 Investigating Constructions and Operation of Internal Combustion Engine Water Pumps
Authors: Michał Gęca, Konrad Pietrykowski, Grzegorz Barański
Abstract:
The water pump in a compression-ignition internal combustion engine transports hot coolant along a system of ducts from the engine block to the radiator, where the coolant temperature is lowered. This part needs to maintain a constant volumetric flow rate, and its power should be regulated to avoid a significant drop in pressure if the coolant flow decreases. Internal combustion engine cooling systems use centrifugal pumps. The paper investigates four designs of engine water pumps, all from a diesel engine with a maximum power of 75 kW. Each has a different rotor shape, diameter and width. A test stand was created and the geometry inside all four engine blocks was mapped. For a given pump speed, set on the inverter of the electric drive motor, the valve position was changed and the volumetric flow rate, pressure, and power were recorded. Pump speed was varied from 1200 RPM to 7000 RPM in steps of 300 RPM. The volumetric flow rates, pressure drops and efficiencies were determined for each pump speed, and the operating maps of each pump were thus obtained. The aim of our research was to select a pump for an aircraft compression-ignition engine. The pressure drop at a given flow through the block and radiator of the designed aircraft engine was calculated. The water pump should be lightweight and have a low power demand, which affects the shape of the rotor and the bearings. The pump flow rate was assumed to be 3 kg/s (from a previous AVL BOOST research model), with a temperature difference of 5°C between the inlet (90°C) and outlet (95°C). Increasing the pump speed beyond the boundary flow power defined by pressure and volumetric flow rate does not increase it; instead, pump efficiency decreases. The maximum total pump efficiency (PCC) is 45-50%. When the pump is driven at low speeds with the valve 90% closed, its overall efficiency drops to 15-20%. Acknowledgement: This work was realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A.
and is part of Grant Agreement No. POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development.
Keywords: aircraft engine, diesel engine, flow, water pump
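The assumed design point (3 kg/s, 5 °C rise) fixes the heat the coolant carries; a back-of-the-envelope sketch also estimates the shaft power at the quoted peak efficiency. The pressure drop and water density below are illustrative assumptions, not measured values from the paper.

```python
# Heat carried by the coolant at the stated design point.
cp_water = 4180.0          # J/(kg*K), approximate for hot water
m_dot = 3.0                # kg/s, assumed flow from the AVL BOOST model
delta_t = 5.0              # K, 90 C inlet to 95 C outlet
heat_carried = m_dot * cp_water * delta_t   # W removed from the engine

# Illustrative pump shaft power: P = dp * V_dot / eta.
rho = 965.0                # kg/m^3, water near 90 C (assumption)
delta_p = 50_000.0         # Pa circuit pressure drop (assumption)
efficiency = 0.45          # lower end of the quoted 45-50% peak efficiency
v_dot = m_dot / rho                          # m^3/s volumetric flow
shaft_power = delta_p * v_dot / efficiency   # W at the pump shaft
```

The heat load comes out near 63 kW, which shows why the 5 °C temperature split matters far more to the cooling duty than the few hundred watts the pump itself consumes.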
Procedia PDF Downloads 252
706 Evaluation of Double Displacement Process via Gas Dumpflood from Multiple Gas Reservoirs
Authors: B. Rakjarit, S. Athichanagorn
Abstract:
The double displacement process is a method in which gas is injected at an updip well to displace the oil bypassed by a waterflooding operation from a downdip water injector. As gas injection is costly and a large amount of gas is needed, gas dump-flood from multiple gas reservoirs is an attractive alternative. The objective of this paper is to demonstrate the benefits of this novel approach: a double displacement process via gas dump-flood from multiple gas reservoirs. A reservoir simulation model consisting of a dipping oil reservoir and several underlying layered gas reservoirs was constructed in order to investigate the performance of the proposed method. Initially, water was injected via the downdip well to displace oil towards the producer located updip. When the water cut at the producer became high, the updip well was shut in and perforated in the gas zones in order to dump gas into the oil reservoir. At this point, the downdip well was opened for production. In order to optimize oil recovery, the oil production and water injection rates and the perforation strategy for the gas reservoirs were investigated for different numbers of gas reservoirs having various depths and thicknesses. Gas dump-flood from multiple gas reservoirs can increase the oil recovery after waterflooding by up to 10%. Although this additional recovery is slightly lower than that obtained in the conventional double displacement process, the proposed process requires only a small completion cost for the gas zones and no operating cost, while the conventional method incurs high capital investment in gas compression facilities and high-pressure gas pipelines as well as additional operating cost. From the simulation study, oil recovery can be optimized by producing oil at a suitable rate and perforating the gas zones with the right strategy, which depends on the depths, thicknesses and number of the gas reservoirs.
The conventional double displacement process has been studied and successfully implemented in many fields around the world. However, dumping gas into the oil reservoir instead of injecting it from the surface during the second displacement has never been studied. The study of this novel approach will help practicing engineers understand its benefits and implement it at minimum cost.
Keywords: gas dump-flood, multi-gas layers, double displacement process, reservoir simulation
Procedia PDF Downloads 408
705 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques
Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan
Abstract:
Proper well planning is vital to the success of any drilling program: it prevents or overcomes drilling problems and minimizes operating costs. Since the hydraulic system plays an active role during drilling operations, it can accelerate the drilling effort and lower the overall well cost. Conversely, an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables through applied drilling and hydraulic equations and mathematical modeling to achieve maximum drilling efficiency at minimum drilling cost. The practical software adopted in this paper defines drilling optimization models with four different optimization keys, namely Opti-flow, Opti-clean, Opti-slip and Opti-nozzle, that can help achieve high drilling efficiency at lower cost. The data used in this research are from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are: formation type, geopressures, hole geometry, bottom-hole assembly and mud rheology. Upon data analysis, the results for all wells show that the proposed program provides higher accuracy than the company's existing approach in terms of hole-cleaning efficiency and cost breakdown, taking the actual data as the reference base for all wells. Finally, it is recommended to use the established optimization software at the drilling design stage to derive correct drilling parameters that provide high drilling efficiency, good borehole cleaning and all other hydraulic parameters, which helps to minimize hole problems and control drilling operation costs.
Keywords: optimum keys, opti-flow, opti-clean, opti-slip, opti-nozzle
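One basic hole-cleaning quantity any such hydraulics software computes is the annular velocity of the mud, which must exceed the cuttings slip velocity for the hole to clean. The sketch below uses hypothetical hole geometry and pump rate, not the Waha field data or the paper's Opti-clean formulation.

```python
import math

def annular_velocity(flow_lpm, hole_d_mm, pipe_d_mm):
    """Annular velocity (m/min) of drilling fluid: pump rate divided by
    the annular cross-sectional area between hole wall and drill pipe."""
    area_m2 = math.pi / 4 * ((hole_d_mm / 1000.0) ** 2
                             - (pipe_d_mm / 1000.0) ** 2)
    return (flow_lpm / 1000.0) / area_m2   # L/min -> m^3/min, then / area

# Hypothetical example: 12 1/4" hole, 5" drill pipe, 2000 L/min pump rate.
v = annular_velocity(flow_lpm=2000.0, hole_d_mm=311.15, pipe_d_mm=127.0)
```

In practice this velocity is compared against a cuttings slip-velocity model (rheology-dependent) to decide whether the pump rate cleans the hole.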
Procedia PDF Downloads 319
704 System Dietadhoc® - A Fusion of Human-Centred Design and Agile Development for the Explainability of AI Techniques Based on Nutritional and Clinical Data
Authors: Michelangelo Sofo, Giuseppe Labianca
Abstract:
In recent years, the scientific community's interest in the exploratory analysis of biomedical data has increased exponentially. In the field of nutritional biology, the curative process, based on the analysis of clinical data, is a very delicate operation because there are multiple solutions for the management of pathologies in the food sector (for example, intolerances and allergies, management of cholesterol metabolism, diabetic pathologies, arterial hypertension, and even obesity and breathing and sleep problems). In this research work, a system was created that is capable of evaluating various dietary regimes for specific patient pathologies. The system is founded on a mathematical-numerical model and has been tailored to the real working needs of an expert in human nutrition using human-centered design (ISO 9241-210); it therefore keeps pace with continuous scientific progress in the field and evolves through the experience of managed clinical cases (a machine learning process). DietAdhoc® is a decision support system for nutrition specialists treating patients of both sexes (from 18 years of age), developed with an agile methodology. Its task is to draw up the biomedical and clinical profile of the specific patient by applying two algorithmic optimization approaches to nutritional data, together with a symbolic solution obtained by transforming the relational database underlying the system into a deductive database. For all three solution approaches, particular emphasis has been given to the explainability of the suggested clinical decisions through flexible and customizable user interfaces.
Furthermore, the system has multiple software modules based on time series and visual analytics techniques that allow the complete picture of the situation, and the evolution of the diet assigned for specific pathologies, to be evaluated.
Keywords: medical decision support, physiological data extraction, data driven diagnosis, human centered AI, symbiotic AI paradigm
Procedia PDF Downloads 23
703 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation
Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau
Abstract:
In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space-time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package ‘lori’ (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.
Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa
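The general idea of Poisson-model-based count imputation can be illustrated with a toy additive model (log-mean = site effect + year effect), fitted on the observed cells only and then used to fill the missing ones. This is a deliberately simplified sketch in the spirit of, but much simpler than, the penalized 'lori' approach; the data are synthetic, not waterbird counts.

```python
import numpy as np

# Synthetic site x year count matrix with ~30% of cells missing at random.
rng = np.random.default_rng(1)
n_sites, n_years = 8, 10
site_eff = rng.normal(2.0, 0.5, n_sites)
year_eff = rng.normal(0.0, 0.3, n_years)
true_lam = np.exp(site_eff[:, None] + year_eff[None, :])
counts = rng.poisson(true_lam).astype(float)
mask = rng.random(counts.shape) > 0.3      # True = observed cell

# Alternating closed-form updates (coordinate ascent on the Poisson
# log-likelihood of the observed cells) for the additive log-linear model.
a = np.zeros(n_sites)
b = np.zeros(n_years)
for _ in range(200):
    mu = np.exp(a[:, None] + b[None, :])
    a += np.log((counts * mask).sum(axis=1) / (mu * mask).sum(axis=1))
    mu = np.exp(a[:, None] + b[None, :])
    b += np.log((counts * mask).sum(axis=0) / (mu * mask).sum(axis=0))

# Keep observed counts; fill missing cells with the fitted Poisson means.
imputed = np.where(mask, counts, np.exp(a[:, None] + b[None, :]))
```

The real method adds penalization and space-time interaction terms, which matter when the matrix is large and sparsely observed; this sketch only shows the imputation mechanic.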
Procedia PDF Downloads 156
702 Effect of 16 Weeks Walking with Different Dosages on Psychosocial Function Related Quality of Life among 60 to 75 Years Old Men
Authors: Mohammad Ehsani, Elham Karimi, Hashem Koozechian
Abstract:
Aim: The purpose of this semi-experimental study was to examine the effect of 16 weeks of walking on psychosocial function related quality of life among men aged 60 to 75 years. Methodology: The short form of the health-related quality of life questionnaire (SF-36) and the Geriatric Depression Scale (GDS) were administered to the subjects at two points, pre-test and post-test. The statistical sample consisted of men aged 60 to 75 years residing at the Kahrizak house, assessed for physical condition and medical background. Inclusion criteria were: age range, consent and willingness to participate in the walking programme, absence of diabetes, cardiovascular disease and parkinsonism, absence of postural, neurological or musculoskeletal disorders, no clinical history of visual or equilibrium disorders, no motor limitation or footprint disorders, no recent surgery, and adequate mental health. After the preliminary screening, 80 persons were selected and randomly assigned to three experimental groups (1, 2 or 3 sessions per week, 30 minutes of moderate-intensity walking per session) and one control group (no physical activity during the 16 weeks). Data were analysed using ANOVA, the Pearson coefficient and Scheffé post-hoc tests at a significance level of p < 0.05. Results: The results showed that the psychosocial function of men aged 60 to 75 years improved with 16 weeks of walking, and that more exercise sessions led to greater effectiveness, although there was no significant difference in psychosocial function between the one-session and three-session experimental groups (p > 0.05). Conclusion: Based on these results, regular walking at an efficient, standard dosage can increase the quality of life of elderly people.
Furthermore, based on these results, the design and implementation of regular walking programmes for elderly men, following a specific, logical and systematic pattern under the supervision of qualified coaches, is recommended.
Keywords: walking, quality of life, psychosocial function, elders
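As a sketch of the kind of ANOVA comparison described above (illustrative only; the groups below are invented, not the study's data), the one-way F-statistic can be computed directly:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F-statistic: between-group mean square divided by
    within-group mean square, for k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The resulting F value is then compared against the F distribution with (k - 1, n - k) degrees of freedom at the chosen significance level (p < 0.05 in the study).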
Procedia PDF Downloads 590
701 A Methodology for Developing New Technology Ideas to Avoid Patent Infringement: F-Term Based Patent Analysis
Authors: Kisik Song, Sungjoo Lee
Abstract:
With the growing importance of intangible assets, the impact of patent infringement on the business of a company has become more evident. Accordingly, it is essential for firms to estimate the risk of patent infringement before developing a technology and to create new technology ideas that avoid the risk. Recognizing these needs, several attempts have been made to help develop new technology opportunities, and most of them have focused on identifying emerging vacant technologies through patent analysis. In these studies, the IPC (International Patent Classification) system or keywords obtained by text-mining patent documents were generally used to define vacant technologies. Unlike those studies, this study adopted F-term, which classifies patent documents according to the technical features of the inventions described in them. Since F-term analyzes technical features from various perspectives, it provides more detailed information about technologies than IPC and more systematic information than keywords. Therefore, if well utilized, it can be a useful guideline for creating new technology ideas. Recognizing the potential of F-term, this paper aims to suggest a novel approach to developing new technology ideas that avoid patent infringement based on F-term. For this purpose, we first collected data about F-term and then applied text-mining to the descriptions of classification criteria and attributes. From the text-mining results, we could identify other technologies with technical features similar to those of the existing, patented technology. Finally, we compare the technologies and extract the technical features that are commonly used in the other technologies but have not been used in the existing one.
These features are presented in terms of “purpose”, “function”, “structure”, “material”, “method”, “processing and operation procedure” and “control means”, and so are useful for creating new technology ideas that avoid infringing the patent rights of other companies. Theoretically, this is one of the earliest attempts to adopt F-term in patent analysis; the proposed methodology shows how best to take advantage of F-term and its wealth of technical information. In practice, the proposed methodology can be valuable in the ideation process for successful product and service innovation without infringing the patents of other companies.
Keywords: patent infringement, new technology ideas, patent analysis, F-term
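The comparison step, finding technologies whose F-term feature sets overlap with the focal patent and pooling the features it lacks, can be sketched with plain set operations (the Jaccard measure, the 0.4 threshold and the feature codes are illustrative assumptions, not taken from the paper):

```python
def jaccard(a, b):
    """Overlap of two feature-code sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def candidate_features(focal, others, threshold=0.4):
    """Pool the F-term features of technologies similar to the focal
    patent, then keep only the features the focal patent does not use."""
    similar = [f for f in others if jaccard(focal, f) >= threshold]
    pooled = set().union(*similar) if similar else set()
    return pooled - focal
```

Each returned feature code (for example a "material" or "control means" term) is a seed for a new technology idea that stays clear of the focal patent's claims.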
Procedia PDF Downloads 269
700 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS
Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan
Abstract:
Analysis of stresses plays an important role in the optimization of structures, and prior stress estimation helps in better design of products. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio. Especially in the aircraft industry, composites are used extensively because of their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. Composite materials have the drawback of delamination and debonding because the bond materials are weaker than the parent materials, so composite joints should be analysed properly before use in practical conditions. In the present work, a composite plate with an elastic pin is considered for analysis using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered solid element SOLID46 for the composite plate and the solid element SOLID45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter of the geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on joint strength: the deflection and load sharing of the pin increase, while other parameters such as overall stress, pin stress and contact pressure reduce due to the lesser load on the plate material. Furthermore, the material effect shows that a higher Young's modulus material deflects less, but the other parameters increase. Interference analysis shows increases in overall stress, pin stress and contact stress along with pin bearing load. This increase should be understood properly when increasing the load-carrying capacity of the joint.
Generally, every structure is preloaded to increase the compressive stress in the joint and thus its load-carrying capacity. For composites, however, this stress increase should be analysed carefully because of delamination and debonding caused by failure of the bond materials. When the results for an isotropic combination are compared with the composite joint, the isotropic joint shows more uniform results with lower values for all parameters, mainly due to the applied layer angle combinations. All results are presented with the necessary pictorial plots.
Keywords: bearing force, frictional force, finite element analysis, ANSYS
Procedia PDF Downloads 334
699 Exploratory Analysis and Development of Sustainable Lean Six Sigma Methodologies Integration for Effective Operation and Risk Mitigation in Manufacturing Sectors
Authors: Chukwumeka Daniel Ezeliora
Abstract:
The Nigerian manufacturing sector plays a pivotal role in the country's economic growth and development. However, it faces numerous challenges, including operational inefficiencies and inherent risks that hinder its sustainable growth. This research aims to address these challenges by exploring the integration of Lean and Six Sigma methodologies into the manufacturing processes, ultimately enhancing operational effectiveness and risk mitigation. The core of this research involves the development of a sustainable Lean Six Sigma framework tailored to the specific needs and challenges of Nigeria's manufacturing environment. This framework aims to streamline processes, reduce waste, improve product quality, and enhance overall operational efficiency. It incorporates principles of sustainability to ensure that the proposed methodologies align with environmental and social responsibility goals. To validate the effectiveness of the integrated Lean Six Sigma approach, case studies and real-world applications within selected manufacturing companies in Nigeria were conducted, and data were collected to measure the impact of the integration on key performance indicators, such as production efficiency, defect reduction, and risk mitigation. The findings from this research provide valuable insights and practical recommendations for selected manufacturing companies in South East Nigeria. By adopting sustainable Lean Six Sigma methodologies, these organizations can optimize their operations, reduce operational risks, improve product quality, and enhance their competitiveness in the global market. In conclusion, this research aims to bridge the gap between theory and practice by developing a comprehensive framework for the integration of Lean and Six Sigma methodologies in Nigeria's manufacturing sector.
This integration is envisioned to contribute significantly to the sector's sustainable growth, improved operational efficiency, and effective risk mitigation strategies, ultimately benefiting the Nigerian economy as a whole.
Keywords: lean six sigma, manufacturing, risk mitigation, sustainability, operational efficiency
Procedia PDF Downloads 207
698 Life-Cycle Cost and Life-Cycle Assessment of Photovoltaic/Thermal Systems (PV/T) in Swedish Single-Family Houses
Authors: Arefeh Hesaraki
Abstract:
The application of photovoltaic-thermal hybrids (PVT), which deliver both electricity and heat simultaneously from the same system, has become more popular during the past few years. This study addresses the techno-economic and environmental impact assessment of photovoltaic/thermal systems combined with a ground-source heat pump (GSHP) for three single-family houses located in Stockholm, Sweden. The three case studies were: (1) a renovated building built in 1936, (2) a renovated building built in 1973, and (3) a new building built in 2013. Two simulation programs, SimaPro 9.1 and IDA Indoor Climate and Energy 4.8 (IDA ICE), were applied to analyze environmental impacts and energy usage, respectively. The cost-effectiveness of the system was evaluated using the net present value (NPV), internal rate of return (IRR), and discounted payback time (DPBT) methods. In addition to cost payback time, the studied PVT system was evaluated using the energy payback time (EPBT) method. EPBT is the time needed for the installed system to generate the same amount of energy that was utilized during the whole life cycle (fabrication, installation, transportation, and end-of-life) of the system itself. Energy calculation with IDA ICE showed that a 5 m² PVT was sufficient to create a balance between the maximum heat production and the domestic hot water consumption during the summer months for all three case studies. The techno-economic analysis revealed that combining a 5 m² PVT with GSHP in the second case study gave the smallest DPBT and the highest NPV and IRR among the three case studies: the DPBTs (IRR) were 10.8 years (6%), 12.6 years (4%), and 13.8 years (3%) for the second, first, and third case studies, respectively.
Moreover, the environmental assessment of embodied energy over the cradle-to-grave life cycle of the studied PVT (fabrication, delivery of energy and raw materials, the manufacturing process, installation, transportation, the operation phase, and end of life) revealed an EPBT of approximately two years in all cases.
Keywords: life-cycle cost, life-cycle assessment, photovoltaic/thermal, IDA ICE, net present value
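The three economic criteria used above follow their textbook definitions; a minimal sketch (the cash flows below are invented, not the study's Swedish data):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def discounted_payback(rate, cashflows):
    """First period in which the cumulative discounted cash flow turns
    non-negative, or None if it never does."""
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf / (1 + rate) ** t
        if cum >= 0:
            return t
    return None

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection; assumes NPV is positive at
    lo and negative at hi (conventional investment cash flows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

DPBT as reported in the abstract is the discounted payback evaluated at the study's discount rate, applied to the yearly savings from the PVT + GSHP combination.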
Procedia PDF Downloads 115
697 Intermittent Effect of Coupled Thermal and Acoustic Sources on Combustion: A Spatial Perspective
Authors: Pallavi Gajjar, Vinayak Malhotra
Abstract:
Rockets have played a predominant role in spacecraft propulsion. The quintessential combustion-related requirement of a rocket engine is the minimization of the surrounding risks and hazards. Over time, it has become imperative to understand the combustion rate variation in the presence of external energy source(s). Rocket propulsion represents a special domain of chemical propulsion assisted by high-speed flows in the presence of acoustic and thermal source(s). Jet noise leads to a significant loss of resources, and every year a huge amount of money is spent to prevent it. External heat source(s) induce a high possibility of fire hazards that can endanger the operation of a space vehicle. Appreciable work has been done with justifiable simplifications and an emphasis on the linear variation of external energy source(s), which yields good physical insight but does not provide accurate predictions. The present work experimentally attempts to understand the correlation between inter-energy conversions and the non-linear placement of external energy source(s). The work is motivated by the need for better fire safety and enhanced combustion. The specific objectives of the work are (a) to interpret the related energy transfer for combustion in the presence of alternate external energy sources, viz. thermal and acoustic, and (b) to fundamentally understand the role of key controlling parameters, viz. separation distance, the number of source(s), selected configurations and their non-linear variation to resemble real-life cases. An experimental setup was prepared using incense sticks as the fuel and paraffin wax candles as the external energy source(s). The acoustics was generated using a frequency generator, and the source(s) were placed at selected locations. Non-equidistant parametric experimentation was carried out, and the effects on the regression rate were noted.
The results are expected to be very helpful in offering a new perspective into futuristic rocket designs and safety.
Keywords: combustion, acoustic energy, external energy sources, regression rate
Procedia PDF Downloads 141
696 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other side, most FE programs can run nonlinear analysis in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the Modal Database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems.
The first one is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber
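For orientation, the linear baseline that the NL SDMM would update can be written down for a two-DOF chain as H(w) = (K + i*w*C - w^2*M)^-1. A sketch with illustrative parameter values (this shows only the linear FRF step, not the paper's nonlinear updating itself):

```python
def frf_2dof(m, c, k, omegas):
    """|H11(w)| for a 2-DOF chain ground--k1--m1--k2--m2 with dampers in
    parallel with the springs; H(w) is the inverse of the dynamic
    stiffness Z(w) = K + i*w*C - w^2*M (symmetric 2x2, complex)."""
    m1, m2 = m
    c1, c2 = c
    k1, k2 = k
    out = []
    for w in omegas:
        z11 = (k1 + k2) + 1j * w * (c1 + c2) - w * w * m1
        z12 = -k2 - 1j * w * c2
        z22 = k2 + 1j * w * c2 - w * w * m2
        det = z11 * z22 - z12 * z12
        out.append(abs(z22 / det))  # H11 by Cramer's rule on the 2x2
    return out
```

With unit masses and springs and light damping, the static value is 1/k1 and the response peaks near the natural frequencies at roughly 0.62 and 1.62 rad/s; a localized nonlinearity would then perturb Z(w) at the affected DOF only, which is exactly the situation the method exploits.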
Procedia PDF Downloads 266
695 Strategies For Management Of Massive Intraoperative Airway Haemorrhage Complicating Surgical Pulmonary Embolectomy
Authors: Nicholas Bayfield, Liam Bibo, Kaushelandra Rathore, Lucas Sanders, Mark Newman
Abstract:
INTRODUCTION: Surgical pulmonary embolectomy is an established therapy for acute pulmonary embolism causing right heart dysfunction and haemodynamic instability. Massive intraoperative airway haemorrhage is a rare complication of pulmonary embolectomy. We present our institutional experience with massive airway haemorrhage complicating pulmonary embolectomy and discuss optimal therapeutic strategies. METHODS: A retrospective review of emergent surgical pulmonary embolectomy patients was undertaken. Cases complicated by massive intra-operative airway haemorrhage were identified. Intra- and peri-operative management strategies were analysed and discussed. RESULTS: Of 76 patients undergoing emergent or salvage pulmonary embolectomy, three cases (3.9%) of massive intraoperative airway haemorrhage were identified. Haemorrhage always began on weaning from cardiopulmonary bypass. Successful management strategies involved intraoperative isolation of the side of bleeding, occluding the affected airway with an endobronchial blocker, institution of veno-arterial (VA) extracorporeal membrane oxygenation (ECMO) and reversal of anticoagulation. Running the ECMO without heparinisation allows coagulation to occur. Airway haemorrhage was controlled within 24 hours of operation in all patients, allowing re-institution of dual lung ventilation and decannulation from ECMO. One case in which positive end-expiratory airway pressure was trialled initially was complicated by air embolism. Although airway haemorrhage was controlled successfully in all cases, all patients died in-hospital for reasons unrelated to the airway haemorrhage. CONCLUSION: Massive intraoperative airway haemorrhage during pulmonary embolectomy is a rare complication with potentially catastrophic outcomes. Re-perfusion alveolar and capillary injury is the likely aetiology. With a systematic approach to management, airway haemorrhage can be well controlled intra-operatively and often resolves within 24 hours. 
Stopping blood flow to the pulmonary arteries and supporting oxygenation by the institution of VA ECMO are important. This management has been successful in our three cases.
Keywords: pulmonary embolectomy, cardiopulmonary bypass, cardiac surgery, pulmonary embolism
Procedia PDF Downloads 176
694 Dynamic Analysis of Commodity Price Fluctuation and Fiscal Management in Sub-Saharan Africa
Authors: Abidemi C. Adegboye, Nosakhare Ikponmwosa, Rogers A. Akinsokeji
Abstract:
For many resource-rich developing countries, fiscal policy has become a key tool for short-run fiscal management, since it is considered to play a critical role in injecting part of resource rents into the economy. However, given its instability, reliance on revenue from commodity exports renders fiscal management, budgetary planning and the efficient use of public resources difficult. In this study, the linkage between commodity prices and fiscal operations among a sample of commodity-exporting countries in sub-Saharan Africa (SSA) is investigated. The main question is whether commodity price fluctuations affect the effectiveness of fiscal policy as a macroeconomic stabilization tool in these countries. Fiscal management effectiveness is defined as the ability of fiscal policy to react countercyclically to output gaps in the economy. Fiscal policy is measured as the ratio of the fiscal deficit to GDP and the ratio of government spending to GDP; the output gap is measured with a Hodrick-Prescott filter of output growth for each country, and commodity prices are associated with each country based on its main export commodity. Given the dynamic nature of fiscal policy effects on the economy over time, a dynamic framework is devised for the empirical analysis. The panel cointegration and error correction methodology is used to explain the relationships. In particular, the study employs the panel ECM technique to trace the short-term effects of commodity prices on fiscal management and uses the fully modified OLS (FMOLS) technique to determine the long-run relationships. These procedures provide sufficient estimation of the dynamic effects of commodity prices on fiscal policy. The data cover the period 1992 to 2016 for 11 SSA countries. The study finds that the elasticity of the fiscal policy measures with respect to the output gap is significant and positive, suggesting that fiscal policy is actually procyclical among the countries in the sample.
This implies that fiscal management in these countries follows the trend of economic performance. Moreover, fiscal policy is found not to have performed well in delivering macroeconomic stabilization for these countries. The difficulty in applying fiscal stabilization measures is attributable to unstable revenue inflows caused by the highly volatile nature of commodity prices in the international market. For commodity-exporting countries in SSA to improve fiscal management, therefore, fiscal planning should be largely decoupled from commodity revenues, domestic revenue bases must be improved, and longer-period perspectives in fiscal policy management should be adopted.
Keywords: commodity prices, ECM, fiscal policy, fiscal procyclicality, fully modified OLS, Sub-Saharan Africa
Procedia PDF Downloads 164
693 Preparation and Characterization of Biosorbent from Cactus (Opuntia ficus-indica) cladodes and its Application for Dye Removal from Aqueous Solution
Authors: Manisha Choudhary, Sudarsan Neogi
Abstract:
Malachite green (MG), an organic basic dye, has been widely used for dyeing, as well as a fungicide and antiseptic in the aquaculture industry to control fish parasites and disease. However, MG has now become an extremely controversial compound due to its adverse impact on living beings. Given its high toxicity, proper treatment of wastewater containing MG is of utmost importance. Among the available technologies, adsorption is one of the most efficient and cost-effective treatment methods due to its simplicity of design, ease of operation and the possibility of regenerating used materials. Nonetheless, commercial activated carbon is expensive, leading researchers to focus on utilizing natural resources. In the present work, a species of cactus, Opuntia ficus-indica (OFI), was used to develop a highly efficient, low-cost powdered activated carbon by chemical activation with NaOH. The biosorbent was characterized by Fourier-transform infrared spectroscopy, field emission scanning electron microscopy, energy-dispersive X-ray spectroscopy, Brunauer–Emmett–Teller (BET) and X-ray diffraction analysis. Batch adsorption studies were performed to remove MG from aqueous solution as a function of contact time, initial solution pH, initial dye concentration, biosorbent dosage, the presence of salt and temperature. By increasing the initial dye concentration from 100 to 500 mg/l, the adsorption capacity increased from 165.45 to 831.58 mg/g. The adsorption kinetics followed the pseudo-second-order model, and chemisorption mechanisms were revealed. Electrostatic attractions and chemical interactions were observed between the amino and hydroxyl groups of the biosorbent and the amine groups of the dye. The adsorption was solely controlled by film diffusion. Different isotherm models were used to fit the adsorption data.
The excellent recovery of adsorption efficiency after regeneration indicated the high potential of this biosorbent to remove MG from aqueous solution, making it an excellent cost-effective biosorbent for wide application in wastewater treatment.
Keywords: adsorption, biosorbent, cactus, malachite green
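The pseudo-second-order model named above is commonly fitted through its linearized form t/q_t = 1/(k2*qe^2) + t/qe, where qe is the equilibrium adsorption capacity and k2 the rate constant. A least-squares sketch on synthetic data (not the paper's measurements):

```python
def pseudo_second_order_fit(times, q):
    """Fit t/q = 1/(k2*qe**2) + t/qe by ordinary least squares on
    (t, t/q); the slope gives 1/qe, the intercept gives 1/(k2*qe**2)."""
    y = [t / qt for t, qt in zip(times, q)]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    slope = (sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = y_mean - slope * t_mean
    qe = 1 / slope
    k2 = 1 / (intercept * qe ** 2)
    return qe, k2
```

Data generated exactly from qe = 800 mg/g and k2 = 0.001 via q(t) = qe^2*k2*t / (1 + qe*k2*t) are recovered to machine precision, since t/q is then exactly linear in t.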
Procedia PDF Downloads 374
692 The Analysis of Increment of Road Traffic Accidents in Libya: Case Study City of Tripoli
Authors: Fares Elturki, Shaban Ismael Albrka Ali Zangena, H. A. M. Yahia
Abstract:
Safety is an important consideration in the design and operation of streets and highways. Traffic and highway engineers, working with law enforcement officials, are constantly seeking better methods to ensure safety for motorists and pedestrians. A highway safety improvement process involves planning, implementation, and evaluation; the planning process requires that engineers collect and maintain traffic safety data, identify hazardous locations, conduct studies and establish project priorities. Unfortunately, in Libya, the increase in demand for private transportation in recent years, due to poor or lacking public transportation, has led to traffic problems, especially in the capital (Tripoli). The growth of private transportation also has significant effects on society in terms of road traffic accidents (RTAs). This study investigates the most critical factors affecting RTAs in Tripoli, the capital city of Libya. Four main classifications were chosen to build the questionnaire: human factors, road factors, vehicle factors and environmental factors. A quantitative method was used to collect data from the field, with a targeted sample size of 400 respondents including drivers, pedestrians and passengers; the relative importance index (RII) was used to rank the factors within each group and across all groups. The results show that human factors have the most significant impact compared with the other factors: 84% of respondents considered over-speeding the most significant cause of RTAs, while 81% considered disobedience to driving regulations the second most influential human factor. The results also showed that poor brakes or brake failure had a great impact on RTAs among the vehicle factors (nearly 74%), while 79% rated poor or absent street lighting as one of the most influential road factors and the third most influential factor overall.
The environmental factors had the slightest influence compared with the other factors.
Keywords: road traffic accidents, Libya, vehicle factors, human factors, relative importance index
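The relative importance index used for the ranking is conventionally computed as RII = sum(W) / (A * N), where W are the individual Likert ratings, A is the highest point on the scale and N the number of respondents. A minimal sketch with invented ratings (the factor names and scores below are illustrative, not the study's survey data):

```python
def relative_importance_index(ratings, scale_max=5):
    """RII = sum(W) / (A * N) for ratings on a 1..scale_max scale."""
    return sum(ratings) / (scale_max * len(ratings))

def rank_factors(factor_ratings, scale_max=5):
    """Return (factor, RII) pairs sorted from most to least important."""
    rii = {name: relative_importance_index(r, scale_max)
           for name, r in factor_ratings.items()}
    return sorted(rii.items(), key=lambda kv: kv[1], reverse=True)
```

RII lies in (0, 1], so factors can be ranked within one group and compared across groups on the same scale, as done in the study.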
Procedia PDF Downloads 279
691 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information of radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. Standard rest and stress radionuclide doses of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve, CFR, was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: Territorial MBF in the LAD, RCA and LCX ranged from 0.44 ml/min/g to 3.81 ml/min/g. MBF estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), but the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
Keywords: coronary flow reserve, dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin
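The CFR definition in the abstract reduces to a per-segment ratio; a trivial sketch (the MBF values below are illustrative, and the 2.0 cutoff for a reduced CFR is a commonly cited clinical threshold used here only as an assumption, not a value from the study):

```python
def coronary_flow_reserve(stress_mbf, rest_mbf):
    """Per-segment CFR = stress MBF / rest MBF (ml/min/g cancels out)."""
    return [s / r for s, r in zip(stress_mbf, rest_mbf)]

def reduced_segments(cfr, cutoff=2.0):
    """Indices of segments whose CFR falls below the chosen cutoff."""
    return [i for i, v in enumerate(cfr) if v < cutoff]
```

Because CFR is a ratio, it is insensitive to a global scaling error in the MBF estimates, which is one reason it complements the absolute flow values.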
Procedia PDF Downloads 151
690 Some Quality Parameters of Selected Maize Hybrids from Serbia for the Production of Starch, Bioethanol and Animal Feed
Authors: Marija Milašinović-Šeremešić, Valentina Semenčenko, Milica Radosavljević, Dušanka Terzić, Ljiljana Mojović, Ljubica Dokić
Abstract:
Maize (Zea mays L.) is one of the most important cereal crops and, as such, one of the most significant naturally renewable carbohydrate raw materials for the production of energy and a multitude of different products. The main goal of the present study was to investigate the suitability of selected maize hybrids of different genetic backgrounds, produced at the Maize Research Institute ‘Zemun Polje’, Belgrade, Serbia, for starch, bioethanol and animal feed production. All the hybrids are commercial, and their detailed characterization is important for expanding their different uses. The starches were isolated using a 100-g laboratory maize wet-milling procedure. Hydrolysis experiments were done in two steps (liquefaction with Termamyl SC and saccharification with SAN Extra L). Starch hydrolysates obtained by the two-step hydrolysis of the corn flour starch were subjected to fermentation by S. cerevisiae var. ellipsoideus under semi-anaerobic conditions. The digestibility based on enzymatic solubility was determined by the Aufréré method. All investigated ZP maize hybrids had very different physical characteristics and chemical compositions, which allow various possibilities for their use. The amount of hard (vitreous) and soft (floury) endosperm in the kernel is considered one of the most important parameters that can influence the starch and bioethanol yields. Hybrids with a lower test weight and density and a greater proportion of the soft endosperm fraction had a higher yield, recovery and purity of starch. Among the chemical composition parameters, only starch content significantly affected the starch yield. Starch yields of the studied maize hybrids ranged from 58.8% in ZP 633 to 69.0% in ZP 808. The lowest bioethanol yield of 7.25% w/w was obtained for hybrid ZP 611k and the highest for hybrid ZP 434 (8.96% w/w). A very significant correlation was determined between kernel starch content and bioethanol yield, as well as volumetric productivity (48 h) (r = 0.66).
The results showed that the NDF, ADF and ADL contents in the whole maize plant of the observed ZP maize hybrids varied from 40.0% to 60.1%, 18.6% to 32.1%, and 1.4% to 3.1%, respectively. The difference in whole-plant dry matter digestibility between the hybrids ZP 735 and ZP 560 amounted to 18.1%. Moreover, differences in the contents of the lignocellulose fractions drove the differences in dry matter digestibility. From these results it can be concluded that the genetic background of the selected maize hybrids plays an important part in estimating their technological value for various purposes. The results obtained are of exceptional importance for breeding programs and for selecting the maize hybrids potentially most suitable for starch, bioethanol and animal feed production.
Keywords: bioethanol, biomass quality, maize, starch
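The reported correlation between kernel starch content and bioethanol yield (r = 0.66) is a standard Pearson coefficient; a minimal sketch of that computation, using hypothetical hybrid values rather than the measured ZP data:

```python
import numpy as np

# Hypothetical kernel starch contents (%) and bioethanol yields (% w/w),
# for illustration only -- NOT the measured ZP hybrid values.
starch = np.array([64.1, 66.8, 69.3, 71.5, 73.0])
ethanol = np.array([7.25, 7.80, 8.10, 8.55, 8.96])

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

r = pearson_r(starch, ethanol)
print(f"r = {r:.2f}")
```

The same routine applied to the measured starch contents and yields would reproduce the study's r = 0.66.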
Procedia PDF Downloads 222
689 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics
Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty
Abstract:
Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that the reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of this method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality-control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma.
The method described in our article uses a simple protein-precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast, high-throughput sample analysis at low cost for analyzing aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC
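The linearity check and back-calculation of unknowns in such a validated method amount to a least-squares calibration; a minimal sketch over the validated 0.5-40.0 µg/mL range, with hypothetical peak-area ratios (not the published data):

```python
import numpy as np

# Hypothetical calibration standards spanning the validated 0.5-40.0 ug/mL
# range; the analyte/internal-standard peak-area ratios are illustrative.
conc = np.array([0.5, 1.0, 5.0, 10.0, 20.0, 40.0])            # ug/mL
ratio = np.array([0.021, 0.043, 0.212, 0.425, 0.848, 1.701])  # peak-area ratio

# Linear calibration: ratio = slope * conc + intercept
slope, intercept = np.polyfit(conc, ratio, 1)

# Coefficient of determination (r^2) for the fit.
fit = slope * conc + intercept
ss_res = ((ratio - fit) ** 2).sum()
ss_tot = ((ratio - ratio.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot

# Back-calculate an unknown plasma sample from its measured peak-area ratio.
unknown_ratio = 0.60
unknown_conc = (unknown_ratio - intercept) / slope
```

With real calibration data, a fit of this kind should reproduce a coefficient of determination above the 0.9967 the authors report.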
Procedia PDF Downloads 222
688 Indirect Intergranular Slip Transfer Modeling Through Continuum Dislocation Dynamics
Authors: A. Kalaei, A. H. W. Ngan
Abstract:
In this study, a mesoscopic continuum dislocation dynamics (CDD) approach is applied to simulate intergranular slip transfer. The CDD scheme applies an efficient kinematics equation to model the evolution of the “all-dislocation density,” i.e., the line length of dislocations of each character per unit volume. Since tracking every dislocation line can limit simulations of slip transfer at large scales with a large number of participating dislocations, a coarse-grained, extensive description of dislocations in terms of their density is utilized to resolve the effect of the collective motion of dislocation lines. For dynamic closure, namely, to obtain the dislocation velocity from a velocity law involving the effective glide stress, the mutual elastic interaction of dislocations is calculated using Mura’s equation after singularity removal at the core of the dislocation lines. The developed scheme for slip transfer can therefore resolve the effects of elastic interaction and pile-up of dislocations, which are important physics omitted in coarser models such as crystal plasticity finite element methods (CPFEMs). Also, the length and time scales of the simulation are considerably larger than those in molecular dynamics (MD) and discrete dislocation dynamics (DDD) models. The present work successfully simulates that, as dislocation density piles up in front of a grain boundary, the elastic stress on the other side increases, leading to dislocation nucleation and stress relaxation when the local glide stress exceeds the operation stress of dislocation sources seeded on the other side of the grain boundary. More importantly, the simulation verifies a phenomenological misorientation factor often used by experimentalists, namely that the ease of slip transfer increases with the product of the cosines of the misorientation angles of the slip-plane normals and slip directions on either side of the grain boundary.
Furthermore, to investigate the effects of the critical stress-intensity factor of the grain boundary, dislocation density sources are seeded at different distances from the grain boundary, and the critical applied stress required for slip transfer is studied.
Keywords: grain boundary, dislocation dynamics, slip transfer, elastic stress
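The misorientation factor the simulation verifies, i.e. the product of the cosines of the angles between the slip-plane normals and between the slip directions on either side of the boundary, can be sketched directly; the FCC slip systems below are illustrative choices, not systems taken from the simulations:

```python
import numpy as np

def transmission_factor(n1, d1, n2, d2):
    """Geometric slip-transfer factor m = cos(psi) * cos(kappa), where psi is
    the angle between the slip-plane normals and kappa the angle between the
    slip directions on either side of the grain boundary."""
    n1, d1 = np.asarray(n1, float), np.asarray(d1, float)
    n2, d2 = np.asarray(n2, float), np.asarray(d2, float)
    cos_psi = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    cos_kappa = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return cos_psi * cos_kappa

# Identically oriented slip systems transmit perfectly (m = 1); a misoriented
# FCC neighbour gives m < 1, i.e. harder slip transfer.
m_same = transmission_factor([1, 1, 1], [1, -1, 0], [1, 1, 1], [1, -1, 0])
m_mis = transmission_factor([1, 1, 1], [1, -1, 0], [1, 1, -1], [1, 0, 1])
```

Here m_same = 1 while m_mis = (1/3)·(1/2) = 1/6, consistent with the trend the CDD simulations verify: transfer gets easier as the cosine product approaches one.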
Procedia PDF Downloads 123
687 Effect of Three Desensitizers on Dentinal Tubule Occlusion and Bond Strength of Dentin Adhesives
Authors: Zou Xuan, Liu Hongchen
Abstract:
The ideal dentin desensitizing agent should not only have good biological safety, a simple clinical operation mode and a superior treatment effect, but should also have a durable effect that resists oral temperature changes and mechanical abrasion, so as to achieve persistent desensitization. Also, when using a desensitizing agent to prevent post-operative hypersensitivity, we should not only prevent it from affecting crown retention but must also understand its effects on the bond strength of dentin adhesives. There are various desensitizers and dentin adhesives in clinical use, with different chemical and physical properties. Whether the use of a desensitizing agent affects the bond strength of dentin adhesives still needs further research. In this in vitro study, we built a hypersensitive dentin model and a post-operative dentin model to evaluate the sealing effects and durability of three different dentin desensitizers on exposed tubules, and to evaluate the sealing effects and the bond strength of dentin adhesives after using the three desensitizers on post-operative dentin. The results of this study could provide important references for the clinical use of dentin desensitizing agents. 1. For the three desensitizers, the hypersensitive dentin model was built to evaluate their sealing effects on exposed tubules by SEM observation and dentin permeability analysis. All of them significantly reduced dentin permeability. 2. Test specimens of the three desensitizer-treated groups were subjected to aging treatment with 5000 thermal cycles and toothbrush abrasion, and dentin permeability was then measured to evaluate the sealing durability of the three desensitizers on exposed tubules. The sealing durability of the three groups differed. 3. The post-operative dentin model was built to evaluate the sealing effects of the three desensitizers on post-operative dentin by SEM and methylene blue staining.
All three desensitizers reduced dentin permeability significantly. 4. The influences of the three desensitizers on the bonding efficiency of total-etch and self-etch adhesives were evaluated with a micro-tensile bond strength study and bond interface morphology observation. The dentin bond strength for the Green Or group was significantly lower than that of the other two groups (P<0.05).
Keywords: dentin, desensitizer, dentin permeability, thermal cycling, micro-tensile bond strength
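Dentin permeability results of this kind are conventionally reported as a percent reduction in fluid filtration relative to the untreated baseline; a minimal sketch with hypothetical filtration rates (the study's actual measurements are not given here):

```python
def permeability_reduction(flow_before, flow_after):
    """Percent reduction in dentin fluid filtration after desensitizer
    treatment, with both flows measured under identical pressure and
    surface area so that they are directly comparable."""
    return 100.0 * (flow_before - flow_after) / flow_before

# Hypothetical filtration rates (uL/min) before and after treatment.
reduction = permeability_reduction(2.40, 0.36)
print(f"permeability reduced by {reduction:.1f}%")
```

Comparing this percentage before and after thermal cycling and toothbrush abrasion is what distinguishes the sealing durability of the three groups.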
Procedia PDF Downloads 393
686 Digitalization, Economic Growth and Financial Sector Development in Africa
Authors: Abdul Ganiyu Iddrisu
Abstract:
Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment. Obstacles to access to financing, for instance physical distance, minimum balance requirements and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa.
From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions. This nexus is rarely examined empirically in the literature. Secondly, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net-effect results suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent’s economies.
Keywords: digitalization, economic growth, financial sector development, Africa
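The fixed-effects approach named above can be illustrated with the standard within (demeaning) estimator, which sweeps out unobserved country effects before estimating the slope; a noise-free sketch on synthetic panel data, not the authors' country dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic balanced panel: 10 countries x 8 years, with unobserved country
# fixed effects and a true slope beta = 2.0 on the "digitalization" regressor
# (all values illustrative only).
n_units, n_periods, beta = 10, 8, 2.0
alpha = rng.normal(size=n_units)              # country fixed effects
x = rng.normal(size=(n_units, n_periods))     # digitalization proxy
y = alpha[:, None] + beta * x                 # growth outcome (noise-free)

# Within transformation: demean each country's series to remove alpha_i,
# then run pooled OLS on the demeaned data.
x_w = (x - x.mean(axis=1, keepdims=True)).ravel()
y_w = (y - y.mean(axis=1, keepdims=True)).ravel()
beta_hat = float(np.linalg.lstsq(x_w[:, None], y_w, rcond=None)[0][0])
```

Because the synthetic data are noise-free, the within estimator recovers the true slope exactly; with real data, the same transformation is what makes the fixed-effects coefficients robust to time-invariant country heterogeneity.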
Procedia PDF Downloads 103