Search results for: free-surface flow simulation
754 A Differential Scanning Calorimetric Study of Frozen Liquid Egg Yolk Thawed by Different Thawing Methods
Authors: Karina I. Hidas, Csaba Németh, Anna Visy, Judit Csonka, László Friedrich, Ildikó Cs. Nyulas-Zeke
Abstract:
Egg yolk is a popular ingredient in the food industry due to its gelling, emulsifying, colouring, and coagulating properties. Because of the heat sensitivity of its proteins, egg yolk can only be heat treated at low temperatures, so its shelf life, even with the addition of a preservative, is only a few weeks. Freezing can increase the shelf life of liquid egg yolk up to 1 year, but below -6 °C it undergoes gelling, which is an irreversible phenomenon. The degree of gelation depends on the time and temperature of freezing and is influenced by the process of thawing. Therefore, in our experiment, we examined egg yolks thawed in different ways. In this study, unpasteurized, industrially broken, separated, and homogenized liquid egg yolk was used. Freshly produced samples were frozen in plastic containers at -18 °C in a laboratory freezer. Frozen storage was performed for 90 days. Samples were analysed at day zero (unfrozen) and after frozen storage for 1, 7, 14, 30, 60 and 90 days. Samples were thawed in two ways (at 5 °C for 24 hours and at 30 °C for 3 hours) before testing. Calorimetric properties were examined by differential scanning calorimetry, where heat flow curves were recorded. Denaturation enthalpy values were calculated by fitting a linear baseline, and denaturation temperature values were evaluated. In addition, the dry matter content of the samples was measured by the oven method, drying at 105 °C to constant weight. For statistical analysis, two-way ANOVA (α = 0.05) was employed, with thawing mode and freezing time as the fixed factors. Denaturation enthalpy values decreased from 1.1 to 0.47 by the end of the storage experiment, a reduction of about 60%. The effect of freezing time on these values was significant: even samples stored frozen for only 1 day showed a significantly reduced enthalpy. However, the mode of thawing did not significantly affect the denaturation enthalpy of the samples, and no interaction was seen between the two factors. Neither the denaturation temperature nor the dry matter content changed significantly with freezing time or thawing mode. The results of our study show that slow freezing and frozen storage at -18 °C greatly reduce the amount of protein that can be denatured in egg yolk, indicating that the proteins underwent aggregation, denaturation or other conversions regardless of how they were thawed.
Keywords: denaturation enthalpy, differential scanning calorimetry, liquid egg yolk, slow freezing
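To make the statistical design concrete, here is a minimal two-way ANOVA sketch in Python with thawing mode and freezing time as fixed factors; the data frame, column names, and enthalpy readings are hypothetical placeholders, not the study's data.

```python
# Two-way ANOVA sketch for the design described above (statsmodels).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical enthalpy readings, two replicates per cell (invented values)
df = pd.DataFrame({
    "enthalpy": [1.10, 1.08, 1.05, 1.02, 0.80, 0.83, 0.78, 0.75,
                 0.60, 0.62, 0.55, 0.58, 0.50, 0.52, 0.47, 0.49],
    "thawing": (["5C_24h"] * 2 + ["30C_3h"] * 2) * 4,
    "days": [1] * 4 + [7] * 4 + [30] * 4 + [90] * 4,
})

# Both main effects plus their interaction, as in the abstract
model = ols("enthalpy ~ C(thawing) * C(days)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # p-values per factor and interaction
```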
Procedia PDF Downloads 129
753 Integrating Circular Economy Framework into Life Cycle Analysis: An Exploratory Study Applied to Geothermal Power Generation Technologies
Authors: Jingyi Li, Laurence Stamford, Alejandro Gallego-Schmid
Abstract:
Renewable electricity has become an indispensable contributor to achieving net-zero by mid-century to tackle climate change. Unlike solar, wind, or hydro, geothermal electricity production stagnated for decades. However, with significant breakthroughs in recent years, especially the implementation of enhanced geothermal systems (EGS) in various regions globally, geothermal electricity could play a pivotal role in reducing greenhouse gas emissions. Life cycle assessment has been applied to specific geothermal power generation technologies, yielding suggestions for optimizing their environmental performance. For instance, selecting a region with a high heat gradient enables a higher flow rate from the production well and extends the technical lifespan. Although such process-level improvements have been made, the competitiveness of geothermal power generation technologies has not yet been demonstrated explicitly on a broader horizon. Therefore, this review-based study integrates a circular economy framework into life cycle assessment, clarifying the added value that completes the sustainability profile of geothermal power plants. The derived results provide an enlarged platform to discuss geothermal power generation technologies: (i) recovering heat and electricity from the process to reduce fossil fuel requirements; (ii) recycling construction materials, such as copper, steel, and aluminum, for future projects; (iii) extracting lithium ions from geothermal brine, making geothermal reservoirs a potential supplier to the lithium battery industry; (iv) repurposing abandoned oil and gas wells to build geothermal power plants; (v) integrating geothermal energy with other available renewables (e.g., solar and wind) to provide heat and electricity as a hybrid system under different weather conditions; (vi) rethinking the fluids used in the stimulation process (EGS only), replacing water with CO2 to achieve negative emissions from the system. These results provide a new perspective for researchers, investors, and policymakers rethinking the role of geothermal in the energy supply network.
Keywords: climate, renewable energy, R strategies, sustainability
Procedia PDF Downloads 137
752 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation
Authors: Kausar Harun, Ahmad Azmin Mohamad
Abstract:
Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved more accurate and time-efficient. Thus, integration of these two methods is essential to closely reproduce the properties of synthesized ZnO. In this study, ZnO nanoparticles were grown by a sol-gel storage method with zinc acetate dihydrate as precursor, methanol as solvent, and a 1 M sodium hydroxide (NaOH) solution as stabilizer. The optimum time to produce ZnO nanoparticles was recorded as 12 hours. Phase and structural analysis showed that single-phase ZnO with the hexagonal wurtzite structure was produced. Quantitative analysis via the Rietveld refinement method then yielded structural and crystallite parameters such as the lattice dimensions, space group, and atomic coordinates. The lattice dimensions were a = b = 3.2498 Å and c = 5.2068 Å, which were later used as the main input for first-principles calculations. Applying density functional theory (DFT) as implemented in the CASTEP code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized gradient approximation with the Perdew-Burke-Ernzerhof functional and Hubbard U corrections (GGA-PBE+U) gave the structure with the lowest energy and lattice deviations. Emphasis was also given to the modification of the valence electron energy levels to overcome the underestimation inherent in DFT calculations: the Hubbard corrections for the Zn and O valence states were fixed at Ud = 8.3 eV and Up = 7.3 eV, respectively. The electronic and optical properties of the synthesized ZnO were then calculated with the GGA-PBE+U functional within the ultrasoft pseudopotential method. In conclusion, the incorporation of Rietveld analysis into first-principles calculation was valid, as the resulting properties were comparable with those reported in the literature, and the time taken to evaluate certain properties via physical testing was eliminated, since the evaluation could instead be done computationally.
Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles
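As a sketch of how the refined lattice parameters feed the first-principles step, the snippet below builds the wurtzite ZnO cell with the ASE library (an assumed tool; the study used CASTEP directly); the Hubbard U settings appear only as comments, since exact input keywords depend on the DFT code and version.

```python
# Build the Rietveld-refined wurtzite ZnO cell as a DFT starting structure.
from ase.build import bulk

zno = bulk("ZnO", crystalstructure="wurtzite", a=3.2498, c=5.2068)
print(zno.get_chemical_formula())  # 'O2Zn2', two formula units per cell
print(zno.cell.lengths())          # [3.2498, 3.2498, 5.2068]

# The cell would then be optimized in CASTEP with GGA-PBE+U
# (Ud = 8.3 eV on the Zn states, Up = 7.3 eV on the O states) and
# ultrasoft pseudopotentials, as described in the abstract.
```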
Procedia PDF Downloads 309
751 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling of the modeled rupture area S, average slip Dave, and slip asperity area Sa against seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with ground motion prediction equation (GMPE) values, and we obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed the parameters that are critical for ground motion simulations, i.e. the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on an outer edge of, the large slip areas; (2) ruptures have a tendency to initiate in small Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
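For readers checking the quoted scaling relations, the sketch below converts seismic moment to moment magnitude (Hanks-Kanamori) and to average slip via Mo = μ·S·Dave; the rigidity is a typical crustal value assumed here, not a parameter from the study.

```python
import math

MU = 3.3e10  # Pa, typical crustal rigidity (assumed)

def moment_magnitude(m0: float) -> float:
    """Hanks-Kanamori: Mw = (2/3) * (log10(Mo[N*m]) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def average_slip(m0: float, area_m2: float) -> float:
    """Average slip Dave from Mo = mu * S * Dave."""
    return m0 / (MU * area_m2)

m0 = 10 ** (1.5 * 7.0 + 9.1)      # moment of an Mw 7.0 event, ~4e19 N*m
print(moment_magnitude(m0))        # 7.0
print(average_slip(m0, 1000e6))    # ~1.2 m over a 1000 km^2 rupture
```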
Procedia PDF Downloads 144
750 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives
Authors: Chao Wang
Abstract:
Under the rapid innovation of information technology, the media play a very important role in the dissemination of information, and each generation faces a totally different media analogy. The involvement of narrative images, however, provides more possibilities for narrative text. 'Images' are manufactured through the processes of aperture, camera shutter, and photosensitive development, recorded and stamped on paper or displayed on a computer screen: concretely saved. They exist in different forms of files, data, or evidence as the ultimate appearance of events. Through the interface of media and network platforms and the special visual field of the viewer, a class of body space exists and extends outward, as thin as a sebum layer, extremely soft and delicate yet full of real tension. The physical space of the sebum layer confuses the fact that physical objects exist; it needs to be established under a perceived consensus. As at the scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes a 'Topical-Similarity' immersion, leading contemporary social practice communities, groups, and network users into a kind of illusion without presence, i.e. a non-real illusion. From the investigation and discussion of the literature, the variable characteristics of time produced by digital movie editing (for example, slices, rupture, set, and reset) are analyzed. The interactive eBook has a unique interaction in 'Waiting-Greeting' and 'Expectation-Response' that gives the operation of the image narrative structure more interpretive functions. Works of digital editing and interactive technology are combined, and the concepts and results are further analyzed. After the digitization of interventional imaging and interactive technology, real events and their media handling remain inseparably linked, as explored through movies, interactive art, and practical case discussion and analysis. Audiences need to think more rationally about the authenticity of the images carried by the text.
Keywords: sebum layer, topical and similarity, interactive technology, image narrative
Procedia PDF Downloads 389
749 Utilising Indigenous Knowledge to Design Dykes in Malawi
Authors: Martin Kleynhans, Margot Soler, Gavin Quibell
Abstract:
Malawi is one of the world's poorest nations and, consequently, the design of flood risk management infrastructure comes with a different set of challenges. There is a lack of good-quality hydromet data, both in spatial coverage and in quality, and the challenge in the design of flood risk management infrastructure is compounded by the fact that maintenance is almost completely non-existent and that solutions have to be simple to be effective. Solutions should not require any further resources to remain functional after completion, and they should be resilient. They also have to be cost-effective. The Lower Shire Valley of Malawi suffers from frequent flood events. Various flood risk management interventions were designed across the valley during the course of the Shire River Basin Management Project – Phase I, and owing to the data-poor environment, indigenous knowledge was relied upon to a great extent for hydrological and hydraulic model calibration and verification. However, indigenous knowledge comes with the caveat that it is 'fuzzy' and that it can be manipulated for political reasons. The experience in the Lower Shire Valley suggests that indigenous knowledge is unlikely to invent a problem where none exists, but that flood depths and extents may be exaggerated to secure prioritization of the intervention. Indigenous knowledge relies on the memory of a community and cannot foresee events that exceed past experience, that could occur differently to those that have occurred in the past, or where flood management interventions change the flow regime. This complicates communication of planned interventions to local inhabitants. Indigenous knowledge is, for the most part, intuitive, but flooding can sometimes be counter-intuitive, and the rural poor may have lower trust in technology. Due to the near complete lack of maintenance, infrastructure has to be designed with no moving parts and no requirement for energy inputs. This precludes pumps, valves, flap gates and sophisticated warning systems. Dyke designs in this project therefore included 'flood warning spillways', which double as pedestrian and animal crossing points and warn residents of dangerously high water levels behind the dykes before levels that could cause a dyke failure are reached. Locally available materials and erosion protection using vegetation were used wherever possible to keep costs down.
Keywords: design of dykes in low-income countries, flood warning spillways, indigenous knowledge, Malawi
Procedia PDF Downloads 279
748 Development of Three-Dimensional Groundwater Model for Al-Corridor Well Field, Amman–Zarqa Basin
Authors: Moayyad Shawaqfah, Ibtehal Alqdah, Amjad Adaileh
Abstract:
The Corridor area (400 km2) lies 60 km to the north-east of Amman, between 285-305 E and 165-185 N (Palestine Grid). Groundwater has been exploited from eleven new wells since 1999, with a total discharge of 11 MCM/a in addition to the previous well-field discharge of 14.7 MCM/a. Consequently, the aquifer balance is disturbed and water levels have declined markedly. Suitable groundwater resources management is therefore required to overcome the problems of over-pumping and its effect on groundwater quality. The three-dimensional groundwater flow model Processing Modflow for Windows Pro (PMWIN PRO, 2003) was used to calculate the groundwater budget and aquifer characteristics, and to predict the aquifer response under different stresses for the next 20 years (to 2035). The model was calibrated for steady-state conditions by trial and error, matching observed and calculated initial heads for the year 2001. Drawdown data for 2001-2010 were used to calibrate the transient model by matching calculated with observed drawdowns; the transient model was then validated using drawdown data for 2011-2014. The hydraulic conductivities of the basalt-A7/B2 aquifer system range between 1.0 and 8.0 m/day. Low conductivity values were found in the north-western and south-western parts of the study area and high values in the north-western corner; the average storage coefficient is about 0.025. The water balance for the basalt and B2/A7 formations closed at steady state with a discrepancy of 0.003%. The major inflows come from Jebal Al Arab through the basalt and the limestone (B2/A7) aquifers (12.28 MCM/a) and from excess rainfall (about 0.68 MCM/a), while the major outflows from the basalt-B2/A7 aquifer system are toward the Azraq basin (about 5.03 MCM/a) and leakage to the A1/6 aquitard (7.89 MCM/a). Four scenarios were run to predict aquifer responses under different conditions. Scenario 2 was found to be the best; it reduces the abstraction rate by 50%, from the current withdrawal of 25.08 MCM/a to 12.54 MCM/a, under which the maximum drawdowns decrease to about 7.67 and 8.38 m in 2025 and 2035, respectively.
Keywords: Amman/Zarqa Basin, Jordan, groundwater management, groundwater modeling, modflow
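As a quick consistency check, the sketch below totals the inflow and outflow terms quoted above; because the quoted figures are rounded (and pumping is handled separately in the model), this simple check does not reproduce the reported 0.003% discrepancy exactly.

```python
# Steady-state water balance check from the terms quoted in the abstract (MCM/a)
inflows = {
    "Jebal Al Arab (basalt and B2/A7)": 12.28,
    "excess rainfall": 0.68,
}
outflows = {
    "outflow toward Azraq basin": 5.03,
    "leakage to A1/6 aquitard": 7.89,
}

total_in = sum(inflows.values())
total_out = sum(outflows.values())
discrepancy = 100.0 * (total_in - total_out) / total_in
print(f"in = {total_in:.2f}, out = {total_out:.2f} MCM/a, "
      f"discrepancy = {discrepancy:.2f}%")
```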
Procedia PDF Downloads 216
747 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine
Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi
Abstract:
Diesel engines stand out for their efficiency, reliability, and adaptability. Most research and development to date has been directed towards high speed diesel engines for commercial use, where the objective is to maximize acceleration while reducing exhaust emissions to meet international standards. In high torque low speed engines, the requirements are altogether different. These engines are mostly used in the maritime and agriculture industries and in static applications such as compressors. High torque low speed engines are quite often neglected and are known for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process in high torque low speed diesel engines is of great importance. Evaporation in the combustion chamber has a strong effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high torque low speed engine using CFD (computational fluid dynamics) codes. Two distinct phases of evaporation are modeled. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations, and the O'Rourke model is used to model the evaporation phases. The results showed a considerable effect on the efficiency of the engine: the evaporation rate of fuel droplets increases with increasing vapor pressure, and an appreciable reduction in droplet size is achieved by including convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases, since droplet size is reduced and vapor pressure is increased in the engine cylinder.
Keywords: diesel fuel, CFD, evaporation, multiphase
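Outside full CFD, droplet evaporation is often approximated by the classical d²-law; the sketch below illustrates that law only, as a far simpler stand-in for the O'Rourke model used in the study, with an assumed droplet size and evaporation constant.

```python
import math

def droplet_diameter(d0: float, k: float, t: float) -> float:
    """d2-law: d(t)^2 = d0^2 - K*t until the droplet is consumed."""
    return math.sqrt(max(d0 ** 2 - k * t, 0.0))

d0 = 50e-6   # m, a 50-micron diesel droplet (assumed)
K = 1.0e-6   # m^2/s, illustrative evaporation constant
lifetime = d0 ** 2 / K
print(f"droplet lifetime ~ {lifetime * 1e3:.2f} ms")   # 2.50 ms
print(droplet_diameter(d0, K, lifetime / 2) * 1e6)     # ~35.4 microns left
```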
Procedia PDF Downloads 343
746 Characterization of WNK2 Role on Glioma Cells Vesicular Traffic
Authors: Viviane A. O. Silva, Angela M. Costa, Glaucia N. M. Hajj, Ana Preto, Aline Tansini, Martin Roffé, Peter Jordan, Rui M. Reis
Abstract:
Autophagy is a recycling and degradative system suggested to be a major cell death pathway in cancer cells. The autophagy pathway is interconnected with the endocytosis pathways, sharing the same ultimate lysosomal destination. Lysosomes are crucial regulators of cell homeostasis, responsible for downregulating receptor signalling and turnover. It seems highly likely that derailed endocytosis can make major contributions to several hallmarks of cancer. WNK2, a member of the WNK (with-no-lysine [K]) subfamily of protein kinases, has been found downregulated by promoter hypermethylation and has been proposed to act as a specific tumour-suppressor gene in brain tumours. Although some contradictory studies indicated WNK2 as an autophagy modulator, its role in cancer cell death is largely unknown. There is also growing evidence for additional roles of WNK kinases in vesicular traffic. Aim: To evaluate the role of WNK2 in autophagy and endocytosis in the glioma context. Methods: Wild-type (wt) A172 cells (WNK2 promoter-methylated), and A172 cells transfected either with an empty vector (Ev) or with a WNK2 expression vector, were used to assess the cellular basal capacity to promote autophagy, through western blot and flow-cytometry analysis. Additionally, we evaluated the effect of WNK2 on general endocytosis trafficking routes by immunofluorescence. Results: The re-expression of ectopic WNK2 did not interfere with the expression levels of the autophagy-related protein light chain 3 (LC3-II), nor did it alter mTOR signaling, when compared with Ev or wt A172 cells. However, the restoration of WNK2 resulted in a marked increase (from 8% to 92.4%) in the formation of acidic vesicular organelles (AVOs). Moreover, our results also suggest that WNK2-expressing cells show delayed uptake and internalization of cholera toxin B and transferrin ligands. Conclusions: The restoration of WNK2 interferes with vesicular traffic along the endocytosis pathway and increases AVO formation. These results also suggest a role for WNK2 in growth factor receptor turnover related to cell growth and homeostasis, and once more associate WNK2 silencing with the genesis of gliomas.
Keywords: autophagy, endocytosis, glioma, WNK2
Procedia PDF Downloads 370
745 Approaches to Integrating Entrepreneurial Education in School Curriculum
Authors: Kofi Nkonkonya Mpuangnan, Samantha Govender, Hlengiwe Romualda Mhlongo
Abstract:
In recent years, a worrisome pattern has emerged in numerous developing nations: a steady and persistent rise in unemployment rates. This escalation of economic struggles is a cause of great concern for parents who, having invested significant resources in their children's education, harboured hopes of achieving economic prosperity and stability for their families through secure employment. To tackle this pressing unemployment issue effectively, it is imperative to adopt a holistic approach, and a pivotal aspect of this approach involves incorporating entrepreneurial education seamlessly into the entire educational system. In this light, the authors explored approaches to integrating entrepreneurial education into the school curriculum, focusing on the following questions: How can an entrepreneurial mindset be promoted among learners in school? And how far have pedagogical approaches improved entrepreneurship in schools? To answer these questions, a systematic literature review underpinned by Human Capital Theory was adopted, following the three stages of planning, conducting, and reporting. The data were sought from publishers with expansive coverage of scholarly literature, such as Sage, Taylor & Francis, Emerald, and Springer, covering publications from 1965 to 2023. The search was supported by two broad terms: promoting an entrepreneurial mindset in learners, and pedagogical strategies for enhancing entrepreneurship. It was found that learners acquire an entrepreneurial mindset through an innovative classroom environment, resilience building, and exposure to guest speakers and industry experts. Teachers can also promote entrepreneurial education through pedagogical approaches such as hands-on learning and experiential activities, role-playing, business simulation games, and creative and innovative teaching. It was recommended that the Ministry of Education develop tailored training programs and workshops aimed at empowering educators with the essential competencies and insights to deliver impactful entrepreneurial education.
Keywords: education, entrepreneurship, school curriculum, pedagogical approaches, integration
Procedia PDF Downloads 97
744 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually from geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response to the combination of hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for identifying confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to time series of groundwater-level observations and analyzes the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater decline, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools to produce the best estimate of the hydrogeological structure. The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This extraordinary accuracy indicates that the developed methodology is a powerful tool for identifying hydrogeological structures.
Keywords: aquifer identification, decision tree, groundwater, Fourier transform
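A minimal sketch of the frequency-analysis step: apply an FFT to an hourly groundwater-level series and read off the amplitude at one cycle per day, the signature of daily pumping; the synthetic series below stands in for real observation-well data.

```python
import numpy as np

# Synthetic hourly levels: slow recession + daily pumping cycle + noise
hours = np.arange(24 * 365)
levels = (30.0 - 1e-4 * hours
          + 0.05 * np.sin(2 * np.pi * hours / 24.0)
          + 0.01 * np.random.default_rng(0).normal(size=hours.size))

spectrum = np.abs(np.fft.rfft(levels - levels.mean()))
freqs = np.fft.rfftfreq(levels.size, d=1.0)        # cycles per hour

daily_bin = np.argmin(np.abs(freqs - 1.0 / 24.0))  # bin nearest 1 cycle/day
amplitude = 2.0 * spectrum[daily_bin] / levels.size
print(f"daily-cycle amplitude ~ {amplitude:.3f} m")  # ~0.050 m recovered
```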
Procedia PDF Downloads 157
743 Dynamic Capabilities and Disorganization: A Conceptual Exploration
Authors: Dinuka Herath, Shelley Harrington
Abstract:
This paper prompts debate about whether disorganization can be positioned as a mechanism that facilitates the creation and enactment of important dynamic capabilities within an organization. The article is a conceptual exploration of the link between dynamic capabilities and disorganization and presents the case for agent-based modelling as a viable methodological tool for exploring this link. Dynamic capabilities are those capabilities an organization needs to sustain competitive advantage in complex environments. Disorganization is the process of breaking down the restrictive organizational structures and routines that commonly reside in organizations in order to increase organizational performance. In the 20th century, disorganization was largely viewed as an undesirable phenomenon within an organization. However, the concept has been revitalized and has garnered research interest in recent years, thanks to studies demonstrating some of its advantages to an organization. Furthermore, recent agent-based simulation studies have shown that disorganization can be managed and argue for it to be viewed as an enabler of organizational productivity. Given the natural state of disorganization and the fear it can create, this paper argues that instead of trying to 'correct' disorganization, it should be actively encouraged so that it serves a functional purpose. The study of dynamic capabilities emerged in response to heightened dynamism, and the very nature of dynamism implies a level of fluidity and flexibility, something this paper argues many organizations do not truly foster because of a constrained commitment to organization and order. We argue that the state of disorganization should be encouraged in order to develop the dynamic capabilities needed not only to deal with the complexities of the modern business environment but also to sustain competitive success. The significance of this paper stems from the fact that dynamic capabilities and disorganization are both gaining prominence in their respective academic fields. Despite the attention each concept has received individually, no conceptual link has been established to depict how they actually interact. We argue that the link between these two concepts presents a novel way of looking at organizational performance, and we explore the potential of the two concepts working in tandem to increase organizational productivity, which has significant implications for academics and practitioners alike.
Keywords: agent-based modelling, disorganization, dynamic capabilities, performance
Procedia PDF Downloads 317
742 Application of Biomimetic Approach in Optimizing Buildings Heat Regulating System Using Parametric Design Tools to Achieve Thermal Comfort in Indoor Spaces in Hot Arid Regions
Authors: Aya M. H. Eissa, Ayman H. A. Mahmoud
Abstract:
When it comes to energy-efficient thermal regulation systems, natural systems offer not only an inspirational source of innovative strategies but also sustainable and even regenerative ones. Using biomimetic design, an energy-efficient thermal regulation system can be developed. Although conventional design methods have achieved fairly efficient systems, they still have limitations that can be overcome with parametric design software. Accordingly, the main objective of this study is to apply and assess the efficiency of heat regulation strategies inspired by termite mounds in residential buildings' thermal regulation systems. Parametric design software is used to pave the way for further and more complex biomimetic design studies and implementations. A hot arid region is selected due to the deficiency of research in this climatic region. First comes the analysis phase, in which the affecting stimuli and the parameters to be optimized are set, mimicking the natural system. Then, based on climatic data and using the parametric design software Grasshopper, the building form and the heights and areas of openings are altered until an optimized solution is settled on. Finally, the efficiency of the optimized system is assessed against a conventional system: first, by indoor airflow and indoor temperature, via Ansys Fluent (CFD) simulation; and second, by the total solar radiation falling on the building envelope, calculated using Ladybug, a Grasshopper plugin. The results show an increase in the average indoor airflow speed from 0.5 m/s to 1.5 m/s, a slight decrease in temperature, and a 4% reduction in total radiation. In conclusion, although applying a single bio-inspired heat regulation strategy might not be enough to achieve an optimum system, the resulting system is more energy efficient than conventional ones, as it helps achieve indoor comfort through passive techniques, demonstrating the potential of parametric design software in biomimetic design.
Keywords: biomimicry, heat regulation systems, hot arid regions, parametric design, thermal comfort
Procedia PDF Downloads 295
741 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field of science that focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining its essential components, and representing the interactions between those components with an appropriate law. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The continuous-time Markov chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable, although the underlying process can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation (ABC) is a common approach that relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking computational time into account. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods, with a detailed investigation to quantify the difference between them in terms of efficiency and computational cost.
Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)
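To make the likelihood-free idea concrete, here is a minimal rejection-ABC sketch for a toy birth-death CTMC simulated exactly with the Gillespie algorithm; it illustrates the approach only, and is far simpler (and slower per sample) than the Repressilator analysis in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_birth_death(birth, death, x0=10, t_end=5.0):
    """Exact CTMC simulation; returns the population at t_end."""
    t, x = 0.0, x0
    while True:
        total = birth + death * x
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            return x
        x += 1 if rng.random() * total < birth else -1

def summary(theta, n=20):
    """Mean final population over n replicate simulations."""
    return np.mean([gillespie_birth_death(5.0, theta) for _ in range(n)])

obs = summary(0.5)  # "observed" summary; true death rate is 0.5

# Rejection ABC: keep prior draws whose simulated summary is close to obs
accepted = []
while len(accepted) < 100:
    theta = rng.uniform(0.1, 2.0)          # uniform prior on the death rate
    if abs(summary(theta) - obs) < 1.0:    # tolerance epsilon
        accepted.append(theta)

print(f"ABC posterior mean ~ {np.mean(accepted):.2f}")  # near 0.5
```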
Procedia PDF Downloads 203
740 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic
Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx
Abstract:
Mathematical modeling has become an important tool for the study of foam behavior. Computational fluid dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the 'destruction' of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been identified in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in cone angle and foam rheology. The CFD simulations were made with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated with a Python script and then meshed with blockMesh. The OpenFOAM volume-of-fluid (VOF) solver interFoam was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used, with the foam modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds between 400 RPM and 1500 RPM, was also used to validate the modeling results by comparing the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam. These simulations shed new light on cone behavior within the foam and allow the computation of the shear, pressure, and velocity fields of the fluid, enabling a better evaluation of the efficiency of cones as foam breakers. This study thus contributes to clarifying, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM
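As a worked illustration of the shear-thinning law used for the foam, the function below evaluates the Herschel-Bulkley apparent viscosity, with placeholder parameters rather than the fitted foam properties; note that OpenFOAM's corresponding transport model expresses the same law in kinematic form (divided by density).

```python
def hb_apparent_viscosity(gamma_dot, tau0=10.0, k=2.0, n=0.4):
    """Herschel-Bulkley: tau = tau0 + k * gamma_dot**n, so the
    apparent viscosity is tau / gamma_dot (placeholder parameters)."""
    if gamma_dot <= 0.0:
        raise ValueError("shear rate must be positive")
    return (tau0 + k * gamma_dot ** n) / gamma_dot

for gd in (0.1, 1.0, 10.0, 100.0):
    print(f"gamma_dot = {gd:6.1f} 1/s -> "
          f"mu_app = {hb_apparent_viscosity(gd):8.3f} Pa.s")
# Apparent viscosity falls steeply with shear rate: shear thinning.
```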
Procedia PDF Downloads 206
739 Syngas From Polypropylene Gasification in a Fluidized Bed
Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo
Abstract:
In recent years the world population has enormously increased its use of plastic products for everyday needs, in particular for transporting and storing consumer goods such as food and beverages. Plastics are widely used in the automotive industry, in the construction of electronic equipment, and in clothing and home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons, and about 20% of the latter quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into terrestrial and marine environments, which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made of mixtures of polymers that are incompatible with each other and contain different additives. The products obtained are always of lower quality, and after two or three recycling cycles they must be eliminated, either by thermal treatment to produce heat or by disposal in landfill. An alternative to these current solutions is to obtain a mixture of gases rich in H₂, CO and CO₂ suitable for profitable use in the production of chemicals, with consequent savings of fossil resources. A hydrogen-rich syngas can be obtained by gasification in a fluidized bed reactor, in the presence of steam as the fluidization medium. The fluidized bed reactor allows the gasification of plastics to be carried out at a constant temperature and allows the use of different plastics with different compositions and grain sizes. Furthermore, the use of steam during gasification increases the conversion of the char produced by the initial pyrolysis/devolatilization of the plastic particles. The bed inventory can be made of particles with catalytic properties, such as olivine, capable of catalysing the steam reforming of the heavy hydrocarbons normally called tars, with a consequent increase in the quantity of gases produced. The plant consists of a fluidized bed reactor made of AISI 310 steel, with an internal diameter of 0.1 m, containing 3 kg of olivine particles as bed inventory. The reactor is externally heated by an oven up to 1000 °C. The hot producer gases that exit the reactor are cooled and then quantified using a mass flow meter, and gas analyzers measure the volumetric composition of H₂, CO, CO₂, CH₄ and NH₃ in real time. At the conference, the results obtained from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at temperatures of 840-860 °C will be presented.
Keywords: gasification, fluidized bed, hydrogen, olivine, polypropylene
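As a back-of-the-envelope upper bound on the hydrogen potential of the feedstock, the sketch below works through the idealized steam-reforming stoichiometry of the propylene repeat unit; real gasifier yields are lower because of char, tars, and equilibrium limits.

```python
# Idealized H2 from the PP repeat unit:
#   reforming:  C3H6 + 3 H2O -> 3 CO + 6 H2
#   full shift: 3 CO + 3 H2O -> 3 CO2 + 3 H2
M_C3H6 = 3 * 12.011 + 6 * 1.008   # 42.08 g/mol
M_H2 = 2 * 1.008

mol_pp = 1000.0 / M_C3H6          # repeat units per kg of PP
for label, mol_h2 in [("reforming only", 6 * mol_pp),
                      ("with full water-gas shift", 9 * mol_pp)]:
    print(f"{label}: {mol_h2 * M_H2:.0f} g H2 "
          f"({mol_h2 * 22.414 / 1000:.2f} Nm3) per kg PP")
# ~287 g (3.2 Nm3) and ~431 g (4.8 Nm3) per kg PP, respectively
```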
Procedia PDF Downloads 28
738 Achieving Household Electricity Saving Potential Through Behavioral Change
Authors: Lusi Susanti, Prima Fithri
Abstract:
The rapid growth of Indonesia's population drives the country's energy needs, but not all Indonesians have access to electricity. Indonesia's electrification ratio is still around 80.1%, which means that approximately 19.9% of households in Indonesia do not receive electrical energy. Household electricity consumption in Indonesia is still dominated by the urban public. In the city of Padang, West Sumatera, Indonesia, about 94.10% of households are customers of the state power utility (PLN). Central to the issue is the human side of energy efficiency: user behavior in utilizing electricity is significant, and lasting solutions must address users' habits. This study attempts to identify the user behavior and lifestyle factors that affect household electricity consumption and to evaluate the potential for energy saving. The behavioral component is frequently underestimated or ignored in analyses of household electrical energy end use, partly because of its complexity: it is influenced by socio-demographic factors, culture, attitudes, aesthetic norms and comfort, as well as social and economic variables. An intensive questionnaire survey, in-depth interviews, and statistical analysis were carried out to collect scientific evidence for behavior-based instruments to reduce electricity consumption in the household sector. The questionnaire was developed around five factors assumed to affect the electricity consumption pattern in the household sector: attitude, energy price, household income, knowledge, and other determinants. The survey was carried out in Padang, West Sumatra Province, Indonesia. About 210 questionnaires were distributed proportionally to households in 11 districts of Padang, with stratified sampling used to select respondents. The results show that household size, income, payment method, and house size are factors affecting electricity-saving behavior in the residential sector. Household expenses on electricity are strongly influenced by gender, type of job, level of education, size of house, income, payment method, and level of installed power. These results provide scientific evidence for stakeholders on the potential for controlling electricity consumption and for designing government energy policy in the residential sector.
Keywords: electricity, energy saving, household, behavior, policy
Procedia PDF Downloads 438
737 Investigation of Nucleation and Thermal Conductivity of Waxy Crude Oil on Pipe Wall via Particle Dynamics
Authors: Jinchen Cao, Tiantian Du
Abstract:
Waxy crude oil readily crystallizes and deposits on pipeline walls, clogging the pipeline and reducing the efficiency of oil and gas gathering and transmission. In this paper, a mesoscopic-scale dissipative particle dynamics method is employed to construct four pipe wall models: a smooth wall (SS), a hydroxylated wall (HS), a rough wall (RS), and a single-layer graphene wall (GS). Snapshots of the simulated trajectories show that paraffin molecules interact with each other to form a network structure that constrains water molecules as their nucleation sites. Meanwhile, the paraffin molecules on the near-wall side are observed to adsorb horizontally in the inter-lattice gaps of the solid wall. In the pressure range of 0-50 MPa, pressure changes have little effect on the affinity properties of the SS, HS, and GS walls; for the RS wall, however, the contact angle of paraffin wax was found to decrease with increasing pressure, while water molecules showed the opposite trend. This phenomenon is due to the pressure-driven transition of paraffin molecules from the amorphous to the crystalline state. The minimum crystalline phase pressure (MCPP) was proposed to describe the lowest pressure at which crystallization of paraffin molecules occurs. The maximum number of crystalline clusters formed by paraffin molecules at the MCPP showed NSS (0.52 MPa) > NHS (0.55 MPa) > NRS (0.62 MPa) > NGS (0.75 MPa). The graphene surface had the highest MCPP and the fewest clusters, indicating that the addition of graphene inhibits the crystallization of paraffin deposits on the wall surface. Finally, the thermal conductivity was calculated. On the near-wall side, the thermal conductivity changes drastically owing to the adsorption crystallization of paraffin waxes, while on the fluid side it gradually stabilizes; the average thermal conductivities of the four wall systems were 0.254, 0.249, 0.218, and 0.188 W/(m·K). This study provides a theoretical basis for improving the transport efficiency and heat transfer characteristics of waxy crude oil in terms of wall type, wall roughness, and MCPP.
Keywords: waxy crude oil, thermal conductivity, crystallization, dissipative particle dynamics, MCPP
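For orientation, the soft pairwise repulsion at the heart of dissipative particle dynamics is F = a(1 - r/r_c) along the line of centres for r < r_c; below is a minimal sketch of that conservative term only (the full method adds dissipative and random pair forces), with the classic a = 25, r_c = 1 reduced-unit placeholders.

```python
import numpy as np

def dpd_conservative_force(r_i, r_j, a=25.0, r_c=1.0):
    """Soft repulsive DPD force on particle i from particle j:
    a * (1 - r/r_c) along the line of centres, zero beyond r_c."""
    rij = np.asarray(r_i, float) - np.asarray(r_j, float)
    r = np.linalg.norm(rij)
    if r >= r_c or r == 0.0:
        return np.zeros(len(rij))
    return a * (1.0 - r / r_c) * rij / r

print(dpd_conservative_force([0.5, 0.0, 0.0], [0.0, 0.0, 0.0]))
# [12.5, 0., 0.]: repulsion along x at half the cutoff distance
```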
Procedia PDF Downloads 72
736 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving necessary quality-control effort can be exploited via data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance; competitive leaders claim to have mastered their processes, so much of the real data has a relatively low variance. Training prediction models, however, requires the highest possible generalisability, which this data situation makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science, and as in any process, the cost of eliminating errors increases significantly with each advancing phase. For the quality prediction of hydraulic test steps of directional control valves, the question therefore arises in the initial phase whether regression or classification is more suitable. In this work, the initial phase of CRISP-DM, business understanding, is critically examined for the use case at Bosch Rexroth with regard to regression versus classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods are applied for leakage volume flow regression and for classification of the inspection decision. Impressively, classification is clearly superior to regression and achieves promising accuracies.
Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
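To mirror the regression-versus-classification question in spirit, the sketch below trains both model types on the same synthetic 'leakage' data and scores each with its natural metric; the features, threshold, and data are invented placeholders, not Bosch Rexroth data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))                  # synthetic process features
leakage = (2.0 + X @ np.array([0.5, -0.3, 0.8, 0.0, 0.1])
           + 0.3 * rng.normal(size=1000))       # leakage volume flow
passed = (leakage < 2.5).astype(int)            # inspection decision

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    X, leakage, passed, test_size=0.3, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, c_tr)

print("regression R^2:", round(r2_score(y_te, reg.predict(X_te)), 3))
print("classification accuracy:",
      round(accuracy_score(c_te, clf.predict(X_te)), 3))
```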
Procedia PDF Downloads 145
735 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping
Authors: Masato Saeki
Abstract:
Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular material constrained to move between two ends in a cavity of the primary vibrating system. The damping effect results from the exchange of momentum during impacts of the granular material against the walls of the cavity. This damping has the advantage of being independent of the environment; particle damping can therefore be applied in extreme temperature environments, where most conventional dampers would fail. Many papers have shown experimentally that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found effective for revealing the dynamics of particle damping. In this method, individual particles are treated as rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step, so new algorithms are needed to improve the computational efficiency of the DEM. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the granular particles behave identically in each subdivided region of the damper container, the total contact force between the primary system and all particles can be taken as the number of subdivisions multiplied by the contact force between the primary system and the granular material in one subdivision. This makes it possible to reduce the calculation time considerably. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents results of experimental studies of particle damper performance: it is shown that the particle radius affects the noise level, and that particle size and particle material influence damper performance.
Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level
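A minimal sketch of the acceleration idea: if the cavity is split into N sub-volumes assumed to behave identically, only one sub-volume's particles are simulated and the wall force is scaled by N; the spring-dashpot contact law and all parameter values are illustrative.

```python
def wall_contact_force(overlap, overlap_rate, k=1.0e5, c=50.0):
    """Linear spring-dashpot normal force of one particle on the wall,
    clamped at zero so contact cannot pull (no tension)."""
    if overlap <= 0.0:
        return 0.0
    return max(0.0, k * overlap + c * overlap_rate)

N_CELLS = 8  # sub-volumes the cavity is divided into (assumed equal)

# Particles of the single simulated cell: (overlap [m], overlap rate [m/s])
cell_contacts = [(1.0e-5, 0.02), (3.0e-6, 0.01), (0.0, 0.10)]
cell_force = sum(wall_contact_force(o, v) for o, v in cell_contacts)

total_force = N_CELLS * cell_force  # force felt by the primary system
print(f"estimated wall force = {total_force:.2f} N")  # 8 * 2.8 = 22.40 N
```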
Procedia PDF Downloads 453
734 A Single-Channel BSS-Based Method for Structural Health Monitoring of Civil Infrastructure under Environmental Variations
Authors: Yanjie Zhu, André Jesus, Irwanda Laory
Abstract:
Structural health monitoring (SHM), involving data acquisition, data interpretation, and decision-making systems, aims to continuously monitor the structural performance of civil infrastructure under various in-service circumstances. The main value and purpose of SHM lie in identifying damage through the data interpretation system. Research on SHM has expanded in recent decades, and a large volume of data is recorded every day owing to dramatic developments in sensor techniques and progress in signal processing. However, efficient and reliable data interpretation for damage detection under environmental variations is still a major challenge: structural damage may be masked because variations in measured data can be the result of environmental variations. This research reports a novel method based on single-channel blind signal separation (BSS), which extracts environmental effects from measured data directly, without any prior knowledge of the structural loading and environmental conditions. Despite successful applications in audio processing and biomedical research, BSS has never been used to detect damage under varying environmental conditions. The proposed method optimizes and combines ensemble empirical mode decomposition (EEMD), principal component analysis (PCA), and independent component analysis (ICA) to separate the structural responses due to different loading conditions from a single-channel input signal, with ICA applied to the dimension-reduced output of EEMD. A numerical simulation of a truss bridge, inspired by the New Joban Line Arakawa Railway Bridge, is used to validate the method. All results demonstrate that the single-channel BSS-based method can recover temperature effects from a mixed structural response recorded by a single sensor with convincing accuracy. This will be the foundation of further research on direct damage detection under varying environments.
Keywords: damage detection, ensemble empirical mode decomposition (EEMD), environmental variations, independent component analysis (ICA), principal component analysis (PCA), structural health monitoring (SHM)
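A minimal sketch of the single-channel separation chain, assuming the PyEMD (EMD-signal) and scikit-learn packages; a synthetic mixture of a slow 'temperature-like' drift and a fast 'structural' response stands in for real SHM data.

```python
import numpy as np
from PyEMD import EEMD                      # pip install EMD-signal
from sklearn.decomposition import PCA, FastICA

t = np.linspace(0.0, 10.0, 2000)
slow = 0.8 * np.sin(2 * np.pi * 0.05 * t)   # temperature-like drift
fast = 0.3 * np.sin(2 * np.pi * 5.0 * t)    # structural response
signal = slow + fast + 0.02 * np.random.default_rng(0).normal(size=t.size)

# 1) EEMD expands the single channel into a matrix of IMFs
imfs = EEMD(trials=50).eemd(signal, t)      # shape (n_imfs, n_samples)

# 2) PCA keeps the dominant directions of the IMF set
reduced = PCA(n_components=2).fit_transform(imfs.T)

# 3) ICA unmixes them into statistically independent sources
sources = FastICA(n_components=2, random_state=0).fit_transform(reduced)
print(sources.shape)  # each column ~ one effect, up to scale and sign
```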
Procedia PDF Downloads 304
733 The Anesthesia Considerations in Robotic Mastectomies
Authors: Amrit Vasdev, Edwin Rho, Gurinder Vasdev
Abstract:
Robotic surgery has enabled a new spectrum of minimally invasive breast reconstruction by improving visualization, surgeon ergonomics, and patient outcomes [1]. The DaVinci robot system can be utilized in nipple-sparing mastectomies and reconstructions. The process involves insufflation of the subglandular space and dissection of the mammary gland with a combination of cautery and blunt dissection. This case outlines a 35-year-old woman with a long-standing family history of breast cancer and a diagnosis of a deleterious BRCA2 genetic mutation. She decided to proceed with bilateral nipple-sparing mastectomies with implants. Her perioperative mammogram and MRI were negative for masses; however, her left internal mammary lymph node was enlarged. She has taken oral contraceptive pills for 3-5 years and denies DES exposure, radiation therapy, hormone replacement therapy, or prior breast surgery. She does not smoke and rarely consumes alcohol. During the procedure, the patient received a standardized anesthetic for outpatient surgery: propofol infusion, succinylcholine, sevoflurane, and fentanyl. Aprepitant was given as an antiemetic, with preoperative Tylenol and gabapentin for pain management. A concern during the procedure was CO2 insufflation into the subcutaneous space: with CO2 insufflation, there is a potential for rapid uptake leading to severe acidosis, embolism, and subcutaneous emphysema [2]. To mitigate this, it is important to hyperventilate the patient and reduce both the insufflation pressure and the CO2 flow rate to the minimum acceptable to the surgeon. For intraoperative monitoring during this 6-9 hour procedure, the use of an arterial line alongside end-tidal CO2 monitoring has been suggested; in this case it was not necessary, as the patient had excellent cardiovascular reserve and end-tidal CO2 remained within normal limits for the duration of the procedure. A BIS monitor was also utilized to reduce the anesthesia burden and facilitate prompt discharge from the PACU. Minimally invasive robotic surgery will continue to evolve, and anesthesiologists need to be prepared for the new challenges ahead. Based on our limited number of patients, robotic mastectomy appears to be a safe alternative to open surgery, with the promise of clearer tissue demarcation and better cosmetic results.
Keywords: anesthesia, mastectomies, robotic, hypercarbia
Procedia PDF Downloads 112
732 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis
Authors: Coriolano Salvini, Ambra Giovannelli
Abstract:
The use of renewable energy sources for electric power production reduces CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose significant problems for meeting load demand safely and cost-effectively over time. Significant benefits in terms of 'grid system applications', 'end-use applications' and 'renewable applications' can be achieved by introducing energy storage systems. Among the currently available solutions, compressed air energy storage (CAES) shows favorable features; small and medium size plants equipped with artificial air reservoirs are an interesting option for efficient and cost-effective distributed energy storage. The present paper addresses the design and off-design analysis of the compression system of small CAES plants suited to absorbing electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (and, where needed, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large-diameter steel pipe sections. A specific methodology for preliminary sizing and off-design modeling of the system has been developed. Since during the charging phase the absorbed electric power has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously as the reservoir fills, the compressor has to work at variable mass flow rate. In order to ensure an appropriately wide range of operation, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. The developed tool gives useful information for appropriately sizing the compression system and managing it in the most effective way. Various cases characterized by different system requirements are analysed, and the results are given and widely discussed.
Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management
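As a rough sizing aid consistent with the description above, the sketch below evaluates the ideal specific work of intercooled multi-stage compression with equal stage pressure ratios and perfect intercooling; all values are illustrative assumptions, not the paper's design figures.

```python
# Ideal specific work of intercooled multi-stage air compression with
# perfect intercooling (inlet temperature restored before each stage).
R = 287.0      # J/(kg K), air
K = 1.4        # isentropic exponent
T_IN = 293.15  # K, stage inlet temperature after intercooling (assumed)

def specific_work(pressure_ratio: float, stages: int) -> float:
    """J/kg for the whole machine, equal pressure ratio per stage."""
    pr_stage = pressure_ratio ** (1.0 / stages)
    w_stage = (K / (K - 1.0)) * R * T_IN * (pr_stage ** ((K - 1.0) / K) - 1.0)
    return stages * w_stage

# Charging a reservoir to 100 bar from ambient with 1 to 4 stages:
for n in range(1, 5):
    print(f"{n} stage(s): {specific_work(100.0, n) / 1e3:6.1f} kJ/kg")
# More stages -> less work, approaching the isothermal limit
# R * T_IN * ln(100) ~ 387 kJ/kg.
```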
Procedia PDF Downloads 229
731 The Correlation between Eye Movements, Attentional Shifting, and Driving Simulator Performance among Adolescents with Attention Deficit Hyperactivity Disorder
Authors: Navah Z. Ratzon, Anat Keren, Shlomit Y. Greenberg
Abstract:
Car accidents are a problem worldwide. Adolescents’ involvement in car accidents is higher than that of the overall driving population. Researchers estimate the risk of accidents among adolescents with symptoms of attention-deficit/hyperactivity disorder (ADHD) to be 1.2 to 4 times higher than that of their peers. Individuals with ADHD exhibit unique patterns of eye movements and attentional shifts that play an important role in driving. In addition, deficiencies in cognitive and executive functions among adolescents with ADHD are likely to put them at greater risk for car accidents. Fifteen adolescents with ADHD and 17 matched controls participated in the study. Individuals from both groups attended local public schools and did not have a driver’s license. Participants’ mean age was 16.1 (SD = .23). As part of the experiment, they all completed a driving simulation session while their eye movements were monitored. Data were recorded by an eye tracker: the entire driving session was recorded, registering each participant’s exact gaze position directly on the screen. Eye movements and simulator data were analyzed using Matlab (Mathworks, USA). Participants’ cognitive and metacognitive abilities were evaluated as well. No correlation was found between saccade properties, regions of interest, and simulator performance in either group, although participants with ADHD allocated more visual scan time (25%, SD = .13%) to a smaller segment of the dashboard area, whereas controls scanned the monitor more evenly (15%, SD = .05%). The visual scan pattern found among participants with ADHD indicates a distinct pattern of engagement-disengagement of spatial attention compared to that of non-ADHD participants, as well as lower attentional flexibility, which likely affects driving. Additionally, the lower the scores on the cognitive tests, the worse the driving performance was. None of the participants had prior driving experience, yet participants with ADHD distinctly demonstrated difficulties in scanning their surroundings, which may impair driving. This stresses the need to consider intervention programs, before driving lessons begin, to help adolescents with ADHD acquire proper driving habits, avoid typical driving errors, and achieve safer driving.
Keywords: ADHD, attentional shifting, driving simulator, eye movements
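The scan-time percentages above come from assigning gaze samples to screen regions. A minimal Python sketch of that kind of region-of-interest (ROI) dwell analysis is given below (the study itself used Matlab); the ROI boundaries, screen size, and synthetic gaze data are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

# Given gaze samples (x, y) at a fixed sampling rate, compute the
# fraction of scan time spent inside each rectangular ROI.
rois = {
    "dashboard": (0, 700, 1920, 1080),   # (x0, y0, x1, y1) in pixels
    "road":      (0, 0, 1920, 700),
}

def dwell_fractions(gaze_xy, rois):
    """gaze_xy: (N, 2) array of gaze positions, one row per sample."""
    n = len(gaze_xy)
    fractions = {}
    for name, (x0, y0, x1, y1) in rois.items():
        inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] < x1) &
                  (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] < y1))
        fractions[name] = inside.sum() / n   # share of total scan time
    return fractions

# Example with synthetic gaze data, uniform over a 1920x1080 screen
rng = np.random.default_rng(0)
gaze = rng.uniform([0, 0], [1920, 1080], size=(10_000, 2))
print(dwell_fractions(gaze, rois))
```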
Procedia PDF Downloads 329
730 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery
Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović
Abstract:
Capsaicin is a naturally occurring alkaloid extracted from the fruits of different Capsicum species. It has been employed topically to treat many conditions such as rheumatoid arthritis, osteoarthritis, cancer pain, and nerve pain in diabetes. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of capsaicin after intravenous administration make topical application of capsaicin advantageous. In this study, we have evaluated differences in the dissolution characteristics of an 11 mg capsaicin patch (purchased on the market) at different dissolution rotation speeds. The patch area is 308 cm2 (22 cm x 14 cm; it contains 36 µg of capsaicin per square centimeter of adhesive). USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted using USP Apparatus 5 (n = 6) on an ERWEKA DT800 dissolution tester (paddle type) with the addition of a disc. A 9 cm2 piece cut from the 308 cm2 patch was placed against a disc (delivery side up), retained with the stainless-steel screen, and exposed to 500 mL of phosphate buffer solution pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and different rotation speeds (50 ± 5, 100 ± 5, and 150 ± 5 rpm). Aliquots of 5 mL were withdrawn at various time intervals (1, 4, 8, and 12 hours) and replaced with 5 mL of dissolution medium. Withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method has been developed, optimized, and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v), with a flow rate of 0.9 mL/min, an injection volume of 10 μL, and a detection wavelength of 222 nm. The RP-LC method is simple, sensitive, and accurate and can be applied for fast (total chromatographic run time of 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, we can conclude that the relative difference in the dissolution rate of capsaicin after 12 hours was elevated by an increase in dissolution rotation speed (100 rpm vs. 50 rpm: 84.9 ± 11.3%; 150 rpm vs. 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, 7 and a paddle over extraction cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) could be considered a discriminatory test, able to point out the differences in the dissolution rate of capsaicin at different rotation speeds.
Keywords: capsaicin, in vitro, patch, RP-LC, transdermal
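Because each withdrawn aliquot is replaced with fresh medium, the measured concentrations must be corrected for the drug removed at earlier time points before a cumulative release profile can be reported. The sketch below shows that standard correction; the concentration values are illustrative assumptions, not the study's measurements.

```python
# Standard cumulative-release correction for sampling with medium
# replacement: each 5 mL aliquot removes drug, so later measured
# concentrations are corrected by the amounts previously withdrawn.
V_total, V_sample = 500.0, 5.0           # vessel and aliquot volumes [mL]
times = [1, 4, 8, 12]                    # sampling times [h]
conc = [1.2, 2.8, 4.1, 5.0]              # measured conc. [ug/mL] (assumed)

released = []                            # cumulative amount released [ug]
withdrawn = 0.0                          # drug removed in earlier aliquots
for c in conc:
    amount = c * V_total + withdrawn     # correct for prior sampling
    released.append(amount)
    withdrawn += c * V_sample
for t, a in zip(times, released):
    print(f"{t:>2} h: {a:8.1f} ug released")
```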
Procedia PDF Downloads 227
729 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications operate in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load of the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. First, we find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable and that it outperforms comparable algorithms.
Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query
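The Hamming distance in the keywords drives the attribute ordering: attributes whose query-usage bit vectors are close tend to be accessed together and should land in the same vertical fragment. The Python sketch below illustrates that idea with a greedy nearest-neighbour chain; the usage matrix is an invented example, and the sketch is a simplification of the idea behind the Matching algorithm, not the paper's implementation.

```python
# Order attributes so that adjacent ones have similar query-usage
# vectors; split points in the resulting sequence give the fragments.
usage = {                       # attribute -> used-by-query bit vector
    "id":    (1, 1, 1, 1),
    "x":     (1, 1, 0, 0),
    "y":     (1, 1, 0, 0),
    "speed": (0, 0, 1, 1),
    "ts":    (0, 1, 1, 1),
}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Greedy chain: start anywhere, repeatedly append the unplaced attribute
# with the smallest Hamming distance to the last placed one.
order = ["id"]
remaining = set(usage) - {"id"}
while remaining:
    last = usage[order[-1]]
    nxt = min(remaining, key=lambda a: hamming(usage[a], last))
    order.append(nxt)
    remaining.remove(nxt)
print(order)  # e.g. ['id', 'ts', 'speed', ...]
```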
Procedia PDF Downloads 157
728 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)
Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami
Abstract:
Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of the progress made in adhesive dentistry, composite restorations keep failing due to secondary caries, which occur due to environmental factors in the oral cavity. Therefore, a precise assessment of the effective marginal sealing of a restoration is highly required. The aim of this study was to evaluate gap formation at the composite/cavity-wall interface, with or without phosphoric acid etching, using SS-OCT. Materials and Methods: Round tapered cavities (2 × 2 mm) were prepared in three locations, mid-coronal, cervical, and root, of bovine incisors in two groups (SE and PA). While the self-etching adhesive (Clearfil SE Bond) was applied in both groups, Group PA was first pretreated with phosphoric acid etching (K-Etchant gel). Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin. Following 5000 thermal cycles, three cross sections were obtained from each cavity using OCT at a 1310 nm wavelength, at 0°, 60°, and 120°. Scanning was repeated after two months to monitor gap progression. The average percentage of gap length was then calculated using image analysis software, and the difference between the group means was statistically analyzed by t-test. Subsequently, the results were confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: The results showed that pretreatment with phosphoric acid etching (Group PA) led to significantly bigger gaps in the mid-coronal and cervical cavities compared to the SE group, while in the root cavities no significant difference was observed between the groups. On the other hand, the gaps formed in root cavities were significantly bigger than those in mid-coronal and cervical cavities within the same group. This study investigated the effect of phosphoric acid on gap length progression in composite restorations. In conclusion, phosphoric acid etching treatment did not reduce gap formation in any of the regions of the tooth examined. Significance: The cervical region of the tooth was more exposed to gap formation than the mid-coronal region, especially when the pre-etching treatment was added.
Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives
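As an illustration of the analysis pipeline above, the Python sketch below computes a per-section gap percentage and compares two groups with a t-test. The gap-percentage definition (summed gap length over total wall length) and all numbers are assumptions for the example, not the study's measurements or its exact image-analysis procedure.

```python
import numpy as np
from scipy import stats

def gap_percentage(gap_lengths_mm, wall_length_mm):
    """Percentage of the cavity wall occupied by gaps in one section."""
    return 100.0 * sum(gap_lengths_mm) / wall_length_mm

# e.g. a section with 0.4 mm and 0.9 mm gaps along a 6.2 mm wall:
print(f"example section: {gap_percentage([0.4, 0.9], 6.2):.1f} % gap")

# Synthetic per-specimen gap percentages for the two groups (assumed)
rng = np.random.default_rng(1)
group_se = rng.normal(12.0, 4.0, size=17)
group_pa = rng.normal(20.0, 5.0, size=15)

t, p = stats.ttest_ind(group_pa, group_se, equal_var=False)  # Welch t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```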
Procedia PDF Downloads 221
727 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets
Authors: Ahmed. Abdel Bary, Omneya. Khowessah, Mojahed. al-jamrah
Abstract:
Introduction: The major challenge in the design of oral dosage forms lies with their poor bioavailability. The most frequent causes of low oral bioavailability are poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of glimepiride (GMP) were prepared using two different ratios of two different carriers, namely PEG 6000 and Pluronic F127, and by adopting two different techniques, namely the solvent evaporation technique and the fusion technique. A 2³ full factorial design was adopted to investigate the influence of formulation variables on the properties of the prepared nanodispersions. The best formula of nanodispersed powder was formulated into tablets by direct compression. Differential scanning calorimetry (DSC) and Fourier transform infrared (FTIR) analyses were conducted for thermal behavior and surface structure characterization, respectively. The zeta potential and particle size of the prepared glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed that there was no interaction between GMP and any of the excipients used. Based on the resulting values of the different pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The values of the other evaluated pre-compression parameters of the prepared solid nanodispersions were within pharmacopoeial limits. The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, with particle sizes ranging from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in vitro dissolution studies of the prepared solid nanodispersed tablets of GMP concluded that the GMP-Pluronic F127 combination (F8) exhibited the best extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA of the percent of drug released after 20 and 60 minutes showed significant differences between the different GMP-nanodispersed tablet formulae (F1-F8) (P < 0.05). Conclusion: Preparation of glimepiride as nanodispersed particles proved to be a promising tool for enhancing the poor solubility of glimepiride.
Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water soluble drugs
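A minimal sketch of the one-way ANOVA step above is given below, using scipy's f_oneway on percent-released values at a single time point. The replicate values and the choice of three formulae are invented for illustration; only the test itself mirrors the analysis described.

```python
from scipy import stats

# Percent drug released at 20 minutes, three replicates per formula
# (synthetic illustration, not the study's data)
released_20min = {
    "F1": [42.1, 44.8, 41.3],
    "F4": [55.6, 57.2, 54.9],
    "F8": [78.4, 80.1, 79.2],   # GMP-Pluronic F127, best release
}

f_stat, p_value = stats.f_oneway(*released_20min.values())
print(f"F = {f_stat:.1f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Significant difference among formulae (alpha = 0.05)")
```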
Procedia PDF Downloads 488
726 The Effect of Filter Design and Face Velocity on Air Filter Performance
Authors: Iyad Al-Attar
Abstract:
Air filters installed in HVAC equipment and in gas turbines for power generation confront several atmospheric contaminants at various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Therefore, filter design must take into account face velocities and pleat count with its corresponding surface area in order to verify the filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, which were investigated for the initial pressure drop response and fractional efficiencies. The pleating densities used for this study were 28, 30, 32, and 34 pleats per 100 mm for each pleated panel, measured at ten different flow rates ranging from 500 to 5000 m3/h in increments of 500 m3/h. The experimental work of the current study has highlighted the underlying reasons behind the reduction in filter permeability due to the increase in face velocity and pleat density. The surface area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corners of the pleat, and/or filtration medium compression. It is evident from the entire array of experiments that as the particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached. Beyond the MPPS, the efficiency increases with increasing particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation did not have a pronounced effect on the MPPS. Throughout the study, an optimal pleat count satisfying both the initial pressure drop and the efficiency requirements did not necessarily exist. The work has also suggested that a valid comparison of pleat densities should be based on the effective surface area that participates in the filtration action and not on the total surface area the pleat density provides.
Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop
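The effective-area argument above can be made quantitative with simple pleat geometry: the unfolded media area scales with pleat count, but crowding erodes the fraction of it that actually filters, raising the media velocity and hence the viscous pressure drop. The Python sketch below illustrates this trade-off with a Darcy-law pressure drop; the panel dimensions, permeability, and crowding model are illustrative assumptions, not the study's measured values.

```python
# Pleat-count trade-off: more pleats add media area (lower media
# velocity), but past a point crowding reduces the *effective* area.
MU = 1.81e-5          # dynamic viscosity of air [Pa s]

def media_velocity(q_m3h, panel_w, panel_h, pleat_depth, pleats_per_100mm,
                   effective_fraction=1.0):
    """Approach velocity through the media of a pleated panel [m/s]."""
    n_pleats = pleats_per_100mm * (panel_w * 1000.0) / 100.0
    area = 2.0 * n_pleats * pleat_depth * panel_h   # unfolded media [m^2]
    area *= effective_fraction                      # crowding losses
    return (q_m3h / 3600.0) / area

def darcy_dp(v_media, thickness=5e-4, permeability=2e-11):
    """Viscous (Darcy) pressure drop across the medium [Pa]."""
    return MU * v_media * thickness / permeability

for ppl in (28, 30, 32, 34):
    # assume crowding erodes effective area above 30 pleats/100 mm
    eff = 1.0 - 0.03 * max(0, ppl - 30)
    v = media_velocity(3000, 0.592, 0.592, 0.05, ppl, eff)
    print(f"{ppl} pleats/100mm: v = {v:.3f} m/s, dp = {darcy_dp(v):.0f} Pa")
```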
Procedia PDF Downloads 135
725 Multiscale Process Modeling of Ceramic Matrix Composites
Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya
Abstract:
Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with the fiber reinforcement, followed by a series of pyrolysis cycles to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gases, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulations are used to inform the continuum-level models, linking molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed in this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for the optimal fabrication and performance of CMCs for a wide range of applications.
Keywords: digital engineering, finite elements, manufacturing, molecular dynamics
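To show the kind of molecular-to-continuum link involved, the sketch below gives a first-order estimate of the matrix residual stress that builds up on cooling from the processing temperature, driven by the fiber/matrix thermal-expansion mismatch. The property values are illustrative assumptions (in the workflow described above they would come from MD simulations), and the constrained-matrix formula is a textbook simplification, not the paper's finite element model.

```python
# First-order estimate of process-induced matrix residual stress from
# fiber/matrix CTE mismatch on cooling. All properties are assumed.
E_m, nu_m = 80e9, 0.20      # matrix modulus [Pa], Poisson's ratio
alpha_m = 3.5e-6            # matrix CTE [1/K]
alpha_f = 4.5e-6            # fiber CTE [1/K]
T_process, T_room = 1200.0, 25.0

dT = T_room - T_process     # cooling step [K]
# Matrix forced to follow the stiffer fiber network: its elastic strain
# is (alpha_f - alpha_m)*dT, turned into a biaxial stress estimate.
sigma_res = E_m / (1.0 - nu_m) * (alpha_f - alpha_m) * dT
print(f"residual matrix stress ~ {sigma_res/1e6:.0f} MPa "
      f"({'tension' if sigma_res > 0 else 'compression'})")
```

Minimizing this kind of mismatch stress over candidate processing temperatures is one simple instance of the parameter optimization the abstract describes.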
Procedia PDF Downloads 98