Search results for: capital cost
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7265

2135 Miracle Fruit Application in Sour Beverages: Effect of Different Concentrations on the Temporal Sensory Profile and Overall Liking

Authors: Jéssica F. Rodrigues, Amanda C. Andrade, Sabrina C. Bastos, Sandra B. Coelho, Ana Carla M. Pinheiro

Abstract:

Currently, there is great demand for natural sweeteners due to the harmful health effects of high sugar and artificial sweetener consumption. Miracle fruit, known for its unique ability to turn a sour taste into a sweet one, has been shown to be a good alternative sweetener. However, it has a high production cost, so it is important to optimize the lowest effective content. Thus, the aim of this study was to assess the effect of different miracle fruit contents on the temporal sensory profile (Time-Intensity, TI, and Temporal Dominance of Sensations, TDS) and overall liking of lemonade, in order to determine the best content to use as a natural sweetener in sour beverages. TI and TDS results showed that concentrations of 150 mg, 300 mg and 600 mg miracle fruit were effective in reducing the acidity and promoting the sweet perception in lemonade. Furthermore, the 300 mg and 600 mg concentrations produced similar profiles. In the acceptance test, the 300 mg miracle fruit concentration was shown to be an efficient substitute for sucrose and sucralose in lemonade, since they had similar hedonic values between ‘I liked it slightly’ and ‘I liked it moderately’. Therefore, 300 mg miracle fruit is an adequate content for use as a natural sweetener of lemonade. The results of this work will help the food industry in the efficient application of a new natural sweetener, the miracle fruit extract, in sour beverages, reducing costs and providing a product that meets consumer desires.

Keywords: acceptance, natural sweetener, temporal dominance of sensations, time-intensity

Procedia PDF Downloads 231
2134 Cognitive Behaviour Drama: Playful Method to Address Fears in Children on the Higher-End of the Autism Spectrum

Authors: H. Karnezi, K. Tierney

Abstract:

Childhood fears that persist over time and interfere with children’s normal functioning may have detrimental effects on their social and emotional development. Cognitive behavior therapy (CBT) is considered highly effective in treating fears and anxieties. However, given that many childhood fears are based on fantasy, the applicability of CBT may be hindered by cognitive immaturity. Furthermore, a lack of motivation to engage in therapy is another commonly encountered obstacle. The purpose of this study was to introduce and evaluate a more developmentally appropriate intervention model, specifically designed to provide phobic children with the motivation to overcome their fears. To this end, principles and techniques from cognitive and behavior therapies are incorporated into the ‘Drama in Education’ model. The Cognitive Behaviour Drama (CBD) method uses the phobic children’s creativity to involve them in the therapeutic process. The children are invited to engage in exciting fictional scenarios tailored around their strengths and special interests. Once their commitment to the drama is established, a problem that they will feel motivated to solve is introduced. To resolve it, the children have to overcome a number of obstacles, culminating in an in vivo confrontation with the fear stimulus. The study examined the application of the CBD model in three single cases. Results in all three cases showed complete elimination of all fear-related symptoms. Preliminary results justify further evaluation of the Cognitive Behaviour Drama model, which is time- and cost-effective and ensures the clients' immediate engagement in the therapeutic process.

Keywords: phobias, autism, intervention, drama

Procedia PDF Downloads 111
2133 Preserving Digital Arabic Text Integrity Using Blockchain Technology

Authors: Zineb Touati Hamad, Mohamed Ridda Laouar, Issam Bendib

Abstract:

With the massive development of technology today, the Arabic language has gained a prominent position among the languages most used for writing articles, expressing opinions, and citing on many websites, despite its considerable sensitivity in terms of structure, language skills, diacritics, writing methods, etc. In the context of the spread of the Arabic language, the Holy Quran represents the most prevalent Arabic text today, appearing in many applications and websites for citation purposes or for reading and learning rituals. The Quranic verses/surahs are published quickly and at no cost, which raises great concern about protecting the content from tampering and alteration. To protect the content of texts from distortion, it is currently necessary to refer to the original database and conduct a comparison process to extract the percentage of distortion. The disadvantage of this method is that it takes time, in addition to the lack of any guarantee of the integrity of the database itself, as it belongs to one central party. Blockchain technology today represents the best way to maintain immutable content. A blockchain is a distributed database that stores information in blocks linked to each other through cryptographic hashes, where the modification of any block can be easily detected. To exploit these advantages, we seek in this paper to justify the use of this technique in preserving the integrity of Arabic texts sensitive to change by building a decentralized framework to authenticate and verify the integrity of the digital Quranic verses/surahs spread on websites.
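The hash-linked block structure the abstract relies on can be sketched in a few lines. This is a generic, hypothetical illustration (the paper's actual framework is not specified here), with SHA-256 standing in for the cryptographic linking between blocks:

```python
import hashlib

def block_hash(index, text, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = f"{index}|{text}|{prev_hash}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def build_chain(verses):
    """Chain each verse to its predecessor via its hash."""
    chain, prev = [], "0" * 64  # genesis hash
    for i, text in enumerate(verses):
        h = block_hash(i, text, prev)
        chain.append({"index": i, "text": text, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any altered verse breaks a link."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev_hash"] != prev or block_hash(blk["index"], blk["text"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True
```

Any edit to a stored verse changes its hash and invalidates every subsequent link, which is what makes tampering detectable without trusting a single central party.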

Keywords: arabic text, authentication, blockchain, integrity, quran, verification

Procedia PDF Downloads 143
2132 Optimal Harmonic Filters Design of Taiwan High Speed Rail Traction System

Authors: Ying-Pin Chang

Abstract:

This paper presents a method combining particle swarm optimization with nonlinear time-varying evolution and orthogonal arrays (PSO-NTVEOA) for the planning of harmonic filters for a high-speed railway traction system with specially connected transformers in unbalanced three-phase power systems. The objective is to simultaneously minimize the cost of the filter, the filter's losses, and the total harmonic distortion of currents and voltages at each bus. An orthogonal array is first used to obtain the initial solution set, which is then treated as the initial training sample. Next, the PSO-NTVEOA method parameters are determined by using matrix experiments with an orthogonal array, in which a minimal number of experiments approximates the effect of full factorial experiments. The PSO-NTVEOA method is then applied to design optimal harmonic filters in the Taiwan High Speed Rail (THSR) traction system, where both rectifiers and inverters with IGBTs are used. The results of the illustrative examples verify the feasibility of the PSO-NTVEOA for designing an optimal passive harmonic filter for the THSR system, and the design approach greatly reduces the harmonic distortion. Three design schemes are compared: the V-V connection suppresses the 3rd-order harmonic, while the Scott and Le Blanc connections achieve better harmonic improvement than the V-V connection.

Keywords: harmonic filters, particle swarm optimization, nonlinear time-varying evolution, orthogonal arrays, specially connected transformers

Procedia PDF Downloads 376
2131 Patterns, Triggers, and Predictors of Relapses among Children with Steroid Sensitive Idiopathic Nephrotic Syndrome at the University of Abuja Teaching Hospital, Gwagwalada, Abuja, Nigeria

Authors: Emmanuel Ademola Anigilaje, Ibraheem Ishola

Abstract:

Background: Childhood steroid-sensitive idiopathic nephrotic syndrome (SSINS) is plagued by relapses that contribute to its morbidity and the cost of treatment. Materials and Methods: This is a retrospective review of relapses among children with SSINS at the University of Abuja Teaching Hospital from January 2016 to July 2020. Triggers related to relapse incidents were noted. The chi-square test was deployed for predictors (factors at first clinical presentation associated with subsequent relapses) of relapses. Predictors with p-values of less than 0.05 were considered significant, and 95% confidence intervals (CI) and odds ratios (OR) were reported. Results: Sixty children with SSINS, comprising 52 males (86.7%), aged 23 months to 18 years, with a mean age of 7.04±4.16 years, were studied. Thirty-eight (63.3%) subjects had 126 relapses, including infrequent relapses in 30 (78.9%) and frequent relapses in 8 (21.1%). The commonest triggers were acute upper respiratory tract infections (68, 53.9%) and urinary tract infections (UTIs) in 25 (19.8%) relapses. In 4 (3.2%) relapses, no trigger was identified. The time to first relapse ranged from 14 days to 365 days, with a median of 60 days. The significant predictors were hypertension (OR=3.4, 95% CI: 1.04-11.09, p=0.038), UTIs (OR=9.9, 95% CI: 1.16-80.71, p=0.014), malaria fever (OR=8.0, 95% CI: 2.45-26.38, p<0.001), micro-haematuria (OR=4.9, 95% CI: 11.58-15.16, p=0.004), elevated serum creatinine (OR=12.3, 95% CI: 1.48-101.20, p=0.005) and hypercholesterolaemia (OR=4.1, 95% CI: 1.35-12.63, p=0.011). Conclusion: While the pathogenesis of relapses remains unknown, it is prudent to consider relapse-specific preventive strategies against the triggers and predictors of relapses in our setting.
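The odds ratios and 95% confidence intervals reported above follow the standard 2×2-table calculation. A minimal sketch, with hypothetical counts (not the study's data), is:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = relapsed & exposed,     b = relapsed & unexposed,
    c = not relapsed & exposed, d = not relapsed & unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

A CI that excludes 1 corresponds to a statistically significant predictor, consistent with the p < 0.05 threshold used in the study.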

Keywords: patterns, triggers, predictors, steroid-sensitive idiopathic nephrotic syndrome, relapses, Nigeria

Procedia PDF Downloads 135
2130 Performance of Bored Pile on Alluvial Deposit

Authors: K. Raja Rajan, D. Nagarajan

Abstract:

Bored cast-in-situ piles are a popular choice amongst consultants and contractors due to the ability to adjust the pile length if any variation is found in the actual geological strata. Bangladesh's geological strata are dominated by silt. Design is normally based on field tests such as Standard Penetration Test (SPT) N-values. Initially, the pile capacity was estimated through a static formula using the correlation between N-value and angle of internal friction. An initial pile load test was conducted in order to validate the geotechnical parameters assumed in design. The test was conducted on a 1.5 m diameter bored cast-in-situ pile, using the kentledge method to load the pile to 2.5 times its working load. The safe working load of the pile had been estimated as 570 T, so the test load was fixed at 1425 T. The maximum load applied was 777 T, at which the settlement reached around 155 mm, more than 10% of the pile diameter. The pile load test results were not satisfactory and compelled an increase in pile length of approximately 20% of the total length. Due to the unpredictable geotechnical parameters, the length of each pile was increased, which has had a major impact on the project cost as well as the project schedule. Extra boreholes were planned, along with laboratory test results, in order to redefine the assumed geotechnical parameters. This article presents the detailed geotechnical parameter assumptions made at the design stage and the results of the pile load test that led to redefining the assumed geotechnical properties.

Keywords: end bearing, pile load test, settlement, shaft friction

Procedia PDF Downloads 241
2129 Higher Education Benefits and Undocumented Students: An Explanatory Model of Policy Adoption

Authors: Jeremy Ritchey

Abstract:

Undocumented immigrants in the U.S. face many challenges when looking to progress in society, especially when pursuing post-secondary education. The majority of research done on state-level policy adoption pertaining to undocumented students' higher-education pursuits, specifically in-state resident tuition and financial aid eligibility policies, has framed the discussion around the potential and actual impacts which implementation can have and has achieved. What is missing is a model with which to view the social, political and demographic landscapes upon which such policies (in their various forms) find a route to legislative enactment. This research looks to address this gap in the field by investigating the correlations and significant state-level variables which can be operationalized to construct a framework for the adoption of these specific policies. In the process, analysis will show that past unexamined conceptualizations of how such policies come to fruition may be limited or contradictory when compared to available data. Drawing on the principles of Policy Innovation and Policy Diffusion theory, this study uses variables collected via Michigan State University's Correlates of State Policy Project, a collectively and continuously compiled database of annual variables (1900-2016) from all 50 states relevant to policy research. Using established variable groupings (demographic, political, social capital measurements, and educational system measurements) from the period 2000 to 2014 (2001 being when such policies began), one can see how these data correlate with the adoption of policies related to undocumented students and in-state college tuition. After regression analysis, the results illuminate which variables appear significant and to what effect, helping to formulate a model that explains when adoption occurs and when it does not.
Early results have shown that traditionally held conceptions of conservative and liberal state identities, as they relate to the likelihood of such policies being adopted, did not fall in line with the collected data. Democratic and liberally identified states were, overall, less likely to adopt pro-undocumented higher education policies than Republican and conservatively identified states, and vice versa. While further analysis is needed to improve the model's explanatory power, preliminary findings show promise in widening our understanding of policy adoption factors in this realm of policies, compared to the gap in such knowledge in the field's current publications. The model is also intended to serve as an important tool for policymakers in framing such potential policies in a way that is congruent with the relevant state-level determining factors while being sensitive to the most apparent sources of potential friction. While additional variable groups and individual variables will ultimately need to be added and controlled for, this research has already begun to demonstrate how shallow or unexamined reasoning behind policy adoption in this area needs to be addressed, lest erroneous conceptions leak into the foundation of this growing and ever more important field.

Keywords: policy adoption, in-state tuition, higher education, undocumented immigrants

Procedia PDF Downloads 97
2128 Waste-Based Surface Modification to Enhance Corrosion Resistance of Aluminium Bronze Alloy

Authors: Wilson Handoko, Farshid Pahlevani, Isha Singla, Himanish Kumar, Veena Sahajwalla

Abstract:

Aluminium bronze alloys are well known for their superior abrasion resistance, tensile strength and non-magnetic properties, due to the co-presence of iron (Fe) and aluminium (Al) as alloying elements, and have been commonly used in many industrial applications. However, continuous exposure to the marine environment accelerates the risk of failure of Al bronze alloy parts. Although a higher level of corrosion resistance can be achieved by modifying the alloy's elemental composition, this comes at a price through a complex manufacturing process and increases the risk of reducing the ductility of the Al bronze alloy. In this research, ironmaking slag and waste plastic were used as the input source for surface modification of an Al bronze alloy. Microstructural analysis was conducted using polarised light microscopy and scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS). An electrochemical corrosion test was carried out through the Tafel polarisation method, and the protection efficiency relative to the base material was calculated. Results indicate that the uniform modified surface, which results from a selective diffusion process, enhanced the corrosion resistance by up to 12.67%. This approach opens a new opportunity for various industrial applications at commercial scale, minimising the dependency on natural resources by transforming waste sources into protective coatings in environmentally friendly and cost-effective ways.
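The protection efficiency figure (up to 12.67%) is conventionally computed from the Tafel corrosion current densities of the base and modified surfaces, and corrosion currents convert to a corrosion rate via the ASTM G102 relation. A minimal sketch, with illustrative values rather than the study's measurements, is:

```python
def protection_efficiency(i_corr_base, i_corr_modified):
    """Protection efficiency (%) from Tafel corrosion current densities:
    PE = (i_base - i_modified) / i_base * 100."""
    return (i_corr_base - i_corr_modified) / i_corr_base * 100.0

def corrosion_rate_mm_per_year(i_corr_uA_cm2, eq_weight_g, density_g_cm3):
    """ASTM G102 corrosion rate: CR = 3.27e-3 * i_corr * EW / rho (mm/yr),
    with i_corr in uA/cm^2, equivalent weight in g, density in g/cm^3."""
    return 3.27e-3 * i_corr_uA_cm2 * eq_weight_g / density_g_cm3
```

A lower corrosion current on the modified surface directly yields a positive protection efficiency, which is the quantity reported against the base material.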

Keywords: aluminium bronze, waste-based surface modification, tafel polarisation, corrosion resistance

Procedia PDF Downloads 223
2127 Identifying Enablers and Barriers of Healthcare Knowledge Transfer: A Systematic Review

Authors: Yousuf Nasser Al Khamisi

Abstract:

Purpose: This paper presents a Knowledge Transfer (KT) framework for healthcare sectors, applying a systematic literature review process to the healthcare organizations domain to identify enablers and barriers of KT in healthcare. Methods: The paper conducted a systematic literature search of peer-reviewed papers that described key elements of KT using four databases (Medline, Cinahl, Scopus, and ProQuest) for a 10-year period (1/1/2008–16/10/2017). The results of the literature review were used to build a conceptual framework of KT in healthcare organizations. The author used a systematic review of the literature, as described by Barbara Kitchenham in Procedures for Performing Systematic Reviews. Findings: The paper highlights the impacts of using the Knowledge Management (KM) concept in a healthcare organization for controlling infectious diseases in hospitals, improving family medicine performance and enhancing quality improvement practices. Moreover, it found that good coding performance is analytically linked with a knowledge-sharing network structure rich in brokerage and hierarchy rather than in density. The unavailability, or disregard, of the latest evidence on more cost-effective or more efficient delivery approaches increases healthcare costs and may lead to unintended results. Originality: The search procedure produced 12,093 results, of which 3,523 were general articles about KM and KT. The titles and abstracts of these articles were screened to separate what was related from what was not. 94 articles were identified by the researchers for full-text assessment. The total number of eligible articles after removing unrelated articles was 22.

Keywords: healthcare organisation, knowledge management, knowledge transfer, KT framework

Procedia PDF Downloads 126
2126 Applications of Drones in Infrastructures: Challenges and Opportunities

Authors: Jin Fan, M. Ala Saadeghvaziri

Abstract:

Unmanned aerial vehicles (UAVs), also referred to as drones, equipped with various kinds of advanced detection or surveying systems, are effective and low-cost in data acquisition, data delivery and sharing, which can benefit the building of infrastructures. This paper gives an overview of the applications of drones in the planning, design, construction and maintenance of infrastructure. The drone platform, detection and surveying systems, and post-processing systems are introduced, followed by cases detailing the applications. Challenges from different aspects are then addressed. The opportunities for drones in infrastructure include, but are not limited to, the following. Firstly, UAVs equipped with high-definition cameras or other detection equipment are capable of inspecting hard-to-reach infrastructure assets. Secondly, UAVs can be used as effective tools to survey and map the landscape to collect necessary information before infrastructure construction. Furthermore, a single UAV or multiple UAVs are useful in construction management. UAVs can also be used to collect road and building information by taking high-resolution photos for future infrastructure planning, and to provide reliable and dynamic traffic information, which is potentially helpful in building smart cities. The main challenges are limited flight time, signal robustness, post-flight data analysis, multi-drone collaboration, weather conditions, and the distraction to traffic caused by drones. This paper aims to help owners, designers, engineers and architects improve the building process of infrastructures for higher efficiency and better performance.

Keywords: bridge, construction, drones, infrastructure, information

Procedia PDF Downloads 108
2125 A Sensor Placement Methodology for Chemical Plants

Authors: Omid Ataei Nia, Karim Salahshoor

Abstract:

In this paper, a new precise and reliable sensor network methodology is introduced for unit processes and operations using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design purposes. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability, reliability together with importance-of-variables (IVs) as a novel measure in Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take into account the importance of variables in the sensor network design procedure. In this paper, specific weight is assigned to each sensor, measuring a process variable in the sensor network to indicate the importance of that variable over the others to cater to the ultimate sensor network application requirements. A set of distinct scenarios has been conducted to evaluate the performance of the proposed methodology in a simulated Continuous Stirred Tank Reactor (CSTR) as a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to significant improvement in accuracy with respect to other alternative sensor network design approaches and securing the definite allocation of sensors to the most important process variables in sensor network design as a novel achievement.
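The constriction-coefficient PSO underlying the proposed search engine follows the Clerc-Kennedy update rule, where the coefficient chi = 2/|2 - phi - sqrt(phi^2 - 4*phi)| (phi = c1 + c2 > 4) damps the velocities. A minimal sketch on a toy objective (not the paper's CSTR benchmark or its instrumentation criteria) is:

```python
import math
import random

def cpso_minimize(f, bounds, n_particles=20, iters=200, c1=2.05, c2=2.05, seed=1):
    """Constriction-coefficient PSO (Clerc & Kennedy) minimizing f over box bounds."""
    random.seed(seed)
    phi = c1 + c2  # must exceed 4 for a real-valued constriction coefficient
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # constricted velocity update toward personal and global bests
                vs[i][d] = chi * (vs[i][d]
                                  + c1 * r1 * (pbest[i][d] - xs[i][d])
                                  + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval
```

In a sensor network design setting, f would encode the weighted instrumentation criteria (precision, cost, observability, reliability, and variable importance); here a simple sphere function stands in.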

Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter

Procedia PDF Downloads 146
2124 Design, Construction and Validation of a Simple, Low-Cost Phi Meter

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

The use of a phi meter allows the equivalence ratio to be determined during a fire test. Previous phi meter designs used expensive catalysts and had restricted portability due to the large furnace and the requirement for pure oxygen. The new design does not require a catalyst. The furnace design was based on the existing micro-scale combustion calorimetry (MCC) furnace, with operating conditions based on the secondary oxidizer furnace used in the steady state tube furnace (SSTF). Preliminary tests were conducted to study the effects of varying furnace temperature on combustion efficiency. The SSTF was chosen to validate the phi meter measurements as it can both pre-set and independently quantify the equivalence ratio during a test. The phi meter data were in agreement with those obtained on the SSTF, and were further validated by comparing CO2 yields obtained from the SSTF oxidizer with those obtained by the phi meter. The phi meter designed and constructed in this work was proven to work effectively at bench scale. It was then used to measure the equivalence ratio in a series of large-scale ISO 9705 tests for numerous fire conditions, using a range of non-homogeneous materials such as polyurethane. The measurements corresponded accurately to the data collected, showing that the novel design can be used from bench- to large-scale tests to measure the equivalence ratio. This cheaper, more portable, safer and easier-to-use phi meter design will enable more widespread use and the ability to quantify the fire conditions of tests, allowing for a better understanding of flammability and smoke toxicity.
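The equivalence ratio the phi meter quantifies is the actual fuel-to-oxidizer ratio normalized by the stoichiometric one. A minimal sketch of the definition (assuming known mass flows; the instrument itself infers phi from oxygen consumption after complete combustion, which is not reproduced here) is:

```python
def equivalence_ratio(fuel_flow, air_flow, stoich_fuel_air_ratio):
    """phi = (fuel/air)_actual / (fuel/air)_stoichiometric.
    phi < 1: fuel-lean (well-ventilated); phi = 1: stoichiometric;
    phi > 1: fuel-rich (under-ventilated), the regime driving smoke toxicity."""
    return (fuel_flow / air_flow) / stoich_fuel_air_ratio
```

For example, with a stoichiometric fuel-to-air mass ratio of roughly 1/17.2 for methane (an illustrative value), feeding half the stoichiometric fuel flow gives phi = 0.5, a well-ventilated condition.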

Keywords: phi meter, smoke toxicity, fire condition, ISO9705, novel equipment

Procedia PDF Downloads 89
2123 A Multi-Release Software Reliability Growth Model Incorporating Imperfect Debugging and Change-Point under a Simulated Testing Environment and Software Release Time

Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar

Abstract:

The testing process during software development is a crucial step, as it makes the software more efficient and dependable. To estimate software reliability through the mean value function, many software reliability growth models (SRGMs) were developed under the assumption that the operating and testing environments are the same. In practice this is not true, because the reliability of the software differs when it works in its natural field environment. This article discusses an SRGM comprising a change-point and imperfect debugging in a simulated testing environment, and later extends it in a multi-release direction. Initially, software is released to the market with few features; according to market demand, the software company upgrades the current version by adding new features as time passes. Therefore, we propose a generalized multi-release SRGM in which the change-point and imperfect debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept has been adopted to determine the change point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets, and the results demonstrate that it fits the datasets better. We also discuss the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
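A change-point mean value function of the kind described can be sketched with the classic Goel-Okumoto NHPP form as a stand-in (the paper's exact MVF is not given here): the fault-detection rate switches from b1 to b2 at the change point tau, while the expected cumulative fault count stays continuous.

```python
import math

def mvf_changepoint(t, a, b1, b2, tau):
    """NHPP mean value function m(t) = a * (1 - exp(-B(t))) with a change-point:
    a  = expected total faults, b1 = detection rate before tau, b2 = after.
    B(t) accumulates the detection rate, so m(t) is continuous at tau."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    return a * (1.0 - math.exp(-(b1 * tau + b2 * (t - tau))))
```

Fitting a, b1, b2 and tau to each release's failure data, then plugging m(t) into a time-dependent cost model, is the general shape of the release-time optimization the abstract describes.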

Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors

Procedia PDF Downloads 61
2122 Processing Studies and Challenges Faced in Development of High-Pressure Titanium Alloy Cryogenic Gas Bottles

Authors: Bhanu Pant, Sanjay H. Upadhyay

Abstract:

Frequently, the upper stage of high-performance launch vehicles utilizes cryogenic-tank-submerged pressurization gas bottles with high volume-to-weight efficiency to achieve a direct gain in satellite payload. Titanium alloys, owing to their high specific strength coupled with excellent compatibility with various fluids, are the materials of choice for these applications. Amongst the titanium alloys, two are suitable for cryogenic applications, namely Ti6Al4V-ELI and Ti5Al2.5Sn-ELI. The two-phase alpha-beta alloy Ti6Al4V-ELI is usable down to the LOX temperature of 90 K, while the single-phase alpha alloy Ti5Al2.5Sn-ELI can be used down to the LHe temperature of 4 K. High-pressure gas bottles submerged in LH2 (20 K) can store more gas than bottles of the same volume submerged in LOX (90 K). Thus, the use of these alpha-alloy gas bottles at 20 K gives a distinct advantage, since fewer gas bottles are needed to store the same amount of high-pressure gas, which in turn leads to a one-to-one payload gain in the satellite. A cost advantage to the tune of 15,000 $/kg of weight saved in the upper stages, and thereby a satellite payload gain, is expected from this change. However, the processing of alpha Ti5Al2.5Sn-ELI alloy gas bottles poses challenges due to the lower forgeability of the alloy and the mode of qualification for the critical, severe application environment. The present paper describes the processing and the challenges/solutions during the development of these advanced gas bottles for LH2 (20 K) applications.

Keywords: titanium alloys, cryogenic gas bottles, alpha titanium alloy, alpha-beta titanium alloy

Procedia PDF Downloads 39
2121 Towards the Modeling of Lost Core Viability in High-Pressure Die Casting: A Fluid-Structure Interaction Model with 2-Phase Flow Fluid Model

Authors: Sebastian Kohlstädt, Michael Vynnycky, Stephan Goeke, Jan Jäckel, Andreas Gebauer-Teichmann

Abstract:

This paper summarizes the progress in the latest computational fluid dynamics research towards the modeling of lost core viability in high-pressure die casting. High-pressure die casting is a process that is widely employed in the automotive and neighboring industries due to its advantages in casting quality and cost efficiency. The degrees of freedom are, however, somewhat limited, as it has so far been difficult to use lost cores in the process. This is now changing, and the deployment of lost cores is considered a future growth potential for high-pressure die casting companies. The use of this technology is nevertheless difficult. The strength of the core material, chiefly salt, is limited, and experiments have shown that the cores will not hold under all circumstances and process designs. For this purpose, the publicly available CFD library foam-extend (OpenFOAM) is used, and two additional fluid models for incompressible and compressible two-phase flow are implemented as fluid solver models in the FSI library, using the volume-of-fluid (VOF) methodology. The necessity for the fluid-structure interaction (FSI) approach is shown with a simple CFD model geometry. The model is benchmarked against analytical models and experimental data. Sufficient agreement is found with the analytical models and good agreement with the experimental data. An outlook on future developments concludes the paper.

Keywords: CFD, fluid-structure interaction, high-pressure die casting, multiphase flow

Procedia PDF Downloads 315
2120 Algorithmic Approach to Management of Complications of Permanent Facial Filler: A Saudi Experience

Authors: Luay Alsalmi

Abstract:

Background: Facial filler is the most common type of cosmetic procedure next to Botox. Permanent filler is preferred nowadays due to the low cost brought about by non-recurring injection appointments. However, such fillers pose a higher risk of complications, with even greater adverse effects when the procedure is done using unknown dermal filler injections. Aim: This study aimed to establish an algorithm to categorize and manage patients who receive permanent fillers. Materials and Methods: Twelve participants presented to the service through the emergency department or as outpatients from November 2015 to May 2021. Demographics such as age, sex, date of injection, time of onset, and type of complication were collected. After examination, all cases were managed based on the algorithm established in this study. FACE-Q was used to measure overall satisfaction and psychological well-being. Results: An algorithm to effectively diagnose and manage these patients with a high satisfaction rate was established. All participants were non-smoking females with no known medical comorbidities. The algorithm determined the treatment plan when complications were faced. Results revealed high appearance-related psychosocial distress prior to surgery, which dropped significantly after surgery. FACE-Q provided evidence of satisfactory ratings among patients before and after surgery. Conclusion: This treatment algorithm can guide the surgeon in formulating a suitable plan with fewer complications and a high satisfaction rate.

Keywords: facial filler, FACE-Q, psycho-social stress, botox, treatment algorithm

Procedia PDF Downloads 70
2119 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts

Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti

Abstract:

Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. In order to find the input configuration that guarantees the optimal performance of the printed part, the process-performance relationship must be found. Fused Filament Fabrication (FFF) is the selected demonstrative AM technology due to its great popularity in the industrial manufacturing world. A material model that considers the different printing patterns present in an FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-code file. An Adaptive Mesh Refinement (AMR) based on the octree strategy is used in order to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity and cost-efficient Finite Element (FE) simulations are performed, and the influence of key process parameters on the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is implemented due to the orthotropic and heterogeneous constitutive nature of FFF components and the differences between their tensile and compressive strengths. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator.
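The Tsai-Wu criterion named above can be sketched for plane stress. The interaction coefficient F12 is taken here as the common default -0.5*sqrt(F11*F22); this is an assumption, since the paper's choice is not stated:

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; a value >= 1 indicates failure.
    s1, s2: normal stresses along/across the deposition direction; t12: shear.
    Xt/Xc, Yt/Yc: tensile/compressive strengths (positive); S: shear strength.
    The linear terms F1, F2 capture the tension/compression asymmetry."""
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / (S * S)
    F12 = -0.5 * math.sqrt(F11 * F22)  # assumed default interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1 * s1 + F22 * s2 * s2
            + F66 * t12 * t12 + 2.0 * F12 * s1 * s2)
```

By construction the index reaches exactly 1 at each uniaxial strength, which is what makes it a convenient scalar metric for an optimization loop over printing directions.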

Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization

Procedia PDF Downloads 44
2118 Carbon Footprint Assessment and Application in Urban Planning and Geography

Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim

Abstract:

Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services, but cannot exist in environments without food, energy, and water supplies. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the separation of urban and agricultural areas creates more energy demand for transporting food and goods between the regions. As energy resources are depleted all over the world, the environmental impact crossing the boundaries of cities is also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, a gap remains between energy supply and demand with current technology, even in technically advanced countries. Therefore, reducing energy demand is more realistic than relying solely on the development of technology for sustainable development. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, the carbon footprint has been assessed at different geographical scales, such as nation, city, region, household, and individual. Carbon footprint assessment for a nation or a city is possible using national or city-level statistics on energy consumption categories. By means of carbon footprint calculation, it is possible to compare the ecological capacity and deficit among nations and cities. The carbon footprint also offers great insight into the geographical distribution of carbon intensity at a regional level in the agricultural field. The study illustrates the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be adapted to measure household and individual carbon footprints.
For example, the first case study collected carbon footprint data from a survey measuring the home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on the carbon footprint of residents at an intra-urban scale in the capital city of Seoul, Korea. In that study, the individual carbon footprint of residents was calculated by converting the home and travel fossil fuel use of respondents to metric tons of carbon dioxide (tCO₂), multiplying by the conversion factors equivalent to the carbon intensities of each energy source, such as electricity, natural gas, and gasoline. The carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, the carbon footprint may be measured and applied in various spatial units, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint application in planning and geography. In addition, the consumption of food, goods, and services can be included in carbon footprint calculations in urban planning and geography.
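The conversion described above (energy use per source multiplied by a carbon-intensity factor, summed, and expressed in tCO₂) can be sketched as follows. The emission factors here are illustrative placeholders, not the factors used in the Korean studies.

```python
# Illustrative household carbon footprint: multiply annual use of each
# energy source by its carbon intensity (assumed values), sum, convert to tCO2.
EMISSION_FACTORS = {
    "electricity_kwh": 0.4,   # kg CO2 per kWh (assumed for illustration)
    "natural_gas_m3": 2.0,    # kg CO2 per m3 (assumed)
    "gasoline_l": 2.3,        # kg CO2 per litre (assumed)
}

def household_footprint_tco2(consumption):
    """consumption: dict mapping energy source to annual use in the factor's unit.
    Returns the footprint in metric tons of CO2."""
    kg = sum(EMISSION_FACTORS[source] * amount
             for source, amount in consumption.items())
    return kg / 1000.0
```

A survey response of, say, 3,000 kWh of electricity, 500 m³ of gas, and 1,000 L of gasoline per year would then map directly to a single tCO₂ figure per household.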

Keywords: carbon footprint, case study, geography, urban planning

Procedia PDF Downloads 280
2117 Proteomics Application in Disease Diagnosis and Reproduction Improvement in Cows

Authors: Abdollah Sobhani, Hossein Vaseghi-Dodaran

Abstract:

Proteomics is defined as the study of the protein complement of a cell, tissue, or biological fluid. This technique has the potential to identify protein biomarkers of disease states. In a study performed on bovine ovarian follicular cysts (BOFC), eight proteins were found to be overexpressed in BOFC, suggesting that these proteins could be useful biomarkers for BOFC. Comparison of the serum proteome patterns of cows affected by postpartum endometritis with those of healthy cows revealed that the concentration of orosomucoid was decreased in endometritis. Comparing the proteomes of laboratory-adapted strains and clinical isolates of Brucella abortus could help to better understand this disease and support vaccine development. Proteomics experiments have identified new proteins and pathways that may be important in future hypothesis-driven studies of glucocorticoid-induced immunosuppression. Understanding the molecular mechanisms of the parameters affecting male fertility is essential for obtaining high reproductive efficiency while decreasing cost and time. Investigations of the proteome of high-fertility spermatozoa indicated that the expression of some proteins, such as the casein kinase 2 (CKII) prime polypeptide and tyrosine kinase, was higher in high-fertility spermatozoa than in low-fertility spermatozoa. Some evidence has also indicated that variation in the types and amounts of proteins in seminal fluid regulates fertility indexes in dairy bulls. In conclusion, proteomics is a useful technique for drug discovery, vaccine development, disease diagnosis via biomarkers, and the improvement of reproductive efficiency.

Keywords: proteomics, reproduction, biomarker, immunity

Procedia PDF Downloads 394
2116 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors used to correct the single-phase Darcy’s law for application to multiphase flow. For effective characterization of large-scale multiphase flow in hydrocarbon recovery, relative permeabilities and capillary pressures are used. These parameters are acquired via special core flooding experiments. The special core analysis (SCAL) module of reservoir simulation is applied by engineers for the evaluation of these parameters. However, core flooding experiments on shale core samples are expensive and time-consuming before the various flow assumptions, for instance Darcy’s law, are satisfied. This makes it imperative to apply core flooding simulations, in which various analyses of the relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently, effectively, and at a relatively fast pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. First, the basic petrophysical parameters of a Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. Compared to other methods, this approach saves cost and time and is very reliable for computing relative permeabilities and capillary pressures at steady or unsteady state and in drainage or imbibition processes in the oil and gas industry.
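The kind of correlations fitted in such a simulation can be sketched with the classic Corey relative permeability and Brooks-Corey capillary pressure forms; the abstract does not name which correlation family fit best, and all parameter values below (endpoints, exponents, entry pressure) are illustrative assumptions.

```python
def corey_relperm(Sw, Swi=0.2, Sor=0.2, krw_max=0.3, kro_max=0.8, nw=3.0, no=2.0):
    """Corey-type relative permeability correlation, one of the families a
    SCAL simulator such as Sendra can fit. Sw is water saturation; Swi and
    Sor are irreducible water and residual oil saturations (illustrative)."""
    Se = (Sw - Swi) / (1.0 - Swi - Sor)   # normalized (effective) saturation
    Se = min(max(Se, 0.0), 1.0)
    krw = krw_max * Se**nw                # water relative permeability
    kro = kro_max * (1.0 - Se)**no        # oil relative permeability
    return krw, kro

def brooks_corey_pc(Sw, Swi=0.2, Sor=0.2, Pe=10.0, lam=2.0):
    """Brooks-Corey capillary pressure: Pe is the entry pressure and lam the
    pore-size distribution index (both illustrative)."""
    Se = (Sw - Swi) / (1.0 - Swi - Sor)
    return Pe * Se**(-1.0 / lam)
```

The endpoint behavior (krw = 0 at irreducible water saturation, kro = 0 at residual oil saturation, Pc = Pe at full effective saturation) is what a history-matching loop anchors the fitted exponents against.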

Keywords: relative permeability, porosity, 1-D black oil simulator, capillary pressures

Procedia PDF Downloads 431
2115 Effectiveness of Lowering the Water Table as a Mitigation Measure for Foundation Settlement in Liquefiable Soils Using 1-g Scale Shake Table Test

Authors: Kausar Alam, Mohammad Yazdi, Peiman Zogh, Ramin Motamed

Abstract:

An earthquake is an unpredictable natural disaster. It induces liquefaction, which causes considerable damage to structures, lifelines, and piping systems because of ground settlement. As a result, there is great concern over how to mitigate such damage. Previous researchers adopted different ground improvement techniques to reduce the settlement of structures during earthquakes. This study evaluates the effectiveness of lowering the water table as a technique to mitigate foundation settlement in liquefiable soil. The performance is evaluated based on foundation settlement and the reduction of excess pore water pressure. In this study, a scaled model was prepared based on a full-scale shake table experiment conducted at the University of California, San Diego (UCSD). The model ground consists of three soil layers having relative densities of 55%, 45%, and 90%, respectively. A shallow foundation is seated over an unsaturated crust layer. After preparation of the model ground, the water table was set at 45, 40, and 35 cm (from the bottom). The input motions were then applied for 10 seconds, with a peak acceleration of 0.25 g and a constant frequency of 2.73 Hz. The experimental results clearly show the effectiveness of lowering the water table in reducing the foundation settlement and excess pore water pressure: the foundation settlement was reduced from 50 mm to 5 mm. In addition, lowering the water table is a cost-effective mitigation measure for decreasing liquefaction-induced building settlement.

Keywords: foundation settlement, ground water table, liquefaction, shake table test

Procedia PDF Downloads 97
2114 Comparative Analysis of Yield before and after Access to Extension Services among Crop Farmers in Bauchi Local Government Area of Bauchi State, Nigeria

Authors: U. S. Babuga, A. H. Danwanka, A. Garba

Abstract:

The research was carried out to compare the yield of respondents before and after access to extension services on crop production technologies in the study area. Data were collected through questionnaires administered to seventy-five randomly selected respondents. Data were analyzed using descriptive statistics, the t-test, and regression models. The results revealed that the majority (97%) of the respondents had attended one form of school or another. The majority (78.67%) of the respondents had farm sizes ranging between 1 and 3 hectares. Most respondents adopted improved crop varieties, plant spacing, herbicides, fertilizer application, land preparation, crop protection, crop processing, and storage of farm produce. The t-test between the yields of respondents before and after access to extension services shows a significant (p<0.001) difference in yield. It also indicated that farm size was significant (p<0.001), while household size, years of farming experience, and extension contact were significant at p<0.005. The major constraints to the adoption of crop production technologies were a shortage of extension agents, the high cost of technology, and a lack of access to credit facilities. The major prerequisites for the improvement of extension services are the employment of more extension agents and adequate training. Adequate agricultural credit to farmers at low interest rates will enhance their adoption of crop production technologies.
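The before/after comparison above is a paired design, so the t-statistic is computed on per-farmer yield differences. A minimal sketch, with made-up yields for illustration (the study's actual data are not reproduced in the abstract):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t-statistic for before/after yields measured on the same
    respondents: t = mean(d) / (sd(d) / sqrt(n)), with n - 1 degrees of
    freedom, where d is the per-respondent yield difference."""
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))
```

The resulting |t| is then compared against the critical value at n - 1 degrees of freedom to decide significance at the chosen level (p<0.001 in the study).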

Keywords: comparative, analysis, yield, access, extension

Procedia PDF Downloads 344
2113 [Keynote Talk]: sEMG Interface Design for Locomotion Identification

Authors: Rohit Gupta, Ravinder Agarwal

Abstract:

The surface electromyographic (sEMG) signal has the potential to identify human activities and intentions. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. The paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical feature selection method. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, which is a front-end interface integrated circuit for ECG applications. The sEMG signal was recorded from two lower-limb muscles for three locomotion modes, namely Plane Walk (PW), Stair Ascending (SA), and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of this work demonstrates the suitability of the proposed feature selection algorithm for locomotion recognition, as compared to the other existing feature vectors. The SVM classifier outperformed the compared classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
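The abstract does not specify the class-dependent statistic used, but the general idea of scoring features by how well they separate classes can be sketched with a simple two-class Fisher-style score; this is an illustrative stand-in, not the paper's algorithm.

```python
from statistics import mean, pvariance

def fisher_scores(X, y):
    """Two-class Fisher score per feature: (m1 - m2)^2 / (v1 + v2).
    X is a list of feature vectors, y a list of 0/1 class labels.
    Higher score = better class separation on that feature."""
    scores = []
    for j in range(len(X[0])):
        c1 = [x[j] for x, lab in zip(X, y) if lab == 1]
        c0 = [x[j] for x, lab in zip(X, y) if lab == 0]
        num = (mean(c1) - mean(c0)) ** 2
        den = (pvariance(c1) + pvariance(c0)) or 1e-12  # guard zero variance
        scores.append(num / den)
    return scores
```

Ranking features by this score and keeping the top-k gives a class-dependent feature vector that can then be fed to any of the compared classifiers.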

Keywords: classifiers, feature selection, locomotion, sEMG

Procedia PDF Downloads 276
2112 The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron

Authors: Tebogo Emma Makaba, Barnabas Ndlovu Gatsheni

Abstract:

The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify the variables that influence the flow of traffic and to design a vehicle traffic prediction model that predicts the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used were collected by Mikro's Traffic Monitoring (MTM). A multi-layer perceptron (MLP) was used individually to construct the model, and the MLP was also combined with the bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using predictive accuracy and prediction costs. The cost was computed using a combination of the loss matrix and the confusion matrix. The prediction models designed show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume, and day of the month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families. The logistics industry will save more than twice what it is currently spending.
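The bagging step combined with the MLP above can be illustrated in miniature: train each base model on a bootstrap resample and aggregate by majority vote. To keep the sketch self-contained, a trivial nearest-neighbour rule on one feature stands in for the MLP, and the data are invented, not MTM records.

```python
import random

def bagging_predict(train, x, n_models=25, seed=0):
    """Bootstrap aggregation (bagging) with a toy base learner.
    train: list of (feature_value, label) pairs; x: feature to classify.
    Each base model sees a bootstrap resample; the final label is the
    majority vote across models."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]          # bootstrap resample
        nearest = min(sample, key=lambda p: abs(p[0] - x))   # toy base learner
        votes.append(nearest[1])
    return max(set(votes), key=votes.count)                  # majority vote
```

In the paper's setting the base learner would be an MLP trained on travel time, average speed, traffic volume, and day of the month; the resampling-plus-vote structure is the part bagging contributes.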

Keywords: bagging ensemble methods, confusion matrix, multi-layer perceptron, vehicle traffic flow

Procedia PDF Downloads 326
2111 Optimization of Bifurcation Performance on Pneumatic Branched Networks in Next-Generation Soft Robots

Authors: Van-Thanh Ho, Hyoungsoon Lee, Jaiyoung Ryu

Abstract:

Efficient pressure distribution within soft robotic systems, specifically to the pneumatic artificial muscle (PAM) regions, is essential to minimize energy consumption. This optimization involves adjusting reservoir pressure, pipe diameter, and branching network layout to reduce flow speed and pressure drop while enhancing flow efficiency. The outcome of this optimization is a lightweight power source and reduced mechanical impedance, enabling extended wear and movement. To achieve this, a branching network system was created by combining pipe components and intricate cross-sectional area variations, employing the principle of minimal work based on a complete virtual human exosuit. The results indicate that modifying the cross-sectional area of the branching network, gradually decreasing it, reduces velocity and enhances momentum compensation, preventing flow disturbances at separation regions. These optimized designs achieve uniform velocity distribution (uniformity index > 94%) prior to entering the connection pipe, with a pressure drop of less than 5%. The design must also consider the length-to-diameter ratio for fluid dynamic performance and production cost. This approach can be utilized to create a comprehensive PAM system, integrating well-designed tube networks and complex pneumatic models.
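The minimum-work sizing principle behind the branching network (Murray's law, named in the keywords) states that the cube of a parent pipe's radius equals the sum of the cubes of its daughters' radii. A minimal sketch for equal daughters, with illustrative radii:

```python
def murray_daughter_radius(r_parent, n_daughters):
    """Murray's law for a minimum-work branching network:
    r_parent^3 = sum of r_daughter^3. Returns the radius of each of
    n equal daughter pipes."""
    return (r_parent**3 / n_daughters) ** (1.0 / 3.0)
```

For a bifurcation (two equal daughters), each daughter radius is the parent radius divided by the cube root of 2, which is the gradual cross-sectional reduction that keeps velocities uniform at the separation regions described above.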

Keywords: pneumatic artificial muscles, pipe networks, pressure drop, compressible turbulent flow, uniformity flow, Murray's law

Procedia PDF Downloads 56
2110 Carbon Dioxide Hydrogenation to Methanol over Cu/ZnO-SBA-15 Catalyst: Effect of Metal Loading

Authors: S. F. H. Tasfy, N. A. M. Zabidi, M.-S. Shaharun

Abstract:

The utilization of CO2 as a carbon source to produce valuable chemicals is one important way to mitigate the global warming caused by increasing CO2 in the atmosphere. Supported metal catalysts are crucial for the production of clean and renewable fuels and chemicals from the stable CO2 molecule. The catalytic conversion of CO2 into methanol has recently come under increased scrutiny as an opportunity to use CO2 as a low-cost carbon source. Therefore, a series of bimetallic Cu/ZnO-based catalysts supported on SBA-15 was synthesized via the impregnation technique with different total metal loadings and tested in the catalytic hydrogenation of CO2 to methanol. The morphological and textural properties of the synthesized catalysts were determined by transmission electron microscopy (TEM); temperature-programmed desorption, reduction, oxidation, and pulse chemisorption (TPDRO); and N2 adsorption. The CO2 hydrogenation reaction was performed in a microactivity fixed-bed system at 250 °C, 2.25 MPa, and an H2/CO2 ratio of 3. The experimental results showed that the catalytic structure and performance were strongly affected by the loading of the active site: the catalytic activity, methanol selectivity, and space-time yield increased with increasing metal loading until reaching their maximum values at a metal loading of 15 wt%, while further addition of metal inhibited the catalytic performance. The highest catalytic activity of 14% and methanol selectivity of 92% were obtained over the Cu/ZnO-SBA-15 catalyst with a total bimetallic loading of 15 wt%. The excellent performance of the 15 wt% Cu/ZnO-SBA-15 catalyst is attributed to the presence of well-dispersed active sites with small particle sizes, a higher Cu surface area, and lower catalytic reducibility.

Keywords: hydrogenation of carbon dioxide, methanol synthesis, metal loading, Cu/ZnO-SBA-15 catalyst

Procedia PDF Downloads 215
2109 Behavior of Composite Reinforced Concrete Circular Columns with Glass Fiber Reinforced Polymer I-Section

Authors: Hiba S. Ahmed, Abbas A. Allawi, Riyadh A. Hindi

Abstract:

Pultruded materials made of fiber-reinforced polymer (FRP) come in a broad range of shapes, such as bars, I-sections, C-sections, and other structural sections. These FRP materials are starting to compete with steel as structural materials because of their high strength, low self-weight, and low maintenance costs, especially in corrosive conditions. This study aimed to evaluate the effectiveness of Glass Fiber Reinforced Polymer (GFRP) in hybrid columns built by combining GFRP profiles with concrete columns, given their low cost and high structural efficiency. To achieve the aims of this study, nine circular columns with a diameter of 150 mm and a height of 1000 mm were cast using normal concrete with a compressive strength of 35 MPa. The research involved three different types of reinforcement: hybrid circular columns of type (IG) with a GFRP I-section and a 1% steel bar reinforcement ratio, and hybrid circular columns of type (IS) with a steel I-section and a 1% steel bar reinforcement ratio (where the cross-sectional area of the I-section was the same for GFRP and steel), compared with a reference column (R) without an I-section. The columns were tested to investigate the ultimate capacity, axial and lateral deformation, strain in the longitudinal and transverse reinforcement, and failure mode under different loading conditions (concentric and eccentric), with eccentricities of 25 mm and 50 mm, respectively. In the second part, an analytical finite element model will be developed using ABAQUS software to validate the experimental results.

Keywords: composite, columns, reinforced concrete, GFRP, axial load

Procedia PDF Downloads 37
2108 Radio Frequency Identification Device Based Emergency Department Critical Care Billing: A Framework for Actionable Intelligence

Authors: Shivaram P. Arunachalam, Mustafa Y. Sir, Andy Boggust, David M. Nestler, Thomas R. Hellmich, Kalyan S. Pasupathy

Abstract:

Emergency departments (EDs) provide urgent care to patients throughout the day in a complex and chaotic environment. Real-time location systems (RTLS) are increasingly being utilized in healthcare settings and have been shown to improve safety, reduce cost, and increase patient satisfaction. Radio Frequency Identification Device (RFID) data in an ED have been used to compute variables such as patient-provider contact time, which is associated with patient outcomes such as 30-day hospitalization. These variables can provide avenues for improving ED operational efficiency. A major challenge in ED financial operations is the under-coding of critical care services, due to physicians' difficulty reporting accurate times for critical care provided under Current Procedural Terminology (CPT) codes 99291 and 99292. In this work, the authors propose a framework to optimize ED critical care billing using RFID data. RFID-estimated physician-patient contact times could accurately quantify direct critical care services, supporting a data-driven approach to ED critical care billing. This paper describes the framework and provides insights into opportunities to prevent under-coding as well as over-coding, to avoid insurance audits. Future work will focus on data analytics to demonstrate the feasibility of the framework described.
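The mapping from measured critical care time to the two CPT codes can be sketched from the standard CPT time thresholds: 99291 covers the first 30-74 minutes, and 99292 reports each additional 30 minutes; below 30 minutes neither code applies. This is a sketch of the billing logic only; payer-specific rules and the paper's actual framework are not modeled.

```python
import math

def critical_care_codes(minutes):
    """Map RFID-estimated critical care time (in minutes) to CPT codes
    using the standard time thresholds: <30 min -> no critical care code;
    30-74 min -> 99291; each additional 30 min -> one 99292."""
    if minutes < 30:
        return []
    extra = max(0, math.ceil((minutes - 74) / 30))
    return ["99291"] + ["99292"] * extra
```

A contact-time estimate of 110 minutes, for instance, would map to one 99291 plus two 99292 units, giving a concrete target against which physician-reported times could be audited for under- or over-coding.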

Keywords: critical care billing, CPT codes, emergency department, RFID

Procedia PDF Downloads 116
2107 Automated Distribution System Management: Substation Remote Diagnostic and Operation Solution for Obafemi Awolowo University

Authors: Aderonke Oluseun Akinwumi, Olusola A. Komolaf

Abstract:

This paper describes the wide array of challenges facing both electric utilities and consumers in the distribution systems of developing countries, using Obafemi Awolowo University, Ile-Ife, Nigeria as a case study. It also proffers a cost-effective solution through the remote monitoring, diagnosis, and operation of distribution networks without compromising system reliability. As utilities move from manned and unintelligent networks to completely unmanned smart grids, switching activities at substations and feeders will be managed and controlled remotely by dedicated systems, hence this design. The Substation Remote Diagnostic and Operation Solution (sRDOs) remotely monitors the load on Medium Voltage (MV) and Low Voltage (LV) feeders as well as distribution transformers, and allows the utility to disconnect non-paying customers with absolutely no extra resource deployment and without interrupting supply to paying customers. The implementation of this design improved the lifetime of key distribution infrastructure by automatically isolating feeders during overload conditions and, more importantly, isolating erring consumers. This increased the ratio of revenue generated from electricity bills to total network load.

Keywords: electric utility, consumers, remote monitoring, diagnostic, system reliability, manned and unintelligent networks, unmanned smart grids, switching activities, medium voltage, low voltage, distribution transformer

Procedia PDF Downloads 114
2106 Presenting a Model in the Analysis of Supply Chain Management Components by Using Statistical Distribution Functions

Authors: Ramin Rostamkhani, Thurasamy Ramayah

Abstract:

One of the most important topics for today's industrial organizations is the challenging issue of supply chain management. In this field, scientists and researchers have published numerous practical articles and models, especially in the last decade. This research considers the modeling of supply chain management component data using well-known statistical distribution functions. Describing the behavior of supply chain data based on the characteristics of statistical distribution functions is, to the best of our knowledge, innovative research that had not been published prior to this work. In an analytical process, describing different aspects of the functions, including the probability density, cumulative distribution, reliability, and failure functions, can identify the suitable statistical distribution function for each of the supply chain management components, which can then be applied to predict the future behavior of the relevant component's data. Providing a model that fits the best statistical distribution function to the supply chain management components would be a major advance in understanding the behavior of supply chain management elements in today's industrial organizations. The final step demonstrates the results of the proposed model by introducing the process capability indices before and after implementing it, and verifies the approach through the relevant assessment. The introduced approach can save the time and cost required to achieve organizational goals. Moreover, it can increase added value in the organization.
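The four function aspects named above (probability density, cumulative distribution, reliability, and failure rate) can be illustrated for one candidate distribution, the Weibull; the abstract does not say which distributions fit which components, and the shape/scale values here are illustrative.

```python
import math

def weibull_functions(t, beta=1.5, eta=100.0):
    """Probability density f(t), cumulative distribution F(t),
    reliability R(t) = 1 - F(t), and failure (hazard) rate h(t) = f(t)/R(t)
    for a Weibull model with shape beta and scale eta (illustrative values)."""
    z = (t / eta) ** beta
    cdf = 1.0 - math.exp(-z)
    pdf = (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-z)
    rel = 1.0 - cdf
    hazard = pdf / rel
    return pdf, cdf, rel, hazard
```

Fitting such a distribution to a supply chain metric (e.g. supplier lead time) and reading off R(t) gives the kind of forward-looking behavior prediction the model describes; at t = eta, for example, R(t) is always e⁻¹ regardless of the shape parameter.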

Keywords: analyzing, process capability indices, statistical distribution functions, supply chain management components

Procedia PDF Downloads 77