Search results for: aerodynamics-strength coupled optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4567

397 Digital Image Correlation Based Mechanical Response Characterization of Thin-Walled Composite Cylindrical Shells

Authors: Sthanu Mahadev, Wen Chan, Melanie Lim

Abstract:

Anisotropy-dominated continuous-fiber composite materials have garnered attention in numerous mechanical and aerospace structural applications. Tailored mechanical properties in advanced composites can exhibit superiority in terms of stiffness-to-weight ratio, strength-to-weight ratio, and low-density characteristics, coupled with significant improvements in fatigue resistance compared to their metallic counterparts. Extensive research has demonstrated their core potential as more than mere lightweight substitutes for conventional materials. Prior work by Mahadev and Chan focused on formulating a modified composite shell theory based prognosis methodology for investigating the structural response of thin-walled circular cylindrical shell type composite configurations under in-plane mechanical loads. The prime motivation for developing this theory stemmed from its capability to generate simple yet accurate closed-form analytical results that can efficiently characterize circular composite shell construction. It showcased the development of a novel mathematical framework to analytically identify the location of the centroid for thin-walled, open cross-section, curved composite shells characterized by circumferential arc angle, thickness-to-mean-radius ratio, and total laminate thickness. Ply stress variations for curved cylindrical shells were analytically examined under centric tensile and bending loading. This work presents a cost-effective, small-platform experimental methodology that takes advantage of the full-field measurement capability of digital image correlation (DIC) for an accurate assessment of key mechanical parameters such as in-plane stresses, strains, and centroid location. Mechanical property measurement of advanced composite materials can be challenging due to their anisotropy and complex failure mechanisms. 
Full-field displacement measurements are well suited for characterizing the mechanical properties of composite materials because of the complexity of their deformation. This work encompasses the fabrication of a set of curved cylindrical shell coupons, the design and development of a novel test fixture, and an innovative experimental methodology that can very accurately predict the location of the centroid in such curved composite cylindrical strips by employing a DIC-based strain measurement technique. The percentage difference between the experimental centroid measurements and the previously derived analytical centroid estimates is small, indicating good agreement. The developed modified shell theory provides the capability to understand the fundamental behavior of thin-walled cylindrical shells and offers the potential to open novel avenues for understanding the physics of such structures at the laminate level.
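As a minimal numerical illustration (not the authors' modified shell theory), the centroid of an idealized thin, homogeneous circular-arc cross-section of mean radius R and half arc angle α lies at a distance R·sin(α)/α from the center of curvature; the analytical framework in the paper generalizes this kind of closed-form result to laminated shells:

```python
import math

def arc_centroid_offset(R, half_angle):
    """Centroid distance from the center of curvature for a thin,
    homogeneous circular arc of mean radius R subtending an angle of
    2*half_angle (radians). For laminated composites the effective
    centroid shifts with the layup, which the shell theory captures."""
    if half_angle == 0.0:
        return R  # degenerate case: the arc collapses to a point on the circle
    return R * math.sin(half_angle) / half_angle

# A quarter-circle strip (half-angle pi/4) of mean radius 50 mm:
R = 50.0  # mm
print(arc_centroid_offset(R, math.pi / 4))  # ≈ 45.02 mm
```

As the arc angle grows toward a half circle, the centroid moves inward (2R/π for a half circle), which is the geometric effect the DIC-based strain measurements locate experimentally.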

Keywords: anisotropy, composites, curved cylindrical shells, digital image correlation

Procedia PDF Downloads 283
396 Energy Reclamation in Micro Cavitating Flow

Authors: Morteza Ghorbani, Reza Ghorbani

Abstract:

The cavitation phenomenon has attracted much attention in mechanical and biomedical technologies. Despite the simplicity and mostly low cost of the devices generating cavitation bubbles, the physics behind the generation and collapse of these bubbles, particularly at the micro/nano scale, is still not well understood. In the chemical industry, micro/nano bubble generation is expected to be applicable to the development of porous materials such as microcellular plastic foams. Moreover, it has been demonstrated that the presence of micro/nano bubbles on a surface reduces the adsorption of proteins; thus, micro/nano bubbles can act as antifouling agents. Micro and nano bubbles have also been employed in water purification, froth flotation, and even in sonofusion, which has not been completely validated. Small bubbles can also be generated using micro-scale hydrodynamic cavitation. In this study, in contrast to the studies available in the literature, we propose a novel micro-scale approach utilizing the energy produced during the interaction between the spray affected by the hydrodynamic cavitating flow and a thin aluminum plate. As the scale decreases, cavitation effects become more significant. It is clearly shown that with the aid of hydrodynamic cavitation generated inside micro/mini-channels, together with optimization of the distance between the tip of the microchannel configuration and the solid surface, surface temperatures can be increased by up to 50 °C under the conditions of this study. The temperature rise on the surfaces near the collapsing small bubbles was exploited for small-scale energy harvesting, such that miniature, cost-effective, and environmentally friendly energy-harvesting devices can be developed. Such devices will not require any external power or moving parts, in contrast to common energy-harvesting devices such as those involving piezoelectric materials and micro engines. 
Energy harvesting from thermal energy has been widely exploited to achieve energy savings and clean technologies. We propose a cost-effective and environmentally friendly solution for growing individual energy needs through the energy application of cavitating flows. The necessary power for consumer devices, such as cell phones and laptops, could be provided using this approach. Thus, this approach has the potential to solve personal energy needs in an inexpensive and environmentally friendly manner and could trigger a paradigm shift in energy harvesting.

Keywords: cavitation, energy, harvesting, micro scale

Procedia PDF Downloads 173
395 Dynamic Analysis and Clutch Adaptive Prefill in Dual Clutch Transmission

Authors: Bin Zhou, Tongli Lu, Jianwu Zhang, Hongtao Hao

Abstract:

Dual clutch transmissions (DCTs) offer high comfort in terms of gearshift quality. Hydraulic multi-disk clutches are the key components of a DCT, and their engagement determines the shifting comfort. The prefill of the clutches requires an initial engagement at which the clutches just contact each other but do not yet transmit substantial torque from the engine; this initial clutch engagement point is called the touch point. Open-loop control is typically implemented for the clutch prefill; many uncertainties, such as oil temperature and clutch wear, significantly affect the prefill, potentially resulting in an inappropriate touch point. Underfill causes engine flaring during the gearshift, while overfill causes clutch tie-up, both of which deteriorate the shifting comfort of a DCT. Therefore, it is important to give the clutch prefill an adaptive capability with respect to these uncertainties. In this paper, a dynamic model of the hydraulic actuator system, including the variable force solenoid and the clutch piston, is presented and validated by a test. Subsequently, the open-loop clutch prefill is simulated based on the proposed model. Two control parameters of the prefill, the fast fill time and the stable fill pressure, are analyzed with regard to their impact on the prefill: the former strongly affects the pressure transients, while the latter directly influences the touch point. Finally, an adaptive method is proposed for the clutch prefill during gear shifting, in which the clutch fill control parameters are adjusted adaptively and continually. The adaptive strategy changes the stable fill pressure according to the current clutch slip during a gearshift, improving the next prefill process: the stable fill pressure is increased in proportion to the clutch slip in the case of underfill and decreased by a constant value in the case of overfill. The entire strategy is designed in Simulink/Stateflow and implemented in the transmission control unit with optimization. 
Road vehicle test results have shown that the strategy realizes its adaptive capability and improves the shifting comfort.
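The adaptation law described above can be sketched in a few lines; the gains, pressure limits, and units below are illustrative assumptions, not the paper's calibration:

```python
def adapt_fill_pressure(p_stable, slip, tie_up,
                        gain=0.02, step=0.05, p_min=1.0, p_max=6.0):
    """One adaptation step for the stable fill pressure (bar, assumed).

    slip   : measured clutch slip during the last gearshift (rad/s)
    tie_up : True if the last shift showed clutch tie-up (overfill)

    Underfill (excess slip) raises the pressure in proportion to the
    slip; overfill lowers it by a constant step, mirroring the strategy
    in the abstract. The result is clamped to a plausible valve range.
    """
    if tie_up:
        p_stable -= step            # overfill: back off by a fixed amount
    elif slip > 0.0:
        p_stable += gain * slip     # underfill: correct proportionally
    return min(max(p_stable, p_min), p_max)

# underfill with 10 rad/s residual slip raises the setpoint slightly
print(adapt_fill_pressure(3.0, slip=10.0, tie_up=False))
```

Called once per gearshift, the setpoint converges toward the touch point as oil temperature and clutch wear drift.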

Keywords: clutch prefill, clutch slip, dual clutch transmission, touch point, variable force solenoid

Procedia PDF Downloads 293
394 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building

Authors: Zehra Aybike Kılıç, Alpin Köknel Yener

Abstract:

Daylight is a powerful driver in terms of improving human health, enhancing productivity, and creating sustainable solutions by minimizing energy demand. A proper daylighting system not only provides a pleasant and attractive visual and thermal environment but also reduces lighting energy consumption and heating/cooling energy loads through the optimization of aperture size, glazing type, and solar control strategy, which are the major design parameters of daylighting system design. Particularly in high-rise buildings, where large openings that allow maximum daylight and views out are preferred, evaluation of daylight performance by considering the major parameters of the building envelope design becomes crucial for ensuring occupants’ comfort and improving energy efficiency. Moreover, it is increasingly necessary to examine the daylighting design of high-rise residential buildings, considering the share of residential buildings in the construction sector, the duration of occupation, and changing space requirements. This study aims to identify a proper daylighting design solution considering window area, glazing type, and solar control strategy for a high-rise residential building in terms of visual comfort and lighting energy efficiency. The dynamic simulations are carried out in DIVA for Rhino version 4.1.0.12. The results are evaluated with Daylight Autonomy (DA) to demonstrate daylight availability in the space and Daylight Glare Probability (DGP) to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption in each scenario is analyzed to determine the optimum solution, which reduces lighting energy consumption while optimizing daylight performance. 
The results revealed that reducing lighting energy consumption while providing visual comfort conditions in buildings is only possible with proper daylighting design decisions regarding glazing type, transparency ratio, and solar control device.
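The DA metric used above reduces to a simple count over an annual hourly simulation. A minimal sketch, assuming a 300 lx target (a common DA threshold, not necessarily the one used in the study):

```python
def daylight_autonomy(illuminance_lx, occupied, threshold_lx=300.0):
    """Daylight Autonomy: fraction of occupied hours in which daylight
    alone meets the illuminance target at a sensor point.

    illuminance_lx : hourly daylight illuminance series from simulation
    occupied       : parallel boolean occupancy schedule
    """
    met = hours = 0
    for lx, occ in zip(illuminance_lx, occupied):
        if occ:
            hours += 1
            if lx >= threshold_lx:
                met += 1
    return met / hours if hours else 0.0

# toy series: 4 occupied hours, 3 of them at or above 300 lx
print(daylight_autonomy([500, 350, 120, 800], [True] * 4))  # 0.75
```

In practice the simulation engine (DIVA here) produces the hourly illuminance series per sensor point; DGP requires a separate glare evaluation and cannot be derived from illuminance alone.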

Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort

Procedia PDF Downloads 159
393 Hydrogeomatic System for the Economic Evaluation of Damage by Flooding in Mexico

Authors: Alondra Balbuena Medina, Carlos Diaz Delgado, Aleida Yadira Vilchis Fránces

Abstract:

In Mexico, news is disseminated each year about the ravages of floods: the total loss of housing, damage to fields, and increases in food costs derived from lost harvests, coupled with health problems such as skin infections, as well as social problems such as delinquency and damage to educational institutions and the population in general. Flooding is a consequence of heavy rains, tropical storms, and hurricanes that generate excess water in drainage systems exceeding their capacity. In urban areas, heavy rains can be one of the main causes of flooding, in addition to excessive precipitation, dam breakage, and human activities, for example, excessive garbage in the drains. In agricultural areas, floods can affect large expanses of cultivated land. It should be mentioned that, for both types of areas, one of the significant impacts of floods is that they can permanently affect the livelihoods of many families and cause damage to their workplaces, such as farmland, commercial or industrial areas, and places where services are provided. In recent years, Information and Communication Technologies (ICT) have undergone accelerated development, reflected in exponential growth in innovation and, as a result, the daily generation of new technologies, updates, and applications. Innovation in the development of information technology applications has had an impact on all areas of human activity: it influences every aspect of individuals’ lives, reconfiguring the way the world is perceived and analyzed and the way people interrelate as individuals and as a society, in the economic, political, social, cultural, educational, and environmental spheres. 
Therefore, the present work describes the creation of a system for calculating flood costs for housing areas, retail establishments, and agricultural areas of the Mexican Republic, based on the use and application of geomatic tools, which can be useful to the public, educational, and private sectors. The geoinformatic tool was constructed from two different points of view: the geoinformatic one (design and development of GIS software) and a flood damage validation methodology, integrated so that the tool provides the user with a monetary estimate of the effects caused by floods. The functionality of the application was corroborated with information from the period 2000-2014. For the years 2000 to 2009, only the agricultural and housing areas were analyzed; information on commercial establishments was incorporated for the period 2010-2014. The method proposed for this research project is a fundamental contribution to society, in addition to the tool itself. In summary, conceiving the problems of the physical-geographical environment from the point of view of spatial analysis makes it possible to offer different solution alternatives and also to open new avenues for academia and research.

Keywords: floods, technological innovation, monetary estimation, spatial analysis

Procedia PDF Downloads 198
392 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of ≳65,000 color templates for comparison with observed objects. The method aims at extracting the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. 
The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
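The core of template-based classification can be sketched with a toy chi-square match; the tiny template library below is entirely hypothetical (real libraries hold tens of thousands of entries), and a full probabilistic treatment would turn the chi-square values into the probability density functions mentioned above via exp(-χ²/2):

```python
import numpy as np

# Hypothetical template library: each row is one template's colors in
# four made-up filter pairs. Values are invented for illustration only.
templates = {
    "star":   np.array([[0.0, 0.1, 0.0, 0.1], [0.2, 0.0, 0.1, 0.0]]),
    "galaxy": np.array([[1.0, 0.9, 1.1, 1.0], [0.8, 1.0, 0.9, 1.1]]),
    "quasar": np.array([[-1.0, -0.9, -1.1, -1.0]]),
}

def classify(colors, sigma):
    """Score each class by its best-fitting (minimum chi-square) template
    given measurement error sigma; return the winning class and scores."""
    chi2 = {cls: float(np.min(np.sum(((lib - colors) / sigma) ** 2, axis=1)))
            for cls, lib in templates.items()}
    return min(chi2, key=chi2.get), chi2

best, scores = classify(np.array([1.05, 0.95, 1.0, 1.1]), sigma=0.1)
print(best)  # galaxy-like colors match the galaxy templates best
```

Redshift estimation extends this idea by letting the galaxy and quasar templates depend on redshift, so the chi-square surface over redshift yields both the estimate and its error.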

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 52
391 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic

Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink

Abstract:

Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency, and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge’s vibrations as realistically as possible; on the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure’s vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow the application of the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations: an additional damping factor depending on the bridge span, which is intended to account for the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations. 
However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured ones, while in other cases they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure considering the vehicle-bridge interaction when using the MLM are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two recently published alternative formulations derived from analytical approaches. For a catalogue of 65 existing bridges in Austria in steel, concrete, or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow assessing the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results can reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than by using the normative specifications.
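The MLM itself is compact enough to sketch: the bridge is reduced to a simply supported Bernoulli beam, the train to constant point loads crossing at speed v, and the response follows from modal superposition. All parameter values below are illustrative assumptions, not taken from the bridge catalogue or the trains in the study:

```python
import math

def mlm_midspan_response(L, EI, mu, zeta, axle_loads, axle_offsets, v,
                         n_modes=5, dt=1e-3):
    """Moving load model: midspan deflection history of a simply
    supported beam (span L, bending stiffness EI, mass per length mu,
    modal damping ratio zeta) crossed by point loads at speed v.
    Sine-mode superposition with semi-implicit Euler time stepping."""
    omega = [(n * math.pi / L) ** 2 * math.sqrt(EI / mu)
             for n in range(1, n_modes + 1)]
    m_modal = mu * L / 2.0               # modal mass of each sine mode
    q = [0.0] * n_modes                  # modal displacements
    qd = [0.0] * n_modes                 # modal velocities
    t_end = (L + max(axle_offsets)) / v  # last axle leaves the span
    w_mid, t = [], 0.0
    while t <= t_end:
        for n in range(n_modes):
            f = 0.0
            for P, d in zip(axle_loads, axle_offsets):
                x = v * t - d            # axle position on the bridge
                if 0.0 <= x <= L:
                    f += P * math.sin((n + 1) * math.pi * x / L)
            acc = f / m_modal - 2 * zeta * omega[n] * qd[n] \
                  - omega[n] ** 2 * q[n]
            qd[n] += acc * dt
            q[n] += qd[n] * dt
        w_mid.append(sum(q[n] * math.sin((n + 1) * math.pi / 2)
                         for n in range(n_modes)))
        t += dt
    return w_mid

# single 195 kN axle crossing a 20 m span slowly (quasi-static limit)
w = mlm_midspan_response(L=20.0, EI=2e10, mu=2e4, zeta=0.02,
                         axle_loads=[195e3], axle_offsets=[0.0], v=2.0)
```

The additional damping method enters simply as an increased zeta; a multi-body model would replace the constant loads P with the contact forces of sprung and unsprung masses.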

Keywords: Additional Damping Method, Bridge Dynamics, High-Speed Railway Traffic, Vehicle-Bridge-Interaction

Procedia PDF Downloads 150
390 Fuzzy Decision Making to the Construction Project Management: Glass Facade Selection

Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic

Abstract:

In this study, a fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets is capable of reflecting construction managers’ reliability level over subjective judgments, and thus the robustness of the system can be achieved. An α-cuts method was utilized for discretizing the fuzzy sets in the FLA. This method can communicate all uncertain information in the optimization process, taking into account the values of this information. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios related to various levels of economic aspects of valid decision making in construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing the materials of glass facades, variants were defined. The development of the FLA for CPM included the relevant construction project stakeholders, who were involved in defining the criteria used to evaluate each variant. Using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method, a comparison of the glass facade variants was conducted. In this way, a ranking of the variants according to their priority for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia. The newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions in the material selection of building facades, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology compared to the old one is that it includes the subjective side of managers’ decisions, an inevitable factor in any decision making. 
The proposed approach can help construction projects managers to identify the desired type of glass facade according to their preference and practical conditions, as well as facilitate in-depth analyses of tradeoffs between economic efficiency and architectural design.
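The crisp core of DEMATEL, once the fuzzy judgments have been defuzzified, fits in a few lines. The 4x4 direct-influence matrix below is a hypothetical example (it does not reproduce the study's criteria or values):

```python
import numpy as np

# Hypothetical defuzzified direct-influence matrix for 4 criteria
# (e.g. cost, thermal performance, durability, aesthetics);
# entry [i, j] is how strongly criterion i influences criterion j, 0-4.
D = np.array([[0, 3, 2, 1],
              [2, 0, 3, 1],
              [1, 2, 0, 2],
              [1, 1, 1, 0]], dtype=float)

# Normalize by the largest row sum, then form the total-relation matrix
# T = N (I - N)^-1, which sums direct and all indirect influence paths.
N = D / D.sum(axis=1).max()
T = N @ np.linalg.inv(np.eye(len(D)) - N)

r = T.sum(axis=1)    # influence each criterion dispatches
c = T.sum(axis=0)    # influence each criterion receives
prominence = r + c   # overall importance of the criterion
relation = r - c     # net cause (+) or net effect (-)
print(np.argsort(-prominence))  # criteria ranked by importance
```

The fuzzy variant repeats this computation on the α-cut bounds of the fuzzy influence scores, so the prominence and relation values come out as intervals rather than points.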

Keywords: construction projects management, DEMATEL, fuzzy logic approach, glass façade selection

Procedia PDF Downloads 115
389 The U.S. Missile Defense Shield and Global Security Destabilization: An Inconclusive Link

Authors: Michael A. Unbehauen, Gregory D. Sloan, Alberto J. Squatrito

Abstract:

Missile proliferation and global stability are intrinsically linked. Missile threats continually appear at the forefront of global security issues. North Korea’s recently demonstrated nuclear and intercontinental ballistic missile (ICBM) capabilities, for the first time since the Cold War, renewed public interest in strategic missile defense capabilities. To protect against limited ICBM attacks from so-called rogue actors, the United States developed the Ground-based Midcourse Defense (GMD) system. This study examines whether the GMD missile defense shield has contributed to a safer world or triggered a new arms race. Based upon increased missile-related developments and the lack of adherence to international missile treaties, it is generally perceived that the GMD system is a destabilizing factor for global security. By examining the current state of arms control treaties as well as existing missile arsenals and ongoing efforts in technologies to overcome U.S. missile defenses, this study seeks to analyze the contribution of GMD to global stability. A thorough investigation cannot ignore that, through the establishment of this limited capability, the U.S. violated longstanding, successful weapons treaties and caused concern among states that possess ICBMs. GMD capability contributes to the perception that ICBM arsenals could become ineffective, creating an imbalance in favor of the United States and leading to increased global instability and tension. While blame for the deterioration of global stability and non-adherence to arms control treaties is often placed on U.S. missile defense, the facts do not necessarily support this view. The notion of a renewed arms race due to GMD is supported neither by current missile arsenals nor by the inevitable development of new and enhanced missile technology, including multiple independently targetable reentry vehicles (MIRVs), maneuverable reentry vehicles (MaRVs), and hypersonic glide vehicles (HGVs). 
The methodology in this study encapsulates a period of time, pre- and post-GMD introduction, while analyzing international treaty adherence, missile counts and types, and research in new missile technologies. The decline in international treaty adherence, coupled with a measurable increase in the number and types of missiles or research in new missile technologies during the period after the introduction of GMD, could be perceived as a clear indicator of GMD contributing to global instability. However, research into improved technology (MIRV, MaRV and HGV) prior to GMD, as well as a decline of various global missile inventories and testing of systems during this same period, would seem to invalidate this theory. U.S. adversaries have exploited the perception of the U.S. missile defense shield as a destabilizing factor as a pretext to strengthen and modernize their militaries and justify their policies. As a result, it can be concluded that global stability has not significantly decreased due to GMD; but rather, the natural progression of technological and missile development would inherently include innovative and dynamic approaches to target engagement, deterrence, and national defense.

Keywords: arms control, arms race, global security, GMD, ICBM, missile defense, proliferation

Procedia PDF Downloads 124
388 Inverse Saturable Absorption in Non-linear Amplifying Loop Mirror Mode-Locked Fiber Laser

Authors: Haobin Zheng, Xiang Zhang, Yong Shen, Hongxin Zou

Abstract:

This research focuses on mode-locked fiber lasers with a non-linear amplifying loop mirror (NALM). Although these lasers have shown potential, they are still limited in terms of repetition rate. The self-starting of mode-locking in a NALM is influenced by the cross-phase modulation (XPM) effect, which has not been thoroughly studied. The aim of this study is two-fold: first, to overcome the difficulties associated with increasing the repetition rate of mode-locked fiber lasers with a NALM; second, to analyze the influence of XPM on the self-starting of mode-locking. The power distributions of the two counterpropagating beams in the NALM and their differential non-linear phase shift (NPS) accumulations are calculated, and the analysis is conducted from the perspective of NPS accumulation. The differential NPSs for continuous-wave (CW) light and for pulses in the fiber loop are compared to understand the inverse saturable absorption (ISA) mechanism during pulse formation in the NALM. The study reveals a difference in the differential NPSs between CW light and pulses in the fiber loop of the NALM. This difference leads to an ISA mechanism that has not been extensively studied in artificial saturable absorbers. The ISA in the NALM provides an explanation for experimentally observed phenomena, such as active mode-locking initiation through tapping the fiber or fine-tuning the light polarization. These findings have important implications for optimizing the design of the NALM and reducing the self-starting threshold of high-repetition-rate mode-locked fiber lasers. This study contributes to the theoretical understanding of NALM mode-locked fiber lasers by exploring the ISA mechanism and its impact on the self-starting of mode-locking, filling a gap in the existing knowledge regarding the XPM effect in the NALM and its role in pulse formation. 
The findings contribute to the optimization of NALM design and the reduction of self-starting threshold, which are essential for achieving high-repetition-rate operation in fiber lasers. Further research in this area can lead to advancements in the field of mode-locked fiber lasers with NALM.
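The differential NPS at the heart of the NALM can be illustrated with a lumped model (an idealization with invented example numbers, not the paper's full analysis): the beam amplified at the loop entrance accumulates non-linear phase at the boosted power over the whole loop, while the counterpropagating beam is amplified only at the exit:

```python
def differential_nps(gamma, loop_length, p_in, gain):
    """Differential non-linear phase shift between the counterpropagating
    beams of a NALM with a lumped gain at one end of the loop.

    gamma       : fiber non-linear coefficient, 1/(W*m)
    loop_length : fiber loop length, m
    p_in        : instantaneous input power per beam, W
    gain        : lumped amplifier power gain
    """
    phi_co = gamma * (gain * p_in) * loop_length   # amplified on entry
    phi_counter = gamma * p_in * loop_length       # amplified on exit
    return phi_co - phi_counter  # = gamma * p_in * L * (gain - 1)

gamma = 3e-3   # 1/(W*m), a typical order of magnitude for silica fiber
L_loop = 10.0  # m
g = 4.0        # assumed lumped gain

# A pulse's peak power exceeds the CW power at equal average power, so
# it accrues a larger differential NPS: the saturable-absorber action.
dphi_cw = differential_nps(gamma, L_loop, p_in=0.01, gain=g)
dphi_pulse = differential_nps(gamma, L_loop, p_in=1.0, gain=g)
```

The ISA behavior discussed in the abstract arises when XPM between the beams modifies this simple picture, so that the transmission first decreases with power before the usual saturable-absorber increase sets in.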

Keywords: inverse saturable absorption, NALM, mode-locking, non-linear phase shift

Procedia PDF Downloads 85
387 Bulk-Density and Lignocellulose Composition: Influence of Changing Lignocellulosic Composition on Bulk-Density during Anaerobic Digestion and Implication of Compacted Lignocellulose Bed on Mass Transfer

Authors: Aastha Paliwal, H. N. Chanakya, S. Dasappa

Abstract:

Lignocellulose, as an alternative feedstock for biogas production, has been an active area of research. However, lignocellulose poses many operational difficulties: widespread variation in the structural organization of the lignocellulosic matrix, limited amenability to degradation, and low bulk density, to name a few. Among these, the low bulk density of the lignocellulosic feedstock is crucial to process operation and optimization. Low bulk densities render the feedstock floating in conventional liquid/wet digesters and restrict the maximum achievable organic loading rate (OLR) in the reactor, decreasing the power density of the reactor. During digestion, however, lignocellulose undergoes very high compaction (up to 26 times the feeding density). The low feeding density thus first limits the achievable OLR, and the subsequent compaction during digestion renders the reactor space underutilized and imposes significant mass transfer limitations. The objective of this paper was to understand the effects of compacting lignocellulose on mass transfer and the influence of the loss of different components on the bulk density, and hence the structural integrity, of the digesting lignocellulosic feedstock. Ten different lignocellulosic feedstocks (monocots and dicots) were digested anaerobically in a fed-batch leach bed reactor, a solid-state stratified bed reactor (SSBR). Percolation rates of the recycled bio-digester liquid (BDL) were also measured during the reactor run period to understand the implications of compaction on mass transfer. After 95 days, in a destructive sampling, lignocellulosic feedstocks digested at different SRTs were investigated to quantify the weekly changes in bulk density and lignocellulosic composition, and the percolation rate data were compared to the bulk density data. 
Results from the study indicate that the losses of hemicellulose (r² = 0.76), hot water extractives (r² = 0.68), and oxalate extractives (r² = 0.64) had the dominant influence on changing the structural integrity of the studied lignocellulose during anaerobic digestion. Further, the feeding bulk density of the lignocellulose can be maintained between 300-400 kg/m³ to achieve a higher OLR, while a bulk density of 440-500 kg/m³ incurs significant mass transfer limitations for highly compacting beds of dicots.
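The reported r² values are coefficients of determination from simple linear fits of component loss against the change in bulk density. A minimal computation on invented data (the series below are hypothetical, not the study's measurements):

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit y ~ a + b*x,
    computed as the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy * sxy / (sxx * syy)

# hypothetical series: hemicellulose lost (%) vs. bulk density gain (kg/m3)
hemi_loss = [5, 12, 18, 25, 33, 40]
bd_gain = [40, 95, 150, 180, 260, 300]
print(round(r_squared(hemi_loss, bd_gain), 2))
```

An r² of 0.76 for hemicellulose loss means roughly three quarters of the variance in bulk density change tracks that single component, which is why it is called dominant.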

Keywords: anaerobic digestion, bulk density, feed compaction, lignocellulose, lignocellulosic matrix, cellulose, hemicellulose, lignin, extractives, mass transfer

Procedia PDF Downloads 142
386 Backwash Optimization for Drinking Water Treatment Biological Filters

Authors: Sarra K. Ikhlef, Onita Basu

Abstract:

Natural organic matter (NOM) removal efficiency in drinking water treatment biological filters can be highly influenced by backwashing conditions. Backwashing removes the accumulated biomass and particles in order to regenerate the biological filters' removal capacity and prevent excessive headloss buildup. A lab-scale system consisting of 3 biological filters was used in this study to examine the implications of different backwash strategies on biological filtration performance. The backwash procedures were evaluated based on their impacts on dissolved organic carbon (DOC) removal, the biological filters’ biomass, backwash water volume usage, and particle removal. Results showed that under nutrient-limited conditions, the simultaneous use of air and water under collapse-pulsing conditions led to a DOC removal of 22%, which was significantly higher (p < 0.05) than the 12% removal observed under water-only backwash conditions. Under nutrient-supplemented conditions, employing a bed expansion of 20% instead of the 30% reference bed expansion, while using the same water volume, led to similar DOC removals; on the other hand, utilizing a higher bed expansion (40%) led to significantly lower DOC removals (23%). Also, a backwash strategy that reduced the backwash water volume usage by about 20% resulted in DOC removals similar to those observed with the reference backwash. The backwash procedures investigated in this study showed no consistent impact on the biological filters' biomass concentrations as measured by the phospholipid and adenosine tri-phosphate (ATP) methods; moreover, neither of these two analyses showed a direct correlation with DOC removal. On the other hand, dissolved oxygen (DO) uptake showed a direct correlation with DOC removal. The addition of the extended terminal subfluidization wash (ETSW) demonstrated no apparent impact on DOC removals, and ETSW successfully eliminated the filter ripening sequence (FRS). 
As a result, the additional water usage from implementing ETSW was compensated by water savings after restart. Results from this study provide insight for researchers and water treatment utilities on how to better optimize the backwashing procedure and, in turn, the overall biological filtration process.
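The removal figures above reduce to a simple mass-balance calculation. The sketch below uses hypothetical influent and effluent DOC concentrations chosen only to illustrate the arithmetic, not the study's raw data.

```python
# Sketch: computing DOC removal efficiency for a biological filter run,
# as used to compare backwash strategies (values below are hypothetical,
# not the study's measurements).

def doc_removal_percent(influent_doc_mg_l, effluent_doc_mg_l):
    """Percent dissolved organic carbon removed across the filter."""
    return 100.0 * (influent_doc_mg_l - effluent_doc_mg_l) / influent_doc_mg_l

# Example: collapse-pulsing vs. water-only backwash (hypothetical numbers)
collapse_pulsing = doc_removal_percent(5.0, 3.9)
water_only = doc_removal_percent(5.0, 4.4)
print(round(collapse_pulsing, 1), round(water_only, 1))
```

A utility could log these per filter run to track how a backwash change shifts removal over time.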

Keywords: biological filtration, backwashing, collapse pulsing, ETSW

Procedia PDF Downloads 254
385 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit

Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek

Abstract:

In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and is considered a key challenge in the development of latent heat storage. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach to calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of costs and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers. However, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived from energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment.
The model allows prediction of the thermal performance of latent thermal energy storage systems with complex HEX geometries at considerably low computational effort.
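The segment-wise coupling described above, in which the heat transfer fluid (HTF) temperature is derived from energy conservation and the boundary condition is updated per segment, can be sketched as a marching energy balance. The constant per-segment conductance UA and all numbers below are illustrative assumptions, not the paper's quasi-stationary resistances.

```python
# Minimal sketch of a segment-wise HTF energy balance: the fluid
# temperature is updated segment by segment along the tube, with a fixed
# conductance UA per segment standing in for the quasi-stationary
# conductive resistance to the PCM front. All values are hypothetical.

def htf_temperatures(t_in, t_melt, ua_segment, m_dot, cp, n_segments):
    """Return the HTF temperature entering each segment along the tube."""
    temps = [t_in]
    for _ in range(n_segments):
        q = ua_segment * (temps[-1] - t_melt)       # W, heat released to PCM
        temps.append(temps[-1] - q / (m_dot * cp))  # K, energy balance on HTF
    return temps

# Hypothetical charging case: HTF enters at 340 K, PCM solidifies at 300 K
profile = htf_temperatures(t_in=340.0, t_melt=300.0, ua_segment=5.0,
                           m_dot=0.05, cp=4180.0, n_segments=10)
print([round(t, 2) for t in profile[:3]])
```

In the full model the conductance would itself change each time step as the solidified layer grows; here it is frozen to keep the marching structure visible.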

Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage

Procedia PDF Downloads 248
384 Impact of the COVID-19 Pandemic and Social Isolation on the Clients’ Experiences in Counselling and their Access to Services: Perspectives of Violence Against Women Program Staff - A Qualitative Study

Authors: Habiba Nahzat, Karen Crow, Lisa Manuel, Maria Huijbregts

Abstract:

Background and Rationale: The World Health Organization (WHO) declared COVID-19 a pandemic on March 11, 2020. Shortly after, the Ontario provincial and Toronto municipal governments also released multiple directives that led to the mass closure of businesses in both the public and private sectors. Recent research has identified connections between Intimate Partner Violence (IPV) and COVID-19-related stressors, especially because of lockdown and social isolation measures. The psychological impacts of lengthy seclusion, coupled with disconnection from extended family and diminished support services, can take a toll on families at risk and may increase mental health issues and the prevalence of IPV. Research Question: Thus, the purpose of the study was to understand the perspective of Violence Against Women (VAW) program staff on the impact of the COVID-19 pandemic; we especially wanted to understand staff views of how the restrictions affected clients' counseling experiences and their ability to access services in general. The study also aimed to examine VAW program staff experiences of remote work and explore how the pandemic restriction measures affected their program operations' ability to support their clients and each other. Method: A cross-sectional, descriptive qualitative study was conducted with a purposive sample of nine VAW program staff: eight VAW counselors and one VAW manager. Prior to data collection, program staff collaborated in the development of the study purpose, interview questions, and methodology. Ethics approval was obtained from the sponsoring organization's Research Ethics Board. In-depth individual interviews were conducted with study participants using a semi-structured interview questionnaire. Brief demographic information was also collected prior to the interview. Descriptive statistics were used to analyze the quantitative data, and the qualitative data were analyzed by thematic content analysis.
Results: Findings from this study indicate that, based on VAW staff perspectives, the COVID-19 pandemic restrictions had an adverse impact on clients seeking VAW services. Program staff reported a perceived increase in abuse among women, especially emotional and financial abuse, and in experiences of isolation and trauma. Findings further highlight the challenges women experienced when trying to access services in general, as well as counseling and legal services specifically; this was perceived to be more prominent among newcomers and marginalized women. The study also revealed client and staff challenges when participating in virtual counseling, staff innovations and clients' creativity in accessing needed counseling, and how staff adapted over time to providing virtual support during the pandemic. Conclusion and Next Steps: This study builds upon existing evidence on the impact of COVID-19 restrictions on VAW. It may inform future research to better understand the association between the COVID-19 pandemic restrictions and VAW on a broader scale, and support possible short-term and long-term changes in the client experience and counselling practice.

Keywords: COVID-19, pandemic, virtual, violence against women (VAW)

Procedia PDF Downloads 176
383 CD97 and Its Role in Glioblastoma Stem Cell Self-Renewal

Authors: Niklas Ravn-Boess, Nainita Bhowmick, Takamitsu Hattori, Shohei Koide, Christopher Park, Dimitris Placantonakis

Abstract:

Background: Glioblastoma (GBM) is the most common and deadly primary brain malignancy in adults. Tumor propagation, brain invasion, and resistance to therapy critically depend on GBM stem-like cells (GSCs); however, the mechanisms that regulate GSC self-renewal are incompletely understood. Given the aggressiveness and poor prognosis of GBM, it is imperative to find biomarkers that could also translate into novel drug targets. Along these lines, we have identified a cell surface antigen, CD97 (ADGRE5), an adhesion G protein-coupled receptor (GPCR), that is expressed on GBM cells but is absent from non-neoplastic brain tissue. CD97 has been shown to promote invasiveness, angiogenesis, and migration in several human cancers, but its frequency of expression, its functional role in regulating GBM growth and survival, and its potential as a therapeutic target have not been investigated. Design: We assessed CD97 mRNA and protein expression in patient-derived GBM samples and cell lines using publicly available RNA-sequencing datasets and flow cytometry, respectively. To assess CD97 function, we generated shRNA lentiviral constructs that target a sequence in the CD97 extracellular domain (ECD). A scrambled shRNA (scr) with no predicted targets in the genome was used as a control. We evaluated CD97 shRNA lentivirally transduced GBM cells for Ki67, Annexin V, and DAPI. We also tested CD97 knockdown (KD) cells for their ability to self-renew using clonogenic tumorsphere formation assays. Further, we utilized synthetic antibodies (sAbs) generated against the ECD of CD97 to test for potential antitumor effects using patient-derived GBM cell lines. Results: CD97 mRNA was expressed at high levels in all GBM samples available in the TCGA cohort. We found high levels of surface CD97 protein expression in 6/6 patient-derived GBM cell cultures, but not in human neural stem cells. Flow cytometry confirmed downregulation of CD97 in CD97 shRNA lentivirally transduced cells.
CD97 KD induced a significant reduction in cell growth in 3 independent GBM cell lines representing mesenchymal and proneural subtypes, which was accompanied by reduced (~20%) Ki67 staining and increased (~30%) apoptosis. Incubation of GBM cells with sAbs (20 µg/ml) against the ECD of CD97 for 3 days induced GSC differentiation, as determined by the expression of GFAP and tubulin. Using three unique GBM patient-derived cultures, we found that CD97 KD attenuated the ability of GBM cells to initiate sphere formation by over 300-fold, consistent with an impairment in GSC self-renewal. Conclusion: Loss of CD97 expression in patient-derived GBM cells markedly decreases proliferation, induces cell death, and reduces tumorsphere formation. sAbs against the ECD of CD97 reduce tumorsphere formation, recapitulating the phenotype of CD97 KD and suggesting that sAbs that inhibit CD97 function exhibit anti-tumor activity. Collectively, these findings indicate that CD97 is necessary for the proliferation and survival of human GBM cells and identify CD97 as a promising, therapeutically targetable vulnerability in GBM.

Keywords: adhesion GPCR, CD97, GBM stem cell, glioblastoma

Procedia PDF Downloads 110
382 Effect of Proteoliposome Concentration on Salt Rejection Rate of Polysulfone Membrane Prepared by Incorporation of Escherichia coli and Halomonas elongata Aquaporins

Authors: Aysenur Ozturk, Aysen Yildiz, Hilal Yilmaz, Pinar Ergenekon, Melek Ozkan

Abstract:

Water scarcity is one of the most important environmental problems of the world today. Desalination is regarded as a promising solution to the drinking water problem of countries facing water shortages. Reverse osmosis membranes are widely used for desalination processes. Nanostructured biomimetic membrane production is one of the most challenging research subjects for improving the water filtration efficiency of membranes and for reducing the cost of desalination processes. There are several studies in the literature on the development of novel biomimetic nanofiltration membranes by incorporation of aquaporin Z molecules. Aquaporins are cell membrane proteins that allow the passage of water molecules and reject all other dissolved solutes. They are present in the cell membranes of most living organisms and provide high water passage capacity. In this study, GST (glutathione S-transferase) tagged E. coli aquaporin Z and H. elongata aquaporin proteins, which were previously cloned and characterized, were purified from E. coli BL21 cells and used for fabrication of a modified polysulfone (PS) membrane. Aquaporins were incorporated on the surface of the membrane by using 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) phospholipids as carrier liposomes. Aquaporin-containing proteoliposomes were immobilized on the surface of the membrane within an m-phenylene-diamine (MPD) and trimesoyl chloride (TMC) rejection layer. Water flux, salt rejection, and glucose rejection performances of the thin film composite membranes were tested using a dead-end reactor cell. In this study, the effects of proteoliposome concentration and filtration pressure on the water flux and salt rejection rate of the membranes were investigated. The type of aquaporin used for membrane fabrication and the flux and pressure applied for filtration were found to be important parameters affecting rejection rates.
The results suggest that optimization of the concentration of aquaporin carriers (proteoliposomes) on the membrane surface is necessary for the fabrication of effective composite membranes for different purposes.
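The flux and rejection metrics reported for the dead-end tests follow standard definitions, sketched below with hypothetical numbers (the study does not give these raw values).

```python
# Standard performance metrics for a thin film composite membrane tested
# in a dead-end cell. All inputs are hypothetical illustration values.

def water_flux_lmh(permeate_volume_l, membrane_area_m2, time_h):
    """Water flux in L m^-2 h^-1 (LMH) from a dead-end filtration run."""
    return permeate_volume_l / (membrane_area_m2 * time_h)

def salt_rejection_percent(feed_conc, permeate_conc):
    """Observed salt rejection from feed and permeate concentrations."""
    return 100.0 * (1.0 - permeate_conc / feed_conc)

# Hypothetical run: 0.5 L permeate through 12.5 cm^2 in 2 h, 10 -> 0.4 g/L salt
print(water_flux_lmh(0.5, 0.00125, 2.0))
print(salt_rejection_percent(10.0, 0.4))
```

Comparing these two numbers across proteoliposome concentrations is exactly the trade-off the abstract describes.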

Keywords: aquaporins, biomimmetic membranes, desalination, water treatment

Procedia PDF Downloads 178
381 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model

Authors: Tanu Khanuja, Harikrishnan N. Unni

Abstract:

The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be really helpful for understanding the injury mechanism. The majority of physical damage to living tissues is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments such as the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull from patient-specific MRI (magnetic resonance imaging). Semi-automatic segmentation of the head is performed using AMIRA software to extract the finer grooves of the brain. Maintaining accuracy requires a large number of mesh elements and, consequently, long computational times; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, model validation against the experimental literature is performed as well. Hard tissue such as the skull is modeled as elastic, whereas soft tissue such as the brain is modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and intracranial pressure distributions due to impact on different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. This work finds that back impact is more injurious to the head overall than impacts at the other sites. The present work should help in understanding the injury mechanism of traumatic brain injury more effectively.
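The soft-tissue model above uses a viscoelastic Prony series for the shear relaxation modulus, G(t) = G_inf + sum_i G_i * exp(-t / tau_i). The evaluation below is a worked illustration; the coefficients are placeholders, not the calibrated brain-tissue values from the study.

```python
import math

# Prony-series shear relaxation modulus, as commonly used for brain
# tissue in FE head models. Coefficients below are hypothetical.

def prony_shear_modulus(t, g_inf, terms):
    """terms: list of (G_i, tau_i) pairs; returns G(t) in the same units."""
    return g_inf + sum(g_i * math.exp(-t / tau_i) for g_i, tau_i in terms)

terms = [(1200.0, 0.01), (800.0, 0.1)]             # (Pa, s), hypothetical
print(prony_shear_modulus(0.0, 500.0, terms))      # instantaneous modulus
print(prony_shear_modulus(1.0, 500.0, terms))      # relaxed toward G_inf
```

At t = 0 the modulus is the instantaneous sum G_inf + G_1 + G_2; at long times it relaxes toward G_inf, which is what lets the tissue redistribute impact stress over milliseconds.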

Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress

Procedia PDF Downloads 141
380 Elaboration of Ceramic Metal Accident Tolerant Fuels by Additive Manufacturing

Authors: O. Fiquet, P. Lemarignier

Abstract:

Additive manufacturing may find numerous applications in the nuclear industry, for the same reasons as in other industries: to enlarge design possibilities and performance and to develop fabrication methods as a flexible route for future innovation. Additive Manufacturing (AM) applications in the design of structural metallic components for reactors are already developed at a high Technology Readiness Level (TRL). In the case of a Pressurized Water Reactor using uranium oxide fuel pellets, which are ceramics, the transposition of already optimized AM processes to UO₂ remains a challenge, and progress remains slow because, to our best knowledge, only a few laboratories have the capability of developing processes applicable to UO₂. After the Fukushima accident, numerous research fields emerged with the study of ATF (Accident Tolerant Fuel) concepts, which aim to improve fuel behaviour. One item concerns increasing the pellet thermal performance by, for example, the addition of a high-thermal-conductivity material into fissile UO₂. This additive phase may be metallic, and the end product then constitutes a CERMET composite. Innovative designs of an internal metallic framework are proposed based on predictive calculations. However, because the well-known reference pellet manufacturing methods impose many limitations, manufacturing such a composite remains an arduous task. Therefore, the AM process appears as a means of broadening the design possibilities of CERMET manufacturing. If the external form remains a standard cylindrical fuel pellet, the internal metallic design remains to be optimized based on process capabilities. This project also considers a limitation to a maximum of 10% metal by volume, a constraint imposed by neutron physics considerations. The AM technique chosen for this development is robocasting because of its simplicity and low-cost equipment.
It remains, however, a challenge to adapt a ceramic 3D printing process to the fabrication of UO₂ fuel. The investigation starts with a surrogate material, and the optimization of the slurry feedstock is based on alumina. The paper will present the first printing of an Al₂O₃-Mo CERMET and the expected transition from the ceramic-based alumina to the UO₂ CERMET.
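For batching a CERMET feedstock, the 10% metal volume limit mentioned above translates into a weight fraction. The conversion below is generic rule-of-mixtures arithmetic using approximate handbook densities, not values from the paper.

```python
# Converting a metal volume fraction into a weight fraction for slurry
# batching. Densities are approximate handbook values (g/cm^3).

def metal_weight_fraction(vol_frac_metal, rho_metal, rho_ceramic):
    """Weight fraction of metal in a two-phase CERMET, per unit volume."""
    m_metal = vol_frac_metal * rho_metal
    m_ceramic = (1.0 - vol_frac_metal) * rho_ceramic
    return m_metal / (m_metal + m_ceramic)

# 10 vol% molybdenum (~10.2 g/cm^3) in alumina (~3.95 g/cm^3)
wf = metal_weight_fraction(0.10, 10.2, 3.95)
print(round(100 * wf, 1))  # weight percent of Mo in the batch
```

Because Mo is far denser than alumina, a modest 10% by volume is a much larger share by weight, which matters when weighing out powders.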

Keywords: nuclear, fuel, CERMET, robocasting

Procedia PDF Downloads 44
379 A Good Start for Digital Transformation of the Companies: A Literature and Experience-Based Predefined Roadmap

Authors: Batuhan Kocaoglu

Abstract:

Nowadays, digital transformation is a hot topic in both service and production businesses. Companies that want to survive in the coming years must change how they do business. Industry leaders have started to extend backbone technologies such as ERP (Enterprise Resource Planning) with digital advances such as analytics, mobility, sensor-embedded smart devices, AI (Artificial Intelligence), and more. Selecting the appropriate technology for a given business problem is also a hot topic. Besides this, operating in the modern environment and fulfilling rapidly changing customer expectations require a digital transformation that changes the way the business runs and how it does business. Even though the term digital transformation is trendy, the literature is limited and covers only the philosophy rather than a solid implementation plan. Current studies urge firms to start their digital transformation, but few tell us how to do so. The huge investments involved, combined with blurred definitions and concepts, scare companies. The aim of this paper is to solidify the steps of digital transformation and offer a roadmap for companies and academicians. The proposed roadmap was developed based upon insights from a literature review, semi-structured interviews, and expert views to explore and identify crucial steps. We introduce our roadmap in the form of eight main steps: Awareness; Planning; Operations; Implementation; Go-live; Optimization; Autonomation; Business Transformation; comprising a total of 11 sub-steps with examples. This study also emphasizes four dimensions of digital transformation: readiness assessment; building organizational infrastructure; building technical infrastructure; maturity assessment. Finally, the roadmap relates the steps to three main terms used in the digital transformation literature: Digitization, Digitalization, and Digital Transformation.
The resulting model shows that 'business process' and 'organizational issues' should be resolved before technology decisions and 'digitization'. Companies can start their journey with solid steps, using the proposed roadmap to increase the likelihood of successful project implementation. The roadmap is also adaptable for relevant Industry 4.0 and enterprise application projects, and it will be useful for companies seeking to persuade top management to invest. Our results can serve as a baseline for further research on readiness assessment and maturity assessment.
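The eight main steps above lend themselves to a simple machine-readable encoding for tracking an organization's progress. The mapping of steps to the Digitization/Digitalization/Digital Transformation terms below is an illustrative assumption, not the paper's exact correspondence.

```python
# Encoding the proposed eight-step roadmap as an ordered structure so a
# company can track where it stands. The phase labels attached to each
# step are hypothetical, for illustration only.

ROADMAP = [
    ("Awareness", "digitization"),
    ("Planning", "digitization"),
    ("Operations", "digitalization"),
    ("Implementation", "digitalization"),
    ("Go-live", "digitalization"),
    ("Optimization", "digital transformation"),
    ("Autonomation", "digital transformation"),
    ("Business Transformation", "digital transformation"),
]

def next_step(completed):
    """Return the first roadmap step not yet completed, or None if done."""
    for step, _phase in ROADMAP:
        if step not in completed:
            return step
    return None

print(next_step({"Awareness", "Planning"}))  # -> Operations
```

Because the list is ordered, the encoding also enforces the paper's point that early steps cannot be skipped on the way to later ones.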

Keywords: digital transformation, digital business, ERP, roadmap

Procedia PDF Downloads 144
377 Structure-Guided Optimization of Sulphonamides as Gamma-Secretase Inhibitors for the Treatment of Alzheimer’s Disease

Authors: Vaishali Patil, Neeraj Masand

Abstract:

In older people, Alzheimer’s disease (AD) is proving to be a lethal disease. According to the amyloid hypothesis, aggregation of the amyloid β-protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of amyloid precursor protein (APP) by β-secretase (BACE) and γ-secretase (GS). Thus, in the treatment of AD, γ-secretase modulators (GSMs) are potentially disease-modifying, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ-secretase activity. This possibly avoids the known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-like ADMET filter, QSAR, and molecular docking analyses, was utilized to identify novel γ-secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs were synthesized. The results obtained by the in silico studies were validated by performing in vivo analysis. In the first step, behavioral assessment was carried out using the scopolamine-induced amnesia methodology. The same series was then evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate the changes in biochemical markers of Alzheimer’s disease such as lipid peroxidation (LPO), glutathione (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all the studies, donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. In the later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile.
It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity through inhibition of the acetylcholinesterase (AChE) enzyme as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.

Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR

Procedia PDF Downloads 232
377 The Effect of Different Concentrations of Extracting Solvent on the Polyphenolic Content and Antioxidant Activity of Gynura procumbens Leaves

Authors: Kam Wen Hang, Tan Kee Teng, Huang Poh Ching, Chia Kai Xiang, H. V. Annegowda, H. S. Naveen Kumar

Abstract:

Gynura procumbens (G. procumbens), commonly known as ‘sambung nyawa’ in Malaysia, is a well-known medicinal plant whose leaves are commonly used in folk medicine for controlling blood glucose and cholesterol levels as well as treating cancer. These medicinal properties are believed to be related to the polyphenolic content of G. procumbens extract; therefore, optimization of its extraction process is vital to obtain the highest possible antioxidant activity. The current study was conducted to investigate the effect of different concentrations of the extracting solvent (ethanol) on the polyphenolic content and antioxidant activities of G. procumbens leaf extract. The concentrations of ethanol used were 30-70%, with the temperature and time kept constant at 50°C and 30 minutes, respectively, using ultrasound-assisted extraction. The polyphenolic content of these extracts was quantified by the Folin-Ciocalteu colorimetric method, and the results were expressed as milligrams of gallic acid equivalent (mg GAE)/g. The phosphomolybdenum method and the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging assay were used to investigate the antioxidant properties of the extracts, and the results were expressed as milligrams of ascorbic acid equivalent (mg AAE)/g and as the effective concentration (EC50), respectively. Among the three different concentrations of ethanol studied (30%, 50%, and 70%), the 50% ethanolic extract showed a total phenolic content of 31.565 ± 0.344 mg GAE/g and a total antioxidant activity of 78.839 ± 0.199 mg AAE/g, while the 30% ethanolic extract showed 29.214 ± 0.645 mg GAE/g and 70.701 ± 1.394 mg AAE/g, respectively. With respect to the DPPH radical scavenging assay, the 50% ethanolic extract exhibited a slightly lower EC50 (314.3 ± 4.0 μg/ml) than the 30% ethanolic extract (340.4 ± 5.3 μg/ml).
Out of all the tested extracts, the 70% ethanolic extract exhibited the significantly (p < 0.05) highest total phenolic content (38.000 ± 1.009 mg GAE/g) and total antioxidant capacity (95.874 ± 2.422 mg AAE/g) and demonstrated the lowest EC50 in the DPPH assay (244.2 ± 5.9 μg/ml). Excellent correlations were observed between the total phenolic content and both the total antioxidant capacity and the DPPH radical scavenging activity (R² = 0.949 and R² = 0.978, respectively). It was concluded from this study that 70% ethanol should be used as the optimal-polarity solvent to obtain G. procumbens leaf extract with maximum polyphenolic content and antioxidant properties.
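EC50 values such as those reported above are read off a dose-response curve. A minimal linear-interpolation approach is sketched below with hypothetical DPPH scavenging data points, not the study's measurements.

```python
# Estimating EC50 (concentration giving 50% DPPH radical scavenging) by
# linear interpolation between bracketing data points. Data below are
# hypothetical illustration values.

def ec50_linear(concs, inhibitions, target=50.0):
    """Interpolate the concentration giving `target` % inhibition."""
    pairs = list(zip(concs, inhibitions))
    for (c0, i0), (c1, i1) in zip(pairs, pairs[1:]):
        if i0 <= target <= i1:
            return c0 + (c1 - c0) * (target - i0) / (i1 - i0)
    raise ValueError("target not bracketed by the data")

concs = [100, 200, 300, 400]              # ug/ml, hypothetical
inhibitions = [18.0, 42.0, 61.0, 74.0]    # % DPPH scavenged
print(round(ec50_linear(concs, inhibitions), 1))
```

In practice a sigmoidal fit is preferred, but the interpolation shows why a lower EC50 means a more potent extract: less material reaches the 50% mark.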

Keywords: antioxidant activity, DPPH assay, Gynura procumbens, phenolic compounds

Procedia PDF Downloads 384
376 Challenging Conventions: Rethinking Literature Review Beyond Citations

Authors: Hassan Younis

Abstract:

Purpose: The objective of this study is to review influential papers in the sustainability and supply chain studies domain, leveraging insights from this review to develop a structured framework for academics and researchers. This framework aims to assist scholars in identifying the most impactful publications for their scholarly pursuits. Subsequently, the study applies the developed framework to selected scholarly articles within the sustainability and supply chain studies domain to evaluate its efficacy, practicality, and reliability. Design/Methodology/Approach: Utilizing the "Publish or Perish" tool, a search was conducted to locate papers incorporating "sustainability" and "supply chain" in their titles. After rigorous filtering steps, a panel of university professors identified five crucial criteria for evaluating research robustness: average yearly citation counts (25%), scholarly contribution (25%), alignment of findings with objectives (15%), methodological rigor (20%), and journal impact factor (15%). These five evaluation criteria are abbreviated as the "ACMAJ" framework. Each paper then received a tier score (1-3) for each criterion; the scores were normalized within each category and summed using weighted averages to calculate a Final Normalized Score (FNS). This systematic approach allows for objective comparison and ranking of research based on its impact, novelty, rigor, and publication venue. Findings: The study's findings highlight the lack of structured frameworks for assessing influential sustainability research in supply chain management, which often results in a dependence on citation counts alone. In response, a complete model incorporating five essential criteria is proposed. In a methodical trial on selected academic articles in the field of sustainability and supply chain studies, the model demonstrated its effectiveness as a tool for identifying and selecting influential research papers that warrant additional attention.
This work aims to fill a significant deficiency in existing techniques by providing a more comprehensive approach to identifying and ranking influential papers in the field. Practical Implications: The developed framework helps scholars identify the most influential sustainability and supply chain publications. Its validation serves the academic community by offering a credible tool that helps researchers, students, and practitioners find and choose influential papers, supporting literature reviews and study recommendations in the field. Analysis of major trends and topics deepens our grasp of this critical study area's changing terrain. Originality/Value: The framework stands as a unique contribution to academia, offering scholars an important new tool to identify and validate influential publications. Its distinctive capacity to efficiently guide scholars, learners, and professionals in selecting noteworthy publications, coupled with the examination of key patterns and themes, adds depth to our understanding of the evolving landscape in this critical field of study.
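The ACMAJ weighted scoring described above can be sketched in a few lines. The normalization step is simplified here to dividing each 1-3 tier score by the maximum tier, which is an assumption on our part since the abstract does not spell out the exact within-category normalization.

```python
# Sketch of the ACMAJ Final Normalized Score (FNS): five criteria, each
# given a tier score (1-3), normalized and combined with the stated
# weights. The per-category normalization (tier / 3) is a simplifying
# assumption, not necessarily the paper's exact formula.

WEIGHTS = {
    "citations": 0.25,       # average yearly citation counts
    "contribution": 0.25,    # scholarly contribution
    "alignment": 0.15,       # alignment of findings with objectives
    "rigor": 0.20,           # methodological rigor
    "impact_factor": 0.15,   # journal impact factor
}

def final_normalized_score(tiers):
    """tiers: dict mapping each criterion to a tier score in {1, 2, 3}."""
    return sum(WEIGHTS[c] * (tiers[c] / 3.0) for c in WEIGHTS)

paper = {"citations": 3, "contribution": 2, "alignment": 3,
         "rigor": 2, "impact_factor": 1}
print(round(final_normalized_score(paper), 3))
```

Since the weights sum to 1, the FNS lands in (0, 1], which makes papers directly comparable across the ranked list.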

Keywords: supply chain management, sustainability, framework, model

Procedia PDF Downloads 21
375 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

The undertaking to develop a psychometric instrument is monumental. Understanding the relationship between variables and events is important in the structural and exploratory design of psychometric instruments. Considering this, we describe a method used to group, pair, and combine multiple Philosophical Assumption statements that assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PA)s and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics, which were utilized in design and development for 1) information retrieval and categorization, 2) item development, and 3) estimation of interactions among variables and likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by a Cronbach’s α of .81, in the final study of 40 clinicians, calculated with SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of target populations that practiced within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items), and a modifiable “barrier from learning” item that consisted of six “choose any” choices. We also conceptualized a close relationship between identifying the various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace relationships of therapist education within these two domains.
The two perception domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions of using EB-NAR at the beginning of outpatient addictions counseling. The (PA)s were written in simple words and were descriptively accurate and concise. We then devised a list of parameters, appropriately matched them to each PA, and devised descriptive parametric (PA)s in a domain-categorized information grid. Descriptive parametric (PA)s were reduced to simple mathematical symbols. This made it easy to utilize parametric (PA)s in algorithms, combinatorics, and clusters to develop larger information grids. By using matching combinatorics, we took paired demographic and education domains with a subscript of 1 and matched them to the column of each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education, and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit in combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and trace relationships of maximum likelihoods.
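The grid-building step described above, pairing parametric (PA) symbols across domains to form larger information grids, can be sketched with standard-library combinatorics. The domain contents below are hypothetical placeholders, not the instrument's actual items.

```python
import itertools

# Sketch: forming an information grid as the Cartesian pairing of
# parametric symbols from three domains. The symbols are hypothetical
# placeholders standing in for the instrument's parametric (PA)s.

demographics = ["D1", "D2"]        # e.g. degree / certification items
education = ["E1", "E2", "E3"]     # education items
beliefs = ["B1", "B2"]             # belief / perception domains

# Each grid cell pairs one symbol from each domain
grid = list(itertools.product(demographics, education, beliefs))
print(len(grid))   # 2 * 3 * 2 = 12 grid cells
```

Swapping in different domain lists and re-running the product reproduces the repeated grid-building the abstract describes, with cluster sizes growing multiplicatively.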

Keywords: psychometric, parametric, domains, grids, therapists

Procedia PDF Downloads 252
374 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning

Authors: Madhawa Basnayaka, Jouni Paltakari

Abstract:

Fully passive backscattering chipless RFID tags are an emerging wireless technology offering low cost, longer reading distance, and fast automatic identification without human interference, unlike already available technologies such as optical barcodes. The design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by the constraints of data encoding capacity and the difficulty of designing accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are inefficient for optimizing design parameters because of their speed and resource consumption. In this work, a deep neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal ones. The proposed bi-directional DNN comprises two simultaneously running neural networks: spectrum prediction and design parameter prediction. First, the spectrum-prediction DNN was trained to minimize the mean square error (MSE). After training, the spectrum-prediction DNN was able to accurately predict the EM spectrum for given input design parameters within a few seconds. Then, the trained spectrum-prediction DNN was connected to the design-parameter-prediction DNN and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN.
The model was evaluated using a randomly generated spectrum, and a tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. This approach significantly decreases the number of iterative computer simulations. Highly efficient and ultrafast bi-directional DNN models therefore allow rapid design of complicated chipless RFID tags.
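The core of this inverse-design scheme is a frozen forward (spectrum-prediction) model combined with a search over design parameters whose predicted spectrum matches a target. A toy sketch of that idea in plain Python, with a hypothetical smooth forward model standing in for the trained DNN (all functions and values here are illustrative, not the authors’ network):

```python
# Toy inverse design: a frozen "forward model" maps design parameters to a
# spectrum; the inverse step searches for parameters whose predicted spectrum
# matches a target. The forward model is a hypothetical stand-in.
import math

def forward_model(params):
    # stand-in for a trained spectrum-prediction network:
    # maps two ring dimensions to a 4-point "spectrum"
    a, b = params
    return [math.sin(a) + b, a * b, a - b, a + 2 * b]

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

def invert(target, steps=2000, lr=0.01, eps=1e-5):
    # numerical-gradient descent on the design parameters, forward model frozen
    params = [0.5, 0.5]
    for _ in range(steps):
        base = mse(forward_model(params), target)
        grads = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += eps
            grads.append((mse(forward_model(bumped), target) - base) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

target = forward_model([1.2, 0.8])            # spectrum of a known design
recovered = invert(target)
print(mse(forward_model(recovered), target))  # small residual error
```

The bi-directional DNN replaces this per-target gradient search with a second network that outputs design parameters directly, amortizing the inversion cost over all targets.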

Keywords: artificial intelligence, chipless RFID, deep learning, machine learning

Procedia PDF Downloads 23
373 Seismic Active Earth Pressure on Retaining Walls with Reinforced Backfill

Authors: Jagdish Prasad Sahoo

Abstract:

The increase in active earth pressure during an earthquake results in sliding, overturning, and tilting of earth-retaining structures. To improve the stability of such structures, the soil mass is often reinforced with various types of reinforcement, such as metal strips, geotextiles, and geogrids. The stresses generated in the soil mass are transferred to the reinforcements through the interface friction between the earth and the reinforcement, which in turn reduces the lateral earth pressure on the retaining walls. Hence, the evaluation of earth pressure in the presence of seismic forces, with the inclusion of reinforcements, is important for the design of retaining walls in seismically active zones. The present analysis studies the effect of reinforcing a sand backfill with horizontal layers of sheet reinforcements (geotextiles and geogrids) on reducing the active earth pressure due to earthquake body forces. A pseudo-static approach has been adopted, employing the upper bound theorem of limit analysis in combination with finite elements and linear optimization. The computations have been performed with and without reinforcements for internal friction angles of sand varying from 30° to 45°. The effectiveness of the reinforcement in reducing the active earth pressure on the retaining walls is examined in terms of the active earth pressure coefficient, so that the solutions can be presented in non-dimensional form. The active earth pressure coefficient is expressed as a function of the internal friction angle of the sand, the interface friction angle between sand and reinforcement, the soil-wall interface roughness conditions, and the coefficient of horizontal seismic acceleration.
It has been found that (i) there always exists a certain optimum depth of the reinforcement layers at which the active earth pressure coefficient attains its minimum, and (ii) the active earth pressure coefficient decreases significantly with an increase in the length of the reinforcements only up to a certain length, beyond which a further increase in length hardly causes any reduction in the active earth pressure. The optimum depth of the reinforcement layers and the required reinforcement length corresponding to that optimum depth have been established. The numerical results developed in this analysis are expected to be useful for the design of retaining walls.
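As a baseline against which reinforced-backfill solutions such as these can be compared, the classical pseudo-static Mononobe-Okabe expression gives the seismic active earth pressure coefficient for an unreinforced cohesionless backfill. A sketch, assuming a vertical wall and horizontal backfill (these geometric assumptions and the sample angles are illustrative, not the paper’s limit-analysis solution):

```python
# Mononobe-Okabe seismic active earth pressure coefficient K_AE for an
# unreinforced cohesionless backfill, assuming a vertical wall and horizontal
# backfill. phi = soil friction angle, delta = wall friction angle (degrees),
# kh/kv = horizontal/vertical seismic coefficients.
import math

def k_ae(phi, delta, kh, kv=0.0):
    p, d = math.radians(phi), math.radians(delta)
    theta = math.atan(kh / (1.0 - kv))        # seismic inertia angle
    num = math.cos(p - theta) ** 2
    root = math.sqrt(math.sin(p + d) * math.sin(p - theta) / math.cos(d + theta))
    den = math.cos(theta) * math.cos(d + theta) * (1.0 + root) ** 2
    return num / den

# static case (kh = 0) reduces to Coulomb's active coefficient
print(round(k_ae(phi=30, delta=15, kh=0.0), 3))  # → 0.301
print(round(k_ae(phi=30, delta=15, kh=0.2), 3))  # seismic case: larger K_AE
```

The increase of K_AE with kh is the seismic pressure rise that the reinforcement layers in the abstract are shown to counteract.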

Keywords: active, finite elements, limit analysis, pseudo-static, reinforcement

Procedia PDF Downloads 347
372 Electrical Transport through a Large-Area Self-Assembled Monolayer of Molecules Coupled with Graphene for Scalable Electronic Applications

Authors: Chunyang Miao, Bingxin Li, Shanglong Ning, Christopher J. B. Ford

Abstract:

While it is challenging to fabricate electronic devices close to atomic dimensions with conventional top-down lithography, molecular electronics promises to help maintain the exponential increase in component densities by using molecular building blocks to fabricate electronic components from the bottom up. It offers smaller, faster, and more energy-efficient electronic and photonic systems. A self-assembled monolayer (SAM) of molecules is a layer of molecules that self-assembles on a substrate. SAMs are mechanically flexible, optically transparent, low-cost, and easy to fabricate. A large-area multi-layer structure has been designed and investigated by the team, where a SAM of designed molecules is sandwiched between graphene and gold electrodes. Each molecule can act as a quantum dot, with all molecules conducting in parallel. When a source-drain bias is applied, significant current flows only if a molecular orbital (HOMO or LUMO) lies within the source-drain energy window. If electrons tunnel sequentially on and off the molecule, the charge on the molecule is well defined and the finite charging energy causes Coulomb blockade of transport until the molecular orbital comes within the energy window. This produces ‘Coulomb diamonds’ in the conductance versus source-drain and gate voltages. For different tunnel barriers at either end of the molecule, it is harder for electrons to tunnel out of the dot than in (or vice versa), resulting in the accumulation of two or more charges and a ‘Coulomb staircase’ in the current versus voltage. This nanostructure exhibits highly reproducible Coulomb-staircase patterns, together with additional oscillations attributed to molecular vibrations. Molecules are more isolated than semiconductor dots, and so have a discrete phonon spectrum.
When tunnelling into or out of a molecule, one or more vibronic states can be excited in the molecule, providing additional transport channels and resulting in additional peaks in the conductance. For useful molecular electronic devices, achieving the optimum alignment of the molecular orbitals to the Fermi energy in the leads is essential. To explore this, a drop of ionic liquid is placed on top of the graphene to establish an electric field at the graphene, which screens poorly and therefore gates the molecules underneath. Results for various molecules with different alignments of the Fermi energy to the HOMO have shown highly reproducible Coulomb-diamond patterns, which agree reasonably with DFT calculations. In summary, this large-area SAM molecular junction is a promising candidate for future electronic circuits. (1) The small size (1-10 nm) of the molecules and the good flexibility of the SAM allow scalable assembly of ultra-high densities of functional molecules, with advantages in cost, efficiency, and power dissipation. (2) The contacting technique using graphene enables mass fabrication. (3) Its well-observed Coulomb blockade behaviour, narrow molecular resonances, and well-resolved vibronic states offer good tuneability for various functionalities, such as switches, thermoelectric generators, and memristors.
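The Coulomb blockade described above is governed by the single-dot charging energy E_C = e²/2C, which must greatly exceed the thermal energy k_B·T for blockade to be observable. A short estimate, where the molecular capacitance value is a hypothetical order-of-magnitude figure for a nanometre-scale molecule, not a measured one:

```python
# Estimate the charging energy E_C = e^2 / (2C) of a molecular quantum dot and
# compare it with k_B*T. The capacitance is a hypothetical order-of-magnitude
# value for a ~1 nm molecule, not a measured parameter.
E = 1.602176634e-19      # elementary charge, C
KB = 1.380649e-23        # Boltzmann constant, J/K

def charging_energy_ev(capacitance_farads):
    return E ** 2 / (2.0 * capacitance_farads) / E   # in eV

C_dot = 1e-19            # ~0.1 aF, plausible for a nanometre-scale molecule
e_c = charging_energy_ev(C_dot)
print(f"E_C = {e_c:.2f} eV")                  # ~0.80 eV
print(f"k_B*T at 300 K = {KB*300/E:.3f} eV")  # ~0.026 eV
```

Because the tiny molecular capacitance makes E_C tens of times larger than k_B·T at room temperature, Coulomb blockade in such junctions can survive well above cryogenic temperatures.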

Keywords: molecular electronics, Coulomb blockade, electron-phonon coupling, self-assembled monolayer

Procedia PDF Downloads 40
371 Teaching Children about Their Brains: Evaluating the Role of Neuroscience Undergraduates in Primary School Education

Authors: Clea Southall

Abstract:

Many children leave primary school having formed preconceptions about their relationship with science. Thus, primary school represents a critical window for stimulating scientific interest in younger children. Engagement relies on the provision of hands-on activities coupled with an ability to capture a child’s innate curiosity. This requires children to perceive science topics as interesting and relevant to their everyday life. Teachers and pupils alike have suggested the school curriculum be tailored to help stimulate scientific interest. Young children are naturally inquisitive about the human body; the brain is one topic which frequently engages pupils, although it is not currently included in the UK primary curriculum. Teaching children about the brain could have wider societal impacts, such as increasing knowledge of neurological disorders. However, many primary school teachers do not receive formal neuroscience training and may feel apprehensive about delivering lessons on the nervous system. This is exacerbated by a lack of educational neuroscience resources. One solution is for undergraduates to form partnerships with schools, delivering engaging lessons and supplementing teacher knowledge. The aim of this project was to evaluate the success of a short lesson on the brain delivered by an undergraduate neuroscientist to primary school pupils. Prior to entering schools, semi-structured online interviews were conducted with teachers to gain pedagogical advice, and relevant websites were searched for neuroscience resources. Subsequently, a single lesson plan was created comprising four hands-on activities. The activities were devised in a top-down manner, beginning with learning about the brain as an entity before focusing on individual neurons. Pupils were asked to label a ‘brain map’ to assess prior knowledge of brain structure and function.
They viewed animal brains and created ‘pipe-cleaner neurons’ which were later used to depict electrical transmission. The same session was delivered by an undergraduate student to 570 key stage 2 (KS2) pupils across five schools in Leeds, UK. Post-session surveys, designed for teachers and pupils respectively, were used to evaluate the session. Children in all year groups had relatively poor knowledge of brain structure and function at the beginning of the session. When asked to label four brain regions with their respective functions, older pupils labeled a mean of 1.5 (± 1.0) brain regions compared to 0.8 (± 0.96) for younger pupils (p=0.002). However, by the end of the session, 95% of pupils felt their knowledge of the brain had increased. Hands-on activities were rated most popular by pupils and were considered the most successful aspect of the session by teachers. Although only half the teachers were aware of neuroscience educational resources, nearly all (95%) felt they would have more confidence in teaching a similar session in the future. All teachers felt the session was engaging and that the content could be linked to the current curriculum. Thus, a short fifty-minute session can successfully enhance pupils’ knowledge of a new topic: the brain. Partnerships with an undergraduate student can provide an alternative method for supplementing teacher knowledge, increasing their confidence in delivering future lessons on the nervous system.

Keywords: education, neuroscience, primary school, undergraduate

Procedia PDF Downloads 189
370 Adsorptive Media Selection for Bilirubin Removal: An Adsorption Equilibrium Study

Authors: Vincenzo Piemonte

Abstract:

The liver is a complex, large-scale biochemical reactor that plays a unique role in human physiology. When the liver ceases to perform its physiological activity, a functional replacement is required. At present, liver transplantation is the only clinically effective method of treating severe liver disease. However, this therapeutic approach is hampered by the disparity between organ availability and the number of patients on the waiting list. To overcome this critical issue, research activities have focused on liver support device systems (LSDs) designed to bridge patients to transplantation or to keep them alive until the recovery of native liver function. In recirculating albumin dialysis devices, such as MARS (Molecular Adsorbent Recirculating System), adsorption is one of the fundamental steps in albumin-dialysate regeneration. Among the albumin-bound toxins that must be removed from blood during liver-failure therapy, bilirubin and tryptophan can be considered representative of two different toxin classes: the first is not water soluble at physiological blood pH and is strongly bound to albumin; the second is loosely albumin-bound and partially water soluble at pH 7.4. Fixed-bed units are normally used for this task, and the design of such units requires information on both toxin adsorption equilibrium and kinetics. The most common adsorptive media used in LSDs are activated carbon, non-ionic polymeric resins, and anionic resins. In this paper, bilirubin adsorption isotherms on different adsorptive media, such as polymeric resin, albumin-coated resin, anionic resin, activated carbon, and alginate beads with entrapped albumin, are presented. Comparing all the results, the adsorption capacity for bilirubin of the five media increases in the following order: alginate beads < polymeric resin < albumin-coated resin < activated carbon < anionic resin.
The main focus of this paper is to provide useful guidelines for the optimization of liver support devices that use adsorption columns to remove albumin-bound toxins from albumin-dialysate solutions.
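Single-solute adsorption equilibria such as these are commonly described by the Langmuir isotherm, q = q_max·K·C/(1 + K·C). A sketch showing how such a model ranks adsorptive media by capacity; the q_max and K values below are hypothetical, chosen only to illustrate the comparison, not fitted to the paper’s data:

```python
# Langmuir isotherm q = q_max * K * C / (1 + K * C), a common model for
# single-solute adsorption equilibria (e.g. bilirubin on sorbents).
# All parameter values are hypothetical illustration numbers.
def langmuir(c, q_max, k):
    """Adsorbed amount q (mg/g) at liquid-phase concentration c (mg/L)."""
    return q_max * k * c / (1.0 + k * c)

media = {
    "polymeric resin": (20.0, 0.05),   # (q_max mg/g, K L/mg), illustrative
    "activated carbon": (45.0, 0.10),
    "anionic resin": (60.0, 0.15),
}
c = 10.0  # mg/L bilirubin in the dialysate (illustrative)
for name, (q_max, k) in sorted(media.items(), key=lambda m: langmuir(c, *m[1])):
    print(f"{name}: q = {langmuir(c, q_max, k):.1f} mg/g")
```

Fitting q_max and K to measured isotherm points for each medium is what allows capacities to be ranked in the order reported above.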

Keywords: adsorptive media, adsorption equilibrium, artificial liver devices, bilirubin, mathematical modelling

Procedia PDF Downloads 243
369 Nanobiosensor System for Aptamer Based Pathogen Detection in Environmental Waters

Authors: Nimet Yildirim Tirgil, Ahmed Busnaina, April Z. Gu

Abstract:

Environmental waters are monitored worldwide to protect people from infectious diseases primarily caused by enteric pathogens. Escherichia coli (E. coli) has long served as a good indicator of potential enteric pathogens in waters. Thus, a rapid and simple detection method for E. coli is very important for predicting pathogen contamination. In this study, to the best of our knowledge for the first time, we developed a rapid, direct, and reusable SWCNT (single-walled carbon nanotube) based biosensor system for sensitive and selective E. coli detection in water samples. We used a newly developed flexible biosensor device fabricated by a high-rate nanoscale offset printing process using directed assembly and transfer of SWCNTs. Through simple directed assembly and non-covalent functionalization, an aptamer-based SWCNT biosensor system was designed (the aptamer being the biorecognition element that specifically distinguishes the E. coli O157:H7 strain from other pathogens) and was further evaluated for environmental applications with simple and cost-effective steps. The two gold electrode terminals and the SWCNT bridge between them allow continuous resistance-response monitoring for E. coli detection. The detection procedure is based on a competitive mode: a known concentration of aptamer and E. coli cells were mixed and, after a certain time, filtered; the remaining free aptamers were then injected into the system. Through hybridization of the free aptamers with probe DNA immobilized on the SWCNT surface (complementary DNA for the E. coli aptamer), we can monitor the resistance difference, which is proportional to the amount of E. coli. Thus, we can detect E. coli without injecting it directly onto the sensing surface, protecting the electrode surface from the aggregation of target bacteria or other pollutants that may come from real wastewater samples. After optimization experiments, the linear detection range was determined to be from 2 cfu/ml to 10⁵ cfu/ml with an R² value higher than 0.98.
The system was regenerated successfully with a 5% SDS solution over 100 times without any significant deterioration of sensor performance. The developed system had high specificity towards E. coli (less than 20% signal with other pathogens), and it could be applied to real water samples with 86 to 101% recovery and CV values of 3 to 18% (n=3).
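A linear detection range spanning 2 to 10⁵ cfu/ml implies calibration against log10(concentration); the R² of such a calibration can be computed with an ordinary least-squares fit. A standard-library sketch, where the resistance readings are hypothetical illustration values:

```python
# Least-squares calibration of sensor response vs log10(concentration) and its
# R^2, using only the standard library. Resistance readings are hypothetical.
import math

def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

conc = [2, 10, 100, 1_000, 10_000, 100_000]          # cfu/ml
resistance = [1.02, 1.18, 1.41, 1.63, 1.88, 2.09]    # relative units, hypothetical
slope, intercept, r2 = linfit([math.log10(c) for c in conc], resistance)
print(round(r2, 3))  # close to 1 for a near-linear calibration
```

An R² above 0.98 over five decades of concentration, as reported, indicates that the log-linear calibration holds across the whole detection range.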

Keywords: aptamer, E. coli, environmental detection, nanobiosensor, SWCNTs

Procedia PDF Downloads 170
368 Environmental Performance Improvement of Additive Manufacturing Processes with Part Quality Point of View

Authors: Mazyar Yosofi, Olivier Kerbrat, Pascal Mognol

Abstract:

Life cycle assessment of additive manufacturing processes has evolved significantly in recent years. Most existing studies have focused mainly on energy consumption. Nowadays, new methodologies for life cycle inventory acquisition have emerged in the literature and help manufacturers take into account all the input and output flows during the manufacturing step of the product life cycle. Indeed, the environmental analysis of the phenomena that occur during the manufacturing step of additive manufacturing processes is becoming well understood, and it is now possible to count and measure accurately all the inventory data during this step. Optimization of the environmental performance of processes can therefore be considered. Environmental performance can be improved by varying process parameters. However, many of these parameters (such as manufacturing speed, the power of the energy source, or the quantity of support material) directly affect the mechanical properties, surface finish, and dimensional accuracy of a functional part. This study aims to improve the environmental performance of an additive manufacturing process without deterioration of part quality. For that purpose, the authors have developed a generic method that has been applied to multiple parts made by additive manufacturing processes. First, a complete analysis of the process parameters is made in order to identify which parameters affect only the environmental performance of the process. Then, multiple parts are manufactured while varying the identified parameters. The aim of this second step is to find the parameter values that significantly decrease the environmental impact of the process while keeping the part quality as desired. Finally, a comparison is made between parts produced with the initial parameters and with the modified parameters.
In this study, the major contribution claimed by the authors is a reduction of the environmental impact of an additive manufacturing process while respecting three part-quality criteria: mechanical properties, dimensional accuracy, and surface roughness. Now that additive manufacturing processes can be seen as mature from a technical point of view, their environmental improvement can be considered while respecting part properties. The first part of this study presents the methodology applied to multiple academic parts. Then, the validity of the methodology is demonstrated on functional parts.
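The two-step method described above, screening process parameters and then selecting the values that minimise environmental impact while keeping part quality within specification, can be sketched as a constrained selection over candidate builds. All numbers below are hypothetical illustration data, not the authors’ measurements:

```python
# Constrained selection: among candidate parameter sets, keep those meeting all
# three quality criteria, then pick the one with the lowest environmental
# proxy (energy). All values are hypothetical illustration data.
candidates = [
    {"speed": 40,  "energy": 3.2, "tensile": 52, "dim_err": 0.08, "ra": 9.0},
    {"speed": 60,  "energy": 2.4, "tensile": 50, "dim_err": 0.10, "ra": 10.5},
    {"speed": 80,  "energy": 1.9, "tensile": 46, "dim_err": 0.14, "ra": 13.0},
    {"speed": 100, "energy": 1.6, "tensile": 41, "dim_err": 0.21, "ra": 16.0},
]  # speed in mm/s, energy in kWh, tensile in MPa, dim_err in mm, Ra in um

QUALITY = {"tensile_min": 48, "dim_err_max": 0.12, "ra_max": 12.0}

def meets_quality(c):
    return (c["tensile"] >= QUALITY["tensile_min"]
            and c["dim_err"] <= QUALITY["dim_err_max"]
            and c["ra"] <= QUALITY["ra_max"])

feasible = [c for c in candidates if meets_quality(c)]
best = min(feasible, key=lambda c: c["energy"])   # lowest environmental proxy
print(best["speed"])  # → 60: lowest-impact build meeting all three criteria
```

In the paper’s full method the environmental indicator comes from a complete life cycle inventory rather than a single energy figure, but the feasibility-then-minimisation structure is the same.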

Keywords: additive manufacturing, environmental impact, environmental improvement, mechanical properties

Procedia PDF Downloads 267