Search results for: Australian Standard
103 Analyzing the Effectiveness of Elderly Design and the Impact on Sustainable Built Environment
Authors: Tristance Kee
Abstract:
With an unprecedented increase in the elderly population around the world, the severe lack of quality housing and health-and-safety provisions to serve this cohort can no longer be ignored. Many elderly citizens, especially singletons, live in unsafe housing conditions with poorly executed planning and design. Some suffer from deteriorating mobility, sight and general alertness, and their sub-standard living conditions further hinder their daily existence. This research explains how concepts such as Universal Design and Co-Design operate in a high-density city such as Hong Kong, China, where innovative design can become an alternative solution where the government and the private sector fail to provide quality elderly-friendly facilities that promote sustainable urban development. Unlike other elderly research, which focuses more on housing policies, nursing care and theories, this research takes a more progressive approach by providing an in-depth impact assessment of how innovative design can offer practical solutions for creating a more sustainable built environment. The research objectives are to: 1) explain the relationship between innovative design for the elderly and a healthier, more sustainable environment; 2) evaluate the impact of human ergonomics with the use of universal design; and 3) explain how innovation can enhance the sustainability of a city by improving citizens' sight, sound, walkability and safety within the ageing population. The research adopts both qualitative and quantitative methodologies to examine ways to improve the elderly population's relationship to our built environment. In particular, the research utilizes data collected from a questionnaire survey and focus group discussions to obtain input from various stakeholders, including designers, operators and managers related to public housing, community facilities and overall urban development.
In addition to feedback from end-users and stakeholders, a thorough analysis of existing elderly housing facilities and Universal Design provisions is conducted to evaluate their adequacy. To echo the theme of this conference on Innovation and Sustainable Development, this research examines the effectiveness of innovative design through a risk-benefit factor assessment. To test the hypothesis that innovation can support sustainable development, the research evaluated the health improvement of a sample of 150 elderly participants over a period of eight months. Their health performance, including mobility, speech and memory, was monitored and recorded on a regular basis to assess whether the use of innovation has a measurable impact on improving health and home safety for an elderly cohort. This study was supported by district community centers under the auspices of the Home Affairs Bureau, which provided respondents for the questionnaire survey, a standardized evaluation mechanism, and professional health care staff for evaluating the performance impact. The research findings will be integrated to formulate design solutions such as innovative home products to improve the elderly's daily experience and safety, with a particular focus on the enhancement of sight, sound and mobility safety. Some policy recommendations and architectural planning recommendations related to Universal Design will also be incorporated into the research output for future planning of elderly housing and amenity provisions.
Keywords: elderly population, innovative design, sustainable built environment, universal design
Procedia PDF Downloads 230
102 Azolla Pinnata as Promising Source for Animal Feed in India: An Experimental Study to Evaluate the Nutrient Enhancement Result of Feed
Authors: Roshni Raha, Karthikeyan S.
Abstract:
The world's largest livestock population resides in India. Existing strategies must be modified to increase the production of livestock and their by-products in order to meet the demands of the growing human population. Even though India leads the world in both milk production and the number of cows, average yield per animal remains low. This may be due to the animals' poor nutrition caused by a chronic under-availability of high-quality fodder and feed. This article explores Azolla pinnata as a promising source of high-quality unconventional feed and fodder for effective livestock production and good-quality breeding in India. This article is an exploratory study using a literature survey and experimental analysis. In the realm of agri-biotechnology, Azolla sp. has gained attention for helping farmers achieve sustainability, having minimal land requirements, and serving as a feed element that does not compete with human food sources. It has a high methionine content, which makes it a good source of protein. It can be easily digested, as its lignin content is low. It is rich in antioxidants and vitamins such as beta carotene, vitamin A, and vitamin B12. Using this concept, the paper aims to investigate and develop a model of using azolla plants as a novel, high-potential feed source to combat the problems of low production and poor quality of animals in India. A representative sample of animal feed is collected, to which azolla is added. The sample is ground into a fine powder using a mortar. PITC (phenylisothiocyanate) is added to derivatize the amino acids. The sample is analyzed using HPLC (High-Performance Liquid Chromatography) to measure the amino acids and monitor the protein content of the sample feed. The amino acid measurements from HPLC are converted to milligrams per gram of protein using the method of amino acid profiling via a set of calculations.
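The HPLC-to-profile conversion just described can be sketched as follows; the molecular weights are standard values, but the peak quantities, protein mass, and function names are illustrative assumptions, not data from the study:

```python
# Hypothetical sketch of converting HPLC amino-acid measurements (in nmol,
# after calibration against PITC-derivatized standards) into mg of amino
# acid per gram of protein. All numeric inputs here are illustrative.
AA_MOLAR_MASS = {"methionine": 149.21, "lysine": 146.19}  # g/mol

def mg_per_g_protein(nmol_per_injection, protein_mg_per_injection):
    """Convert nanomoles of each amino acid per injection to
    mg of amino acid per gram of protein in the sample."""
    protein_g = protein_mg_per_injection / 1000.0
    profile = {}
    for aa, nmol in nmol_per_injection.items():
        mg_aa = nmol * 1e-9 * AA_MOLAR_MASS[aa] * 1e3  # nmol -> mol -> g -> mg
        profile[aa] = mg_aa / protein_g
    return profile

# Example: 50 nmol methionine and 120 nmol lysine in an injection
# containing 2 mg of protein.
profile = mg_per_g_protein({"methionine": 50.0, "lysine": 120.0}, 2.0)
```

With these illustrative inputs, the profile works out to roughly 3.7 mg methionine and 8.8 mg lysine per gram of protein.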
The amino acid profile data is then used to validate the proximate results of nutrient enhancement of the azolla composition in the sample. Based on the proximate composition of the azolla meal, the enhancement results were higher than the standard values of normal fodder supplements, indicating the feed to be much richer and denser in nutrient supply. Thus, the azolla-fed sample proved to be a promising source for animal fodder. This would in turn lead to higher production and a good breed of animals that would help to meet the economic demands of the growing Indian population. Azolla plants have no side effects and can be considered safe and effective to incorporate into animal feed. One area of future research could begin with the upstream scaling strategy of azolla plants in India. This could involve introducing several bioreactor types for its commercial production. Since Azolla sp. has been shown in this paper to be a promising source of high-quality animal feed and fodder, large-scale production of azolla plants will help to make the process much quicker, more efficient and easily accessible. Labor expenses will also be reduced by employing bioreactors for large-scale manufacturing.
Keywords: azolla, fodder, nutrient, protein
Procedia PDF Downloads 55
101 An Investigation on the Suitability of Dual Ion Beam Sputtered GMZO Thin Films: For All Sputtered Buffer-Less Solar Cells
Authors: Vivek Garg, Brajendra S. Sengar, Gaurav Siddharth, Nisheka Anadkat, Amitesh Kumar, Shailendra Kumar, Shaibal Mukherjee
Abstract:
CuInGaSe (CIGSe) is the dominant thin-film solar cell technology. The band alignment of the buffer/CIGSe interface is one of the most crucial parameters for solar cell performance. In this article, the valence band offset (VBOff) and conduction band offset (CBOff) values of the Cu(In0.70Ga0.30)Se/1 at.% Ga:Mg0.25Zn0.75O (GMZO) heterojunction, grown by a dual ion beam sputtering system (DIBS), are calculated to understand the carrier transport mechanism at the heterojunction for the realization of all-sputtered buffer-less solar cells. To determine the valence band offset ∆E_V at the GMZO/CIGSe heterojunction interface, the standard method based on core-level photoemission is utilized. The value of ∆E_V can be evaluated by considering common core-level peaks. In our study, the values of the valence band onset (VBOn), obtained by the linear extrapolation method for the GMZO and CIGSe films, are calculated to be 2.86 and 0.76 eV, respectively. In the UPS spectra, the Se 3d peak is observed at 54.82 and 54.7 eV for the CIGSe film and the GMZO/CIGSe interface, respectively, while the Mg 2p peak is observed at 50.09 and 50.12 eV for GMZO and the GMZO/CIGSe interface, respectively. The optical band gaps of CIGSe and GMZO, obtained from absorption spectra acquired by spectroscopic ellipsometry, are 1.26 and 3.84 eV, respectively. The calculated average values of ∆E_V and ∆E_C are estimated to be 2.37 and 0.21 eV, respectively, at room temperature. The calculated positive conduction band offset, termed a spike, at the absorber junction is the required criterion for high-efficiency solar cells with efficient charge extraction from the junction. We can therefore conclude that GMZO thin films grown by the dual ion beam sputtering system are a suitable candidate for CIGSe-based ultra-thin buffer-less solar cells.
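A minimal sketch of the band-offset arithmetic described above, using only values reported in this abstract; note that the reported ∆E_V is an average over measurements (the core-level method), so re-deriving it from a single pair of spectra may differ slightly:

```python
# Band-offset arithmetic from the values reported above. The valence-band
# offset Delta_E_V is measured via the core-level photoemission method;
# here we take the reported average and derive the conduction-band offset
# from the optical band gaps: Delta_E_C = E_g(GMZO) - E_g(CIGSe) - Delta_E_V.
E_g_CIGSe = 1.26   # eV, optical band gap from ellipsometry
E_g_GMZO = 3.84    # eV, optical band gap from ellipsometry
delta_E_V = 2.37   # eV, reported average valence-band offset

delta_E_C = E_g_GMZO - E_g_CIGSe - delta_E_V
# A positive delta_E_C (~0.21 eV), the "spike" at the absorber junction,
# is the criterion cited for efficient charge extraction.
```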
We investigated the band-offset properties at the GMZO/CIGSe heterojunction to verify the suitability of GMZO for the realization of buffer-less solar cells. Acknowledgment: We are thankful for the DIBS, EDX, and XRD facilities equipped at the Sophisticated Instrument Centre (SIC) at IIT Indore. The authors B.S.S. and A.K. acknowledge CSIR, and V.G. acknowledges UGC, India, for their fellowships. B.S.S. is thankful to DST and IUSSTF for the BASE Internship Award. Prof. Shaibal Mukherjee is thankful to DST and IUSSTF for the BASE Fellowship and the MEITY YFRF award. This work is partially supported by DAE BRNS, DST CERI, and the DST-RFBR Project under the India-Russia Programme of Cooperation in Science and Technology. We are thankful to Mukul Gupta for the SIMS facility equipped at UGC-DAE Indore.
Keywords: CIGSe, DIBS, GMZO, solar cells, UPS
Procedia PDF Downloads 279
100 Optimizing Heavy-Duty Green Hydrogen Refueling Stations: A Techno-Economic Analysis of Turbo-Expander Integration
Authors: Christelle Rabbat, Carole Vouebou, Sary Awad, Alan Jean-Marie
Abstract:
Hydrogen has been proven to be a viable alternative to standard fuels, as it is easy to produce and generates only water vapour, with zero carbon emissions. However, despite hydrogen's benefits, the widespread adoption of hydrogen fuel cell vehicles and internal combustion engine vehicles is impeded by several challenges. The lack of refueling infrastructure remains one of the main hindering factors due to the high costs associated with its design, construction, and operation. Besides, the lack of hydrogen vehicles on the road diminishes the economic viability of investing in refueling infrastructure. Simultaneously, the absence of accessible refueling stations discourages consumers from adopting hydrogen vehicles, perpetuating a cycle of limited market uptake. To address these challenges, the implementation of adequate policies incentivizing the use of hydrogen vehicles and the reduction of the investment and operating costs of hydrogen refueling stations (HRS) are essential to reassure both investors and customers. Even though the transition to hydrogen cars has been rather slow, public transportation companies have shown a keen interest in this highly promising fuel. Besides, their hydrogen demand is easier to predict and regulate than that of personal vehicles. Due to the reduced complexity of designing a suitable hydrogen supply chain for public vehicles, this sub-sector could be a great starting point to facilitate the adoption of hydrogen vehicles. Consequently, this study focuses on designing a chain of on-site green HRS for the public transportation network in Nantes Metropole, leveraging the latest relevant technological advances to reduce costs while ensuring reliability, safety, and ease of access. To reduce the cost of HRS and encourage their widespread adoption, a network of 7 H35-T40 HRS has been designed, replacing the conventional J-T valves with turbo-expanders. Each station in the network has a daily capacity of 1,920 kg.
Thus, the HRS network can produce up to 12.5 t of H2 per day. The detailed cost analysis revealed a CAPEX per station of 16.6 M euros, leading to a network CAPEX of 116.2 M euros. The proposed station siting prioritized Nantes Metropole's 5 bus depots and included 2 city-centre locations. Thanks to the turbo-expander technology, the cooling capacity of the proposed HRS is 19% lower than that of a conventional station equipped with J-T valves, resulting in significant CAPEX savings estimated at 708,560 € per station, thus nearly 5 million euros for the whole HRS network. Besides, the turbo-expander power generation ranges from 7.7 to 112 kW. Thus, the power produced can be used within the station or sold as electricity to the main grid, which would, in turn, maximize the station's profit. Despite the substantial initial investment required, the environmental benefits, cost savings, and energy efficiencies realized through the transition to hydrogen fuel cell buses and the deployment of HRS equipped with turbo-expanders offer considerable advantages for both TAN and Nantes Metropole. These initiatives underscore their enduring commitment to fostering green mobility and combating climate change in the long term.
Keywords: green hydrogen, refueling stations, turbo-expander, heavy-duty vehicles
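The network-level cost figures above follow from simple per-station arithmetic; this sketch only re-multiplies values quoted in the abstract and is illustrative, not the study's techno-economic model:

```python
# Back-of-the-envelope check of the cost figures quoted above,
# using only values stated in the abstract.
n_stations = 7
capex_per_station_meur = 16.6        # M EUR per station
savings_per_station_eur = 708_560    # EUR saved per station via turbo-expanders

network_capex_meur = n_stations * capex_per_station_meur    # ~116.2 M EUR
network_savings_eur = n_stations * savings_per_station_eur  # ~4.96 M EUR, "nearly 5 million"
```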
Procedia PDF Downloads 58
99 Intercultural Initiatives and Canadian Bilingualism
Authors: Muna Shafiq
Abstract:
Growth in international immigration is a reflection of increased migration patterns in Canada and in other parts of the world. Canada continues to promote itself as a bilingual country, yet the bilingual French and English population numbers do not reflect this platform. Each province's integration policies focus only on second-language learning of either English or French. Moreover, since English Canadians outnumber French Canadians, maintaining, much less increasing, English-French bilingualism appears unrealistic. One solution to increasing Canadian bilingualism requires creating intercultural communication initiatives between youth in Quebec and the rest of Canada. Specifically, the focus is on active, experiential learning, where intercultural competencies develop outside traditional classroom settings. The target groups are Generation Y Millennials and Generation Z Linksters, the next generations in the career and parenthood lines. Today, Canada's education system, like many others, must continually renegotiate lines between programs it offers its immigrant and native communities. While some purists or right-wing nationalists would disagree, the survival of bilingualism in Canada has little to do with reducing immigration. Children and youth immigrants play a valuable role in increasing Canada's French- and English-speaking communities. For instance, a focus on immersion over core French education programs for immigrant children and youth would not only increase bilingualism rates; it would also develop meaningful intercultural attachments between Canadians. Moreover, a vigilant increase in funding for French immersion programs is critical, as are new initiatives that focus on experiential language learning for students in French and English language programs.
A favorable argument supports the premise that, other than French-speaking students in Québec and elsewhere in Canada, second- and third-generation immigrant students are excellent ambassadors to promote bilingualism in Canada. Most already speak another language at home and understand the value of speaking more than one language in their adopted communities. Their dialogue and participation in experiential language exchange workshops are necessary. If the proposed exchanges take place inter-provincially, the momentum to increase collective regional voices increases. This regional collectivity can unite Canadians differently than nation-targeted initiatives. The results from an experiential youth exchange organized in 2017 between students at the crossroads of Generation Y and Generation Z in Vancouver and Quebec City, respectively, offer a promising starting point in assessing the strength of bringing together different regional voices to promote bilingualism. Code-switching between the standard, international French that Vancouver students learn in the classroom and the more regional forms of Quebec French spoken locally created regional connectivity between students. The exchange was equally rewarding for both groups. Increasing their appreciation for each other's regional differences allowed them to contribute actively to their social and emotional development. Within a sociolinguistic frame, this proposed model of experiential learning does not focus on hands-on work experience. However, the benefits of such exchanges are as valuable as the work experience initiatives developed in experiential education. Students who actively code-switch between French and English in real, not simulated, contexts appreciate bilingualism more meaningfully and experience its value in concrete terms.
Keywords: experiential learning, intercultural communication, social and emotional learning, sociolinguistic code-switching
Procedia PDF Downloads 139
98 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption
Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy
Abstract:
The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to rely on a conceptual level of its application. The qualitative approach to assessing risks has resulted in divergent risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study aims to investigate risk factors impacting the implementation of the audit risk model and to propose a complementary risk-based instrument (KRIs) to form substantive risk judgments and mitigate the heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. The study uses a three-stage scale development procedure involving a pretest and a subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict the audit effort needed to mitigate the heightened RMM. Data was collected through two independent samples involving 767 participants. The collected data was analyzed using exploratory factor analysis and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, are found to have high predictive power in determining the audit effort needed to reduce RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and decision-making on the adoption of CAATTs. This study contributes to the existing literature by implementing a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessments.
It also helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATTs adoption. A few limitations and recommendations for future research should be mentioned. First, the scale development process was conducted in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalization, future studies could focus on other countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk. However, there could be additional risk factors that influence the assessment of the RMM. Therefore, future research could investigate other risk segments, such as operational and financial risks, to bring broader generalizability to our results. Third, although the sample size in this study fits acceptable scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce the generation of equivocal results and suggest an extended risk model.
Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability
Procedia PDF Downloads 77
97 Wear Resistance in Dry and Lubricated Conditions of Hard-anodized EN AW-4006 Aluminum Alloy
Authors: C. Soffritti, A. Fortini, E. Baroni, M. Merlin, G. L. Garagnani
Abstract:
Aluminum alloys are widely used in many engineering applications due to their advantages such as high electrical and thermal conductivities, low density, high strength-to-weight ratio, and good corrosion resistance. However, their low hardness and poor tribological properties still limit their use in industrial fields requiring sliding contacts. Hard anodizing is one of the most common solutions for overcoming issues concerning the insufficient friction resistance of aluminum alloys. In this work, the tribological behavior of hard-anodized AW-4006 aluminum alloys in dry and lubricated conditions was evaluated. Three different hard-anodizing treatments were selected: a conventional one (HA) and two innovative golden hard-anodizing treatments (named G and GP, respectively), which involve sealing the porosity of the anodic aluminum oxides (AAO) with silver ions at different temperatures. Before the wear tests, all AAO layers were characterized by scanning electron microscopy (VPSEM/EDS), X-ray diffractometry, roughness (Ra and Rz), microhardness (HV0.01), nanoindentation, and scratch tests. Wear tests were carried out according to the ASTM G99-17 standard using a ball-on-disc tribometer. The tests were performed in triplicate under a 2 Hz constant-frequency oscillatory motion, a maximum linear speed of 0.1 m/s, normal loads of 5, 10, and 15 N, and a sliding distance of 200 m. A 100Cr6 steel ball, 10 mm in diameter, was used as the counterpart material. All tests were conducted at room temperature, in dry and lubricated conditions. Considering the more recent regulations on environmental hazard, four bio-lubricants were considered after assessing their chemical composition (in terms of Unsaturation Number, UN) and viscosity: olive, peanut, sunflower, and soybean oils. The friction coefficient was provided by the equipment. The wear rate of the anodized surfaces was evaluated by measuring the cross-section area of the wear track with a non-contact 3D profilometer.
Each area value, obtained as an average of four measurements of cross-section areas along the track, was used to determine the wear volume. The worn surfaces were analyzed by VPSEM/EDS. Finally, in agreement with the DoE methodology, a statistical analysis was carried out to identify the most influential factors on the friction coefficients and wear rates. In all conditions, results show that the friction coefficient increased with increasing normal load. Considering the wear tests in dry sliding conditions, irrespective of the type of anodizing treatment, metal transfer between the mating materials was observed over the anodic aluminum oxides. During sliding at higher loads, the detachment of the metallic film also caused the delamination of some regions of the wear track. For the wear tests in lubricated conditions, the natural oils with high percentages of oleic acid (i.e., olive and peanut oils) maintained high friction coefficients and low wear rates. Irrespective of the type of oil, small microcracks were visible over the AAO layers. Based on the statistical analysis, the type of anodizing treatment and the magnitude of the applied load were the main factors influencing the friction coefficient and wear rate values. Nevertheless, an interaction between bio-lubricants and load magnitude could occur during the tests.
Keywords: hard anodizing treatment, silver ions, bio-lubricants, sliding wear, statistical analysis
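The wear-volume step described above (four profilometer cross-sections averaged, then multiplied by the track length) can be sketched as follows; the area values and track length below are illustrative assumptions, not measurements from the study:

```python
# Hedged sketch of the wear-volume calculation: average four wear-track
# cross-section areas from the 3D profilometer and multiply by the sliding
# track length. All numeric inputs are illustrative placeholders.
def wear_volume_mm3(cross_sections_um2, track_length_mm):
    """Average the cross-section areas (um^2), convert to mm^2,
    and multiply by the track length to get the wear volume (mm^3)."""
    mean_area_mm2 = (sum(cross_sections_um2) / len(cross_sections_um2)) * 1e-6
    return mean_area_mm2 * track_length_mm

# Four hypothetical cross-section areas along the track, in um^2:
volume = wear_volume_mm3([1200.0, 1150.0, 1250.0, 1180.0], track_length_mm=31.4)
# A wear rate would then follow as volume / (normal load * sliding distance).
```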
Procedia PDF Downloads 151
96 Photobleaching Kinetics and Epithelial Distribution of Hexylaminolevulinate-Induced PpIX in Rat Bladder Cancer
Authors: Sami El Khatib, Agnès Leroux, Jean-Louis Merlin, François Guillemin, Marie-Ange D’Hallewin
Abstract:
Photodynamic therapy (PDT) is a treatment modality based on the cytotoxic effect occurring in target tissues through the interaction of a photosensitizer with light in the presence of oxygen. One of the major advances in PDT can be attributed to the use of topical aminolevulinic acid (ALA) to induce protoporphyrin IX (PpIX) for the treatment of early-stage cancers as well as for diagnosis. ALA is a precursor in the heme synthesis pathway. Locally delivered to the target tissue, ALA overcomes the negative feedback exerted by heme and promotes the transient formation of PpIX in situ, reaching critical effective levels in cells and tissue. Whereas the early steps of the heme pathway occur in the cytosol, PpIX synthesis takes place in the mitochondrial membranes, and PpIX fluorescence is expected to accumulate in close vicinity of the initial building site and to progressively diffuse to the neighboring cytoplasmic compartment or other lipophilic organelles. PpIX is known to be highly reactive and is degraded when irradiated with light. PpIX photobleaching is believed to be governed by a singlet-oxygen-mediated mechanism in the presence of oxidized amino acids and proteins. PpIX photobleaching and subsequent spectral phototransformation have been described widely in tumor cells incubated in vitro with ALA solution, and ex vivo in human and porcine mucosa superfused with hexylaminolevulinate (hALA). PpIX photobleaching has also been studied in vivo, using animal models such as normal or tumor-bearing mouse skin and the orthotopic rat bladder model. Hexyl aminolevulinate, a more potent lipophilic derivative of ALA, was proposed as an adjunct to standard cystoscopy in the fluorescence diagnosis of bladder cancer and other malignancies. We have previously reported the effectiveness of hALA-mediated PDT of rat bladder cancer.
Although normal and tumor bladder epithelium exhibit similar fluorescence intensities after intravesical instillation of two hALA concentrations (8 and 16 mM), the therapeutic response at 8 mM and 20 J/cm2 was completely different from the one observed at 16 mM irradiated with the same light dose. Whereas the tumor is destroyed, leaving the underlying submucosa and muscle intact, after an 8 mM instillation, 16 mM sensitization and subsequent illumination result in the complete destruction of the underlying bladder wall but leave the tumor undamaged. The object of the current study is to try to unravel the underlying mechanism of this apparent contradiction. PpIX extraction showed identical amounts of photosensitizer in tumor-bearing bladders at both concentrations. Photobleaching experiments revealed mono-exponential decay curves in both situations, but with a decay constant two times faster in the case of the 16 mM bladders. Fluorescence microscopy shows an identical fluorescence pattern, with bright spots, for normal bladders at both concentrations and for tumor bladders at 8 mM. Tumor bladders at 16 mM exhibit a more diffuse cytoplasmic fluorescence distribution. The different response to PDT with regard to the initial pro-drug concentration can thus be attributed to the different cellular localization.
Keywords: bladder cancer, hexyl-aminolevulinate, photobleaching, confocal fluorescence microscopy
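The mono-exponential decay mentioned above can be sketched as F(t) = F0·exp(−k·t); the decay constants below are illustrative values chosen only to reflect the reported two-fold ratio between the 16 mM and 8 mM conditions, not fitted values from the study:

```python
# Minimal sketch of mono-exponential PpIX photobleaching kinetics.
# k values are illustrative placeholders, not experimental fits.
import math

def fluorescence(t_s, f0, k_per_s):
    """Remaining PpIX fluorescence after t_s seconds of irradiation."""
    return f0 * math.exp(-k_per_s * t_s)

k_8mM = 0.05          # 1/s, illustrative decay constant at 8 mM
k_16mM = 2.0 * k_8mM  # photobleaching two times faster at 16 mM

# Bleaching half-lives: a two-fold faster decay halves the half-life.
half_life_8mM = math.log(2) / k_8mM
half_life_16mM = math.log(2) / k_16mM
```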
Procedia PDF Downloads 408
95 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces
Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava
Abstract:
Preventing the spread of infectious diseases worldwide is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths in the world, but also cause many pathological complications for human health. Touch surfaces pose an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms. Further, antimicrobial resistance is the response of bacteria to the overuse or inappropriate use of antibiotics everywhere. The biggest challenges in bacterial detection by existing methods are indirect determination, long analysis times, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. Therefore, high-performance, rapid, real-time detection is in demand for practical bacterial detection and for controlling epidemiological hazards. Among the known methods for determining bacteria on surfaces, hyperspectral methods can be used as direct and rapid methods for microorganism detection on different kinds of surfaces based on fluorescence, without sampling, sample preparation or chemicals. The aim of this study was to assess the relevance of such systems to remote sensing of surfaces for microorganism detection to prevent a global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10^8 cells/100 µL) were detected with a hyperspectral camera using different filters for the visualization of bacteria and background spots on a steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from the sample to the hyperspectral camera and to the light source are 25 cm and 40 cm, respectively. Each sample is optically imaged from the surface by the hyperspectral imaging system, utilizing a JAI CM-140GE-UV camera. The light source is a BeamZ FLATPAR DMX Tri-light, with 3 W tri-colour LEDs (red, blue and green).
Light colors are changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting the exposure and focusing for light with λ = 525 nm. The filter is a Thorlabs Kurios™ hyperspectral filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing and multivariate analysis were performed using LabVIEW and Python software. The studied bacterial stains, both visible and invisible to the human eye, clustered apart from the reference steel material in a clustering analysis using different light sources and filter wavelengths. The calculation of the random and systematic errors of the analysis results proved the applicability of the method in real conditions. Validation experiments were carried out with photometry and an ATP swab test. The lower detection limit of the developed method is several orders of magnitude lower than that of both validation methods. All parameters of the experiments were the same, except for the light. The hyperspectral imaging method allows separating not only bacteria and surfaces, but also different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. The developed method allows skipping the sample preparation and the use of chemicals, unlike all other microbiological methods. The time of analysis with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological testing.
Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganisms detection
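The clustering idea described above can be illustrated with a toy nearest-centroid assignment of per-pixel spectra; the band values, centroids, and function names here are hypothetical stand-ins, not the study's actual LabVIEW/Python pipeline:

```python
# Illustrative sketch: assign each pixel's spectrum to the nearest
# reference centroid, separating bacterial spots from the steel background.
# All spectra and centroids below are hypothetical examples.
def spectral_distance(a, b):
    """Euclidean distance between two per-pixel spectra."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(pixel_spectra, centroids):
    """Label each pixel spectrum with the nearest cluster centroid."""
    return [min(centroids, key=lambda c: spectral_distance(spec, centroids[c]))
            for spec in pixel_spectra]

# Hypothetical mean intensities at three filter wavelengths:
centroids = {"steel": [0.80, 0.70, 0.60], "bacteria": [0.20, 0.50, 0.90]}
labels = classify([[0.78, 0.72, 0.58], [0.25, 0.48, 0.85]], centroids)
```

In a full pipeline the centroids would come from an unsupervised step (e.g. k-means over the hyperspectral cube) rather than being fixed by hand.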
Procedia PDF Downloads 229
94 Education Management and Planning with a Manual-Based Approach
Authors: Purna Bahadur Lamichhane
Abstract:
Education planning and management are foundational pillars for developing effective educational systems. However, in many educational contexts, especially in developing nations, technology-enabled management is still emerging. In such settings, manual-based systems, where instructions and guidelines are physically documented, remain central to educational planning and management. This paper examines the effectiveness, challenges, and potential of manual-based education planning systems in fostering structured, reliable, and adaptable management frameworks. The objective of this study is to explore how a manual-based approach can successfully guide administrators, educators, and policymakers in delivering high-quality education. By using structured, accessible instructions, this approach serves as a blueprint for educational governance, offering clear, actionable steps to achieve institutional goals. Through an analysis of case studies from various regions, the paper identifies key strategies for planning school schedules, managing resources, and monitoring academic and administrative performance without relying on automated systems. The findings underscore the significance of organized documentation, standard operating procedures, and comprehensive manuals that establish uniformity and maintain educational standards across institutions. With a manual-based approach, management can remain flexible, responsive, and user-friendly, especially in environments where internet access and digital literacy are limited. Moreover, it allows for localization, where instructions can be tailored to the unique cultural and socio-economic contexts of the community, thereby increasing relevancy and ownership among local stakeholders. This paper also highlights several challenges associated with manual-based education management. 
Manual systems often require significant time and human resources for maintenance and updating, potentially leading to inefficiencies and inconsistencies over time. Furthermore, manual records can be susceptible to loss, damage, and limited accessibility, which may affect decision-making and institutional memory. There is also the risk of siloed information, where crucial data resides with specific individuals rather than being accessible across the organization. However, with proper training and regular oversight, many of these limitations can be mitigated. The study further explores the potential for hybrid approaches, combining manual planning with selected digital tools for record-keeping, reporting, and analytics. This transitional strategy can enable schools and educational institutions to gradually embrace digital solutions without discarding the familiarity and reliability of manual instructions. In conclusion, this paper advocates for a balanced, context-sensitive approach to education planning and management. While digital systems hold the potential to streamline processes, manual-based systems offer resilience, inclusivity, and adaptability for institutions where technology adoption may be constrained. Ultimately, by reinforcing the importance of structured, detailed manuals and instructional guides, educational institutions can build robust management frameworks that facilitate both short-term successes and long-term growth in their educational mission. This research aims to provide a reference for policymakers, educators, and administrators seeking practical, low-cost, and adaptable solutions for sustainable educational planning and management.
Keywords: education, planning, management, manual
Procedia PDF Downloads 199
93 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach
Authors: Rupesh S. Gundewar, Kanchan C. Khare
Abstract:
Runoff contribution from urban areas comes from manmade structures and a few natural contributors. The manmade structures are buildings, roads, and other paved areas, whereas the natural contributors are groundwater, overland flows, etc. Runoff alleviation is provided by manmade as well as natural storages. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in western and European countries. Natural storages include catchment slope, infiltration, catchment length, channel rerouting, drainage density, depression storage, etc. A literature survey on the manmade and natural storages/inflows has reported the percentage contribution of each individually. Sanders et al. reported that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. reported that catchment slope has an impact of 16% on bare standard soil and 24% on grassed soil on rainfall runoff. Infiltration, being dependent on the pervious/impervious ratio, is catchment specific, but the literature reports a 15% to 30% loss of rainfall runoff across various catchment study areas. Catchment length and channel rerouting also play a considerable role in the reduction of rainfall runoff. Ground infiltration inflow adds to the runoff where the groundwater table is very shallow and the soil saturates even in a lower-intensity storm; together, this inflow and surface inflow contribute about 2% of the total runoff volume. Considering the various contributing factors in runoff, the literature survey indicates that an integrated modelling approach needs to be considered. The traditional storm water network models are able to predict to a fair/acceptable degree of accuracy provided no interactions with receiving waters (river, sea, canal, etc.), ground infiltration, treatment works, etc. are assumed.
When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach. As a result, the actual flooding situation is rarely addressed accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate, at the cost of requiring more accurate spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It enables identification of the source of flow in the system, understanding of how it is conveyed, and assessment of its impact on the receiving body. It also confirms important pain points, hydraulic controls, and the sources of flooding, which could not be easily understood with a discrete modelling approach. This also enables decision makers to identify solutions that can be spread throughout the catchment rather than being concentrated at the single point where the problem exists. Thus it can be concluded from the literature survey that the representation of urban details can be a key differentiator in successfully understanding a flooding issue. The intent of this study is to accurately predict the runoff from impermeable urban areas in India. A representative area for which data were available has been selected, and predictions have been made and corroborated with the actual measured data.
Keywords: runoff, urbanization, impermeable response, flooding
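As a first-order illustration of how the surveyed loss percentages combine, the sketch below chains fractional reductions multiplicatively. The loss values are mid-range figures taken from the percentages quoted above (the depression-storage figure is a placeholder assumption); this is a screening estimate, not the integrated model the study proposes.

```python
def screening_runoff(rainfall_volume_m3, loss_fractions):
    """First-order runoff estimate: apply each fractional loss in sequence."""
    runoff = rainfall_volume_m3
    for _, frac in loss_fractions.items():
        runoff *= (1.0 - frac)
    return runoff

losses = {
    "vegetation canopy": 0.10,   # mid-range of the 7-12% reduction cited above
    "infiltration": 0.22,        # mid-range of the 15-30% loss cited above
    "depression storage": 0.05,  # illustrative placeholder assumption
}
vol = screening_runoff(1000.0, losses)
print(round(vol, 1))  # 1000 * 0.90 * 0.78 * 0.95 = 666.9 m3
```

An integrated model replaces these fixed fractions with process representations (infiltration curves, channel routing, receiving-water interaction), but the multiplicative form shows why individually modest losses compound into a large reduction.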
Procedia PDF Downloads 250
92 Prevalence, Median Time, and Associated Factors with the Likelihood of Initial Antidepressant Change: A Cross-Sectional Study
Authors: Nervana Elbakary, Sami Ouanes, Sadaf Riaz, Oraib Abdallah, Islam Mahran, Noriya Al-Khuzaei, Yassin Eltorki
Abstract:
Major Depressive Disorder (MDD) requires therapeutic interventions during the initial month after diagnosis for better disease outcomes. International guidelines recommend a duration of 4–12 weeks for an initial antidepressant (IAD) trial at an optimized dose to obtain a response. If depressive symptoms persist after this duration, guidelines recommend switching, augmenting, or combining strategies as the next step. Most patients with MDD in the mental health setting have been labeled incorrectly as treatment-resistant when in fact they have not been subjected to an adequate trial of guideline-recommended therapy. Premature discontinuation of the IAD due to ineffectiveness can cause unfavorable consequences. Avoiding irrational practices, such as subtherapeutic doses of the IAD, premature switching between antidepressants, and unjustified polypharmacy, can help the disease go into remission. We aimed to determine the prevalence and patterns of strategies applied after an IAD was changed because of a suboptimal response as the primary outcome. Secondary outcomes included the median survival time on the IAD before any change and the predictors associated with IAD change. This was a retrospective cross-sectional study conducted in the Mental Health Services in Qatar. A dataset covering January 1, 2018, to December 31, 2019, was extracted from the electronic health records. Inclusion and exclusion criteria were defined and applied. The sample size was calculated to be at least 379 patients. Descriptive statistics were reported as frequencies and percentages, in addition to means and standard deviations. The median time from IAD initiation to any change strategy was calculated using survival analysis. Associated predictors were examined using unadjusted and adjusted Cox regression models. A total of 487 patients met the inclusion criteria of the study. The average age of participants was 39.1 ± 12.3 years.
Patients experiencing a first MDD episode (255, 52%) constituted the larger part of our sample compared to the relapse group (206, 42%). In total, 431 (88%) of the patients had their IAD changed to another strategy before the end of the study. Almost half of the sample (212 (49%); 95% CI [44–53%]) had their IAD changed within 30 days or less. Switching was consistently more common than combination or augmentation at any time point. The median time to IAD change was 43 days, with 95% CI [33.2–52.7]. Five independent variables (age, bothersome side effects, un-optimization of the dose before any change, comorbid anxiety, and first-onset episode) were significantly associated with the likelihood of IAD change in the unadjusted analysis. The factors statistically associated with a higher hazard of IAD change in the adjusted analysis were younger age, un-optimization of the IAD dose before any change, and comorbid anxiety. Because almost half of the patients in this study changed their IAD as early as within the first month, efforts to avoid treatment failure are needed to ensure patient-treatment targets are met. The findings of this study can provide direct clinical guidance for health care professionals, since an optimized, evidence-based use of antidepressant medication can improve the clinical outcomes of patients with MDD, and can also help identify high-risk factors that could worsen the survival time on the IAD, such as young age and comorbid anxiety.
Keywords: initial antidepressant, dose optimization, major depressive disorder, comorbid anxiety, combination, augmentation, switching, premature discontinuation
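The median time to IAD change reported above comes from survival analysis; a minimal Kaplan-Meier median on toy data can illustrate the computation. The times and censoring flags below are invented for the sketch, not patient data from the study.

```python
def km_median(times, events):
    """Kaplan-Meier median: first time the survival curve drops to <= 0.5.

    times  -- days until IAD change or censoring
    events -- 1 if the IAD was changed at that time, 0 if censored
    (no handling of tied event times; toy-data sketch only)
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv = len(times), 1.0
    for i in order:
        if events[i]:
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
        if surv <= 0.5:
            return times[i]
    return None  # curve never reached 50%

times  = [10, 20, 25, 30, 43, 60, 90, 120]
events = [ 1,  1,  0,  1,  1,  0,  1,   0]
print(km_median(times, events))  # -> 43 for this toy cohort
```

Censored patients (events = 0) leave the risk set without forcing the curve down, which is why the Kaplan-Meier median differs from the plain median of observed change times.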
Procedia PDF Downloads 153
91 Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction
Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman
Abstract:
Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. 
By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.
Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation
Procedia PDF Downloads 93
90 In Vitro Intestine Tissue Model to Study the Impact of Plastic Particles
Authors: Ashleigh Williams
Abstract:
Micro- and nanoplastics' (MNLPs) omnipresence and ecological accumulation is evident when surveying recent environmental impact studies. For example, in 2014 it was estimated that at least 52.3 trillion plastic microparticles are floating at sea, and scientists have even found plastics in remote Arctic ice and snow (5,6). Plastics have even found their way into precipitation, with more than 1000 tons of microplastic rain precipitating onto the Western United States in 2020. Even more recent studies evaluating the chemical safety of reusable plastic bottles found that hundreds of chemicals leached into the control liquid in the bottle (ddH2O, pH = 7) over a 24-hour period. A consequence of the yearly increase in plastic waste in the air, land, and water is the bioaccumulation of MNLPs in ecosystems and trophic niches of the animal food chain, which could increase direct and indirect exposure of humans to MNLPs via inhalation, ingestion, and dermal contact. Though the detrimental, toxic effects of MNLPs have been established in marine biota, much less is known about the potentially hazardous health effects of chronic MNLP ingestion in humans. Recent data indicate that long-term exposure to MNLPs could cause inflammatory and dysbiotic effects; however, toxicity seems to be largely dose- as well as size-dependent. In addition, the mechanisms of transcytotic uptake of MNLPs through the intestinal epithelium in humans remain relatively unknown. To this end, the goal of the current study was to investigate the mechanisms of micro- and nanoplastic uptake and transcytosis of polystyrene (PS) in human stem-cell-derived, physiologically relevant in vitro intestinal model systems, and to compare the relative effect of particle size (30 nm, 100 nm, 500 nm, and 1 µm) and concentration (0 µg/mL, 250 µg/mL, 500 µg/mL, 1000 µg/mL) on polystyrene MNLP uptake, transcytosis, and intestinal epithelial model integrity.
Observational and quantitative data obtained from confocal microscopy, immunostaining, transepithelial electrical resistance (TEER) measurements, cryosectioning, and ELISA assays of the proinflammatory cytokines interleukin-6 and interleukin-8 were used to evaluate the localization and transcytosis of polystyrene MNLPs and their impact on epithelial integrity in human-derived intestinal in vitro model systems. The effect of Microfold (M) cell induction on polystyrene micro- and nanoparticle uptake, transcytosis, and potential inflammation was also assessed and compared to samples grown under standard conditions. Microfold (M) cells link the human intestinal system to the immune system and are the primary cells in the epithelium responsible for sampling and transporting foreign matter of interest from the lumen of the gut to underlying immune cells. Given the capability of Microfold cells to interact both specifically and nonspecifically with abiotic and biotic materials, it was expected that M-cell-induced in vitro samples would show increased binding, localization, and potentially transcytosis of polystyrene MNLPs across the epithelial barrier. The experimental results of this study would not only help in the evaluation of plastic toxicity, but would also allow for more detailed modeling of gut inflammation and the intestinal immune system.
Keywords: nanoplastics, enteroids, intestinal barrier, tissue engineering, microfold (M) cells
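Because the study compares particle sizes at equal mass concentrations, it is worth noting how sharply particle number varies with diameter. The sketch below converts a mass concentration to a particle count for monodisperse spheres; the polystyrene density of 1.05 g/cm³ is a typical literature value assumed here, not a figure from the study.

```python
import math

def particles_per_ml(mass_ug_per_ml, diameter_nm, density_g_cm3=1.05):
    """Particles per mL for monodisperse spherical beads of one size.

    Assumes perfect spheres; 1.05 g/cm^3 is a typical polystyrene density
    (an assumption for illustration, not a value from the study).
    """
    radius_cm = diameter_nm * 1e-7 / 2               # nm -> cm
    vol_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    mass_per_particle_g = vol_cm3 * density_g_cm3
    return (mass_ug_per_ml * 1e-6) / mass_per_particle_g  # ug -> g

# The same 500 ug/mL dose corresponds to vastly different particle numbers:
n_30nm = particles_per_ml(500, 30)
n_1um = particles_per_ml(500, 1000)
print(f"{n_30nm:.2e} vs {n_1um:.2e} particles/mL")
```

Since particle mass scales with diameter cubed, the 30 nm suspension carries roughly (1000/30)³ ≈ 37,000 times more particles than the 1 µm suspension at the same mass concentration, which is one reason size-matched dose metrics matter when interpreting uptake results.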
Procedia PDF Downloads 85
89 Mesovarial Morphological Changes in Offspring Exposed to Maternal Cold Stress
Authors: Ariunaa.S., Javzandulam E., Chimegsaikhan S., Altantsetseg B., Oyungerel S., Bat-Erdene T., Naranbaatar S., Otgonbayar B., Suvdaa N., Tumenbayar B.
Abstract:
Introduction: Prenatal stress has been linked to heightened allergy sensitivity in offspring. However, there is a notable absence of research on the mesovarium structure of offspring born to mothers subjected to cold stress during pregnancy. Understanding the impact of maternal cold stress on the mesovarium structure could provide valuable insights into reproductive health outcomes in offspring. Objective: This study aims to investigate structural changes in the mesovarium of offspring born to cold-stressed rats. Material and Methods: Twenty female Wistar rats weighing around 200 g were chosen and evenly divided into four containers; then, 2-3 male rats were introduced into each container. The Papanicolaou method was used to assess spermatozoa and the estrus period from vaginal swabs taken from the female rats at 8:00 a.m. Female rats with spermatozoa present during the estrous phase of the estrous cycle were defined as pregnant. Pregnant rats were divided into experimental and control groups. The experimental group was stressed using a model of severe chronic cold stress for 30 days: the rats were exposed to cold for 3 hours each morning between 8:00 and 11:00 at a temperature of minus 15 degrees Celsius. The control group was kept under normal laboratory conditions. Newborn female rats from both the experimental and control groups were selected. At 2 months of age, the rats were euthanized by decapitation, and their mesovaria were collected. Tissues were fixed in 4% formalin, embedded in paraffin, and sectioned into 5 µm-thick slices. The sections were stained with H&E and digitized with a digital microscope. The area of brown fat and inflammatory infiltrations were quantified using ImageJ software. Blood cortisol levels were measured using ELISA. Data are expressed as the mean ± standard error of the mean (SEM). The Mann-Whitney test was used to compare the two groups. All analyses were performed using Prism (GraphPad Software).
A p-value of < 0.05 was considered statistically significant. Results: Offspring born to stressed mothers exhibited significant physiological differences compared to the control group. Specifically, the body weight of offspring from stressed mothers was significantly lower than in the control group (p=0.0002). Conversely, the cortisol level in offspring from stressed mothers was significantly higher (p=0.0446). Offspring born to stressed mothers showed a statistically significant increase in brown fat area compared to the control group (p=0.01). Additionally, offspring from stressed mothers had a significantly higher number of inflammatory infiltrates in their mesovarium compared to the control group (p=0.047). These results indicate the profound impact of maternal stress on offspring physiology, affecting body weight, stress hormone levels, metabolic characteristics, and inflammatory responses. Conclusion: Exposure to cold stress during pregnancy has significant repercussions on offspring physiology. Our findings demonstrate that cold stress exposure leads to increased blood cortisol levels, brown fat accumulation, and inflammatory cell infiltration in offspring. These results underscore the profound impact of maternal stress on offspring health and highlight the importance of mitigating environmental stressors during pregnancy to promote optimal offspring outcomes.
Keywords: brown fat, cold stress during pregnancy, inflammation, mesovarium
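The two-group comparisons above use the Mann-Whitney test; a minimal version with a normal approximation (two-sided, no tie or continuity correction) is sketched below. The body weights are invented toy numbers, not the study's measurements.

```python
from statistics import NormalDist

def mann_whitney_u(x, y):
    """Mann-Whitney U for sample x vs. y with a normal-approximation p-value.

    Minimal sketch: two-sided, no tie or continuity correction, so it is only
    reasonable for modest samples without heavy ties.
    """
    nx, ny = len(x), len(y)
    allv = sorted(x + y)
    def rank(v):  # 1-based average rank (ties share the mean rank)
        lo = allv.index(v)
        hi = lo + allv.count(v)
        return (lo + 1 + hi) / 2
    rank_sum_x = sum(rank(v) for v in x)
    u = rank_sum_x - nx * (nx + 1) / 2
    mu = nx * ny / 2
    sigma = (nx * ny * (nx + ny + 1) / 12) ** 0.5
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u, p

control  = [48, 52, 50, 55, 53, 51]   # toy body weights, illustrative only
stressed = [40, 42, 45, 44, 41, 43]
u, p = mann_whitney_u(control, stressed)
print(u, round(p, 4))  # small p: the two toy groups differ
```

For samples this small, statistical packages such as Prism use the exact U distribution rather than this normal approximation, which is why exact p-values can differ slightly.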
Procedia PDF Downloads 49
88 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra, where they would otherwise superimpose within a single-energy peak and, as such, could compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when identifying radionuclides and their activity concentrations, where high precision is a necessity. In measurements of this nature, to reproduce good and trustworthy results, one must first perform an adequate full-energy peak (FEP) efficiency calibration of the equipment. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned to different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector.
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of two HPGe detector models through the implementation of the Geant4 toolkit developed at CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, falling within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector within the energy ranges of 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
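The FEP efficiencies discussed above depend first on simple geometry; the toy Monte Carlo below samples isotropic emission from a point source on the detector axis and counts hits on a disk-shaped crystal face. Geant4 then tracks the photon physics inside the crystal; this sketch, with assumed source-detector dimensions, covers only the geometric factor.

```python
import math, random

def geometric_efficiency(source_dist_cm, crystal_radius_cm, n=200_000):
    """Fraction of isotropically emitted photons that hit the crystal face.

    Toy Monte Carlo: directions are sampled uniformly on the sphere and
    intersected with a disk; interaction physics inside the crystal, which
    codes like Geant4 simulate in detail, is deliberately ignored.
    """
    rng = random.Random(42)
    hits = 0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)   # isotropic: cos(theta) is uniform
        if cos_t <= 0:
            continue                     # photon heads away from the detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        # radial distance where the ray crosses the detector plane
        if source_dist_cm * sin_t / cos_t <= crystal_radius_cm:
            hits += 1
    return hits / n

# Assumed geometry for illustration: source 5 cm from a 3 cm radius crystal.
eff = geometric_efficiency(5.0, 3.0)
print(round(eff, 3))
```

The Monte Carlo estimate can be checked against the closed-form solid-angle fraction for this configuration, 0.5·(1 − d/√(d² + r²)) ≈ 0.071, which is exactly the kind of consistency check used when validating a detector model.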
Procedia PDF Downloads 121
87 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties stemming from numerous issues such as a lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and for different variables from the different models. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights via a least-squares minimization using simple multiple regression. These weights are then used in the forecast phase. Superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed in the present study, based on the assumption that training on similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested with the above-mentioned approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
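The training-phase regression described above can be sketched as an ordinary least-squares fit of observed values on the member-model forecasts. The sketch below solves the normal equations directly; real superensembles regress anomalies about climatology and include a bias term, both omitted here, and the three "models" and the truth are synthetic.

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def superensemble_weights(forecasts, observed):
    """Least-squares weights: minimize ||F w - o|| via the normal equations."""
    k = len(forecasts[0])
    ata = [[sum(f[i] * f[j] for f in forecasts) for j in range(k)] for i in range(k)]
    atb = [sum(f[i] * o for f, o in zip(forecasts, observed)) for i in range(k)]
    return solve(ata, atb)

# Synthetic training set: each row is (model1, model2, model3) rainfall; the
# "truth" is built as 0.5*m1 + 0.3*m2 + 0.2*m3 so the target weights are known.
train = [(10.0, 12.0, 8.0), (5.0, 4.0, 6.0), (20.0, 18.0, 25.0),
         (0.0, 1.0, 0.5), (15.0, 14.0, 13.0)]
obs = [0.5 * a + 0.3 * b + 0.2 * c for a, b, c in train]
w = superensemble_weights(train, obs)
print([round(v, 3) for v in w])  # recovers [0.5, 0.3, 0.2]
```

In an operational superensemble these weights are computed per grid point and per forecast lead time, which is exactly where the dynamical model selection and similarity-based training variants intervene.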
Procedia PDF Downloads 139
86 The Design of a Phase I/II Trial of Neoadjuvant RT with Interdigitated Multiple Fractions of Lattice RT for Large High-grade Soft-Tissue Sarcoma
Authors: Georges F. Hatoum, Thomas H. Temple, Silvio Garcia, Xiaodong Wu
Abstract:
Soft tissue sarcomas (STS) represent a diverse group of malignancies with heterogeneous clinical and pathological features. The treatment of extremity STS aims to achieve optimal local tumor control, improved survival, and preservation of limb function. The National Comprehensive Cancer Network guidelines, based on the accumulated clinical data, recommend radiation therapy (RT) in conjunction with limb-sparing surgery for large, high-grade STS measuring greater than 5 cm. Such a treatment strategy can offer a cure for patients. However, when recurrence occurs (in nearly half of patients), the prognosis is poor, with a median survival of 12 to 15 months and only palliative treatment options available. Spatially fractionated radiotherapy (SFRT), which has a long history of treating bulky tumors as a non-mainstream technique, has gained new attention in recent years due to its unconventional therapeutic effects, such as bystander/abscopal effects. Combining a single fraction of GRID, the original form of SFRT, with conventional RT was shown to marginally increase the rate of pathological necrosis, which has been recognized to correlate positively with overall survival. In an effort to consistently raise the pathological necrosis rate above 90%, multiple fractions of Lattice RT (LRT), a newer form of 3D SFRT, interdigitated with standard RT as neoadjuvant therapy, were evaluated in a preliminary clinical setting. With favorable results of over 95% necrosis in a small cohort of patients, a Phase I/II clinical study was proposed to examine the safety and feasibility of this new strategy. Herein the design of the clinical study is presented.
In this single-arm, two-stage phase I/II clinical trial, the primary objectives are for >80% of the patients to achieve >90% tumor necrosis and to evaluate toxicity; the secondary objectives are to evaluate local control, disease-free survival, and overall survival (OS), as well as the correlation between clinical response and relevant biomarkers. The study plans to accrue patients over a span of two years. All patients will be treated with the new neoadjuvant RT regimen, in which one of every five fractions of conventional RT is replaced by an LRT fraction, with vertices receiving doses ≥10 Gy while keeping the tumor periphery at or close to 2 Gy per fraction. Surgical removal of the tumor is planned 6 to 8 weeks following the completion of radiation therapy. The study will employ a Pocock-style early stopping boundary to ensure patient safety. The patients will be followed and monitored for a period of five years. Despite much effort, the rarity of the disease has resulted in limited novel therapeutic breakthroughs. Although a higher rate of treatment-induced tumor necrosis has been associated with improved OS, with current techniques only 20% of patients with large, high-grade tumors achieve a tumor necrosis rate exceeding 50%. If this new neoadjuvant strategy is proven effective, an appreciable improvement in clinical outcome without added toxicity can be anticipated. Given the rarity of the disease, it is hoped that such a study could be orchestrated in a multi-institutional setting.
Keywords: lattice RT, necrosis, SFRT, soft tissue sarcoma
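Interim monitoring against the necrosis endpoint rests on binomial tail probabilities. The sketch below computes one such exact tail under the 80% target rate; the interim counts are invented, and actual Pocock-style boundary constants depend on design parameters (number of looks, alpha spending) not specified in the abstract.

```python
from math import comb

def binom_tail_le(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

# Futility-style check at an interim look: if the true success rate were the
# 80% target, how unlikely is seeing only k responders among n?
# (Toy interim data for illustration, not trial results.)
n, k = 15, 7
p_low = binom_tail_le(k, n, 0.80)
print(round(p_low, 4))  # small tail: evidence the 80% target may not be met
```

A formal two-stage design compares such tail probabilities to pre-specified boundary values at each look, stopping early for futility or safety when a boundary is crossed.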
85 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification
Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos
Abstract:
Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancers in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist’s visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, so histopathological examination of biopsy samples is currently considered the gold standard for obtaining a definite diagnosis. Machine learning is the study of computational algorithms that can exploit mathematical relationships and patterns, simple or complex, in empirical and scientific data to make reliable decisions. Accordingly, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could be, useful in practice for medical image segmentation and registration; computer-aided detection and diagnosis systems for CT, MR or radiography images; and functional MR (fMRI) analysis of brain activity for neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients, by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow.
Materials and Methods: A cohort of 80 pediatric subjects was used: 20 with ependymomas, 20 with astrocytomas, 20 with medulloblastomas (all posterior fossa tumors) and 20 healthy children. The MR sequences used for every patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI) and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, which was manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features were chosen for investigation, including age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features achieving the highest accuracy with the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis were implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The classification results and accuracy for each type of glioma by the four different algorithms are still in progress.
Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology
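To illustrate the classification step, a minimal k-Nearest Neighbour classifier (the simplest of the four algorithms listed) can be sketched as follows; the two features and their values are hypothetical stand-ins for the selected image features:

```python
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Classify feature vector x by majority vote of its k nearest
    training samples under Euclidean distance."""
    dists = sorted((math.dist(t, x), lab) for t, lab in zip(train, labels))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-feature training set (e.g., normalized intensity and texture scores);
# values and class separation are purely illustrative
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["astrocytoma", "astrocytoma", "medulloblastoma", "medulloblastoma"]
print(knn_predict(train, labels, (0.15, 0.15)))  # astrocytoma
```

In practice, the WEKA implementation would be trained on the full selected feature set rather than two toy features.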
84 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process
Authors: Reyna Singh, David Lokhat, Milan Carsky
Abstract:
The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and the incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity and HDS performance at temperatures 50℃ higher than the respective first-stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS).
Internal standard quantification methods for the sulphur content; the BTX (benzene, toluene, and xylene) and alkene quality; and the alkane and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by the lower operating temperature of 250℃, a reaction time of 60 minutes and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. In terms of HDS performance and selectivity towards long- and branched-chain alkanes, NiMo performed better than CoMo, while CoMo was selective towards a higher concentration of cyclohexane. Over 16 days on stream each, NiMo showed higher activity than CoMo. The potential of the process to help cover the demand for low-sulphur crude diesel and solvents through the production of a high-value hydrocarbon liquid is thus demonstrated.
Keywords: catalyst, coal, liquefaction, temperature-staged
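The internal standard quantification used for the GC-MS analysis follows the standard area-ratio relation; a minimal sketch, with all peak areas, concentrations and the response factor purely illustrative:

```python
def quantify(analyte_area, is_area, is_conc, rrf):
    """Internal-standard quantification: the analyte concentration is the
    peak-area ratio to the internal standard, scaled by the standard's
    known concentration and the relative response factor (RRF)."""
    return (analyte_area / is_area) * is_conc / rrf

# Hypothetical GC-MS peak areas for a BTX component against an internal
# standard spiked at 10.0 mg/L, with an assumed RRF of 1.25
print(quantify(analyte_area=5.0e6, is_area=2.0e6, is_conc=10.0, rrf=1.25))  # 20.0
```

The RRF itself would be established beforehand from calibration mixtures, per the relevant ASTM practice.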
83 Isolation and Probiotic Characterization of Lactobacillus plantarum and Lactococcus lactis from Gut Microbiome of Rohu (Labeo rohita)
Authors: Prem Kumar, Anuj Tyagi, Harsh Panwar, Vaneet Inder Kaur
Abstract:
Though aquaculture started as a livelihood occupation for poor farmers, it has now taken the shape of one of the biggest industries for growing live protein in the form of aquatic organisms. Industrialization of the aquaculture sector has led to intensification, resulting in stress on aquatic organisms and frequent disease outbreaks with huge economic impacts. Indiscriminate use of antibiotics as growth promoters and prophylactic agents in aquaculture has resulted in the rapid emergence and spread of antibiotic resistance in bacterial pathogens. Over the past few years, the use of probiotics (as an alternative to antibiotics) in aquaculture has gained attention due to their immunostimulant and growth-promoting properties. It is now well known that, after administration, a probiotic bacterium has to compete with and establish itself against the native microbiota to deliver its eventual beneficial properties. Due to their non-fish origin, commercial probiotics may sometimes display poor probiotic functionality and antagonistic effects. Thus, isolation and characterization of probiotic bacteria from the same fish host is necessary. In this study, attempts were made to isolate potent probiotic lactic acid bacteria (LAB) from the intestinal microflora of rohu fish. Twenty-five experimental rohu fishes (mean weight 400 ± 20 g, mean standard length 20 ± 3 cm) were used in the study; fish gut was collected after dissection under sterile conditions. A total of 150 tentative LAB isolates from selective agar media (de Man-Rogosa-Sharpe (MRS)) were screened for antimicrobial activity against Aeromonas hydrophila and Micrococcus luteus. A total of 17 isolates, identified as Lactobacillus plantarum and Lactococcus lactis by biochemical tests and by PCR amplification and sequencing of a 16S rRNA gene fragment, displayed promising antimicrobial activity against both pathogens. Two isolates from each species (FLB1, FLB2 from L. plantarum; and FLC1, FLC2 from L.
lactis) were subjected to downstream characterization of probiotic potential. These isolates were compared in vitro for hemolytic activity, acid and bile tolerance (growth kinetics), auto-aggregation, cell-surface hydrophobicity against xylene and chloroform, tolerance to phenol, cell adhesion, and safety parameters (by intraperitoneal and intramuscular injections). None of the tested isolates showed any hemolytic activity, indicating their potential safety. Moreover, these isolates were tolerant to 0.3% bile (75-82% survival) and phenol stress (96-99% survival), with 100% viability at pH 3 over a period of 3 h. Antibiotic sensitivity testing revealed that all tested LAB isolates were resistant to vancomycin, gentamicin and streptomycin, and sensitive to erythromycin, chloramphenicol, ampicillin, trimethoprim, and nitrofurantoin. Tetracycline resistance was found in L. plantarum (isolates FLB1 and FLB2), whereas L. lactis was susceptible to it. Intramuscular and intraperitoneal challenge of rohu fingerlings (5 ± 1 g weight) with FLB1 showed no pathogenicity or occurrence of disease symptoms over an observation period of 7 days. The results revealed FLB1 as a potential probiotic candidate for aquaculture application among the isolates tested.
Keywords: aquaculture, Lactobacillus plantarum, Lactococcus lactis, probiotics
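The tolerance figures above (e.g., 75-82% bile survival) follow from a simple viability ratio of colony counts before and after the stress challenge; a sketch with hypothetical CFU counts:

```python
def survival_pct(cfu_after, cfu_before):
    """Percent viability after a stress challenge (e.g., 0.3% bile or
    phenol exposure), from colony-forming-unit counts before and after."""
    return 100.0 * cfu_after / cfu_before

# Hypothetical counts consistent with the 75-82% bile tolerance reported above
print(round(survival_pct(7.8e7, 1.0e8)))  # 78
```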
82 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing
Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May
Abstract:
Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the evolution of the Liquefied Natural Gas (LNG) industry because of the tremendous technical challenges present at all stages of the supply chain, from production to liquefaction to transport. Each stage is designed using predictions of the mixture’s properties, such as density, viscosity, surface tension, heat capacity and phase behaviour, as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture’s critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble points and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures up to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these, the heptane content was up to 10 mol %.
Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of advanced engineering models frequently used for predicting the thermophysical properties of mixtures relevant to LNG processing. In many cases the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely used models to be identified. The data enable the development of new or improved models, to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by enabling deficiencies in theoretical descriptions and calculations to be identified.
Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models
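The model-versus-data comparison described above reduces, at its core, to relative deviations between measured and predicted property values; a minimal sketch with hypothetical density values (not the measured data):

```python
def relative_deviations(measured, predicted):
    """Percent relative deviation of model predictions from measurements,
    the basic comparison used to judge engineering models against data."""
    return [100.0 * (p - m) / m for m, p in zip(measured, predicted)]

def max_abs_deviation(measured, predicted):
    """Worst-case absolute percent deviation across the data set."""
    return max(abs(d) for d in relative_deviations(measured, predicted))

# Hypothetical densities (kg/m3) for a (CH4 + C3H8) mixture vs. a model
rho_exp = [420.0, 415.2, 408.9]
rho_mod = [421.3, 413.0, 410.1]
print(round(max_abs_deviation(rho_exp, rho_mod), 2))  # 0.53
```

A deviation comfortably exceeding the 0.3% experimental uncertainty, as here, is what flags a model as deficient.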
81 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population
Authors: Ye Xue, Zhenhua Deng
Abstract:
Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring face-to-skull distances at sparsely distributed anatomical landmarks manually located on the face and skull. However, automated measurement over dense points on 3D facial and skull models using open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed and densely calculated FSTT database is crucial to enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants born and residing in northern China and 80 in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (taken as the FSTT) between the skull and face models. The Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distribution could be viewed and subdivided into smaller increments. All PLY files were visualized with the Hausdorff distance value of each vertex. Basic descriptive statistics (mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed considering sex, age, BMI and birthplace.
Statistical methods employed included multiple regression analysis, ANOVA and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in the forehead, orbital, mandibular, and zygoma regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead than southern males, while females show fewer distribution differences between north and south, except in the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool
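The face-to-skull distance computed in MeshLab is a nearest-point (directed Hausdorff) distance; a minimal sketch on toy point clouds, where the per-vertex nearest distances play the role of the FSTT depth values:

```python
import math

def directed_hausdorff(a, b):
    """For each point of surface A, the distance to its nearest point on
    surface B; the directed Hausdorff distance is the maximum of these.
    The per-vertex nearest distances are what get colorized as depth values."""
    nearest = [min(math.dist(p, q) for q in b) for p in a]
    return max(nearest), nearest

# Toy point clouds standing in for skull and face surface vertices (mm)
skull = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
face = [(0.0, 0.0, 5.0), (10.0, 0.0, 6.0)]
h, depths = directed_hausdorff(skull, face)
print(depths)  # [5.0, 6.0] -- per-vertex soft tissue thickness
print(h)       # 6.0
```

Real meshes have many thousands of vertices, so MeshLab uses spatial indexing rather than this brute-force nearest search, but the quantity computed is the same.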
80 Soil Composition in Different Agricultural Crops under Application of Swine Wastewater
Authors: Ana Paula Almeida Castaldelli Maciel, Gabriela Medeiros, Amanda de Souza Machado, Maria Clara Pilatti, Ralpho Rinaldo dos Reis, Silvio Cesar Sampaio
Abstract:
Sustainable agricultural systems are crucial to ensuring global food security and the long-term production of nutritious food. Comprehensive soil and water management practices, including nutrient management, balanced fertilizer use, and appropriate waste management, are essential for sustainable agriculture. Swine wastewater (SWW) treatment has become a significant focus due to environmental concerns related to heavy metals, antibiotics, resistant pathogens, and nutrients. In South America, small farms use soil to dispose of animal waste, a practice that is expected to increase with global pork production. The potential of SWW as a nutrient source is promising, contributing to global food security, nutrient cycling, and mineral fertilizer reduction. Short- and long-term studies have evaluated the effects of SWW on soil and plant parameters, such as nutrients, heavy metals, organic matter (OM), cation exchange capacity (CEC), and pH. Although promising results have been observed in short- and medium-term applications, long-term applications require more attention due to heavy metal concentrations. Organic soil amendment strategies, given their economic and ecological benefits, are commonly used to reduce the bioavailability of heavy metals. However, the rate of degradation and the initial levels of OM must be monitored to avoid changes in soil pH and release of metals. The study aimed to evaluate the long-term effects of SWW application on soil fertility parameters, focusing on calcium (Ca), magnesium (Mg), and potassium (K), in addition to CEC and OM. Experiments were conducted at the Universidade Estadual do Oeste do Paraná, Brazil, using 24 drainage lysimeters over nine years, with different application rates of SWW and mineral fertilization. Principal Component Analysis (PCA) was then conducted to summarize the data into composite variables, known as principal components (PCs), and limit the dimensionality to be evaluated.
The retained PCs were then correlated with the original variables to identify the level of association between each variable and each PC. Data were interpreted using analysis of variance (ANOVA) for general linear models (GLM). As OM was not measured in the 2007 soybean experiment, it was assessed separately from the PCA to avoid loss of information. PCA and ANOVA indicated that crop type, SWW, and mineral fertilization significantly influenced soil nutrient levels. Soybeans presented higher concentrations of Ca and Mg and a higher CEC. The application of SWW influenced K levels, with higher concentrations observed for SWW from biodigesters and for higher doses of swine manure. Variability in the nutrient concentrations of SWW due to factors such as animal age and feed composition makes standard recommendations challenging. OM levels increased in SWW-treated soils, improving soil fertility and structure. In conclusion, the application of SWW can increase soil fertility and crop productivity while reducing environmental risks. However, careful management and long-term monitoring are essential to optimize benefits and minimize adverse effects.
Keywords: contamination, water research, biodigester, nutrients
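For two correlated variables, the PCA summarization used above can be written in closed form from the 2x2 covariance matrix; a sketch with hypothetical Ca/Mg values (units of cmolc/dm3 assumed purely for illustration):

```python
import math

def pca_2d(data):
    """Leading eigenvalue of the 2x2 sample covariance matrix and the
    fraction of total variance it explains -- a minimal closed-form PCA
    for two variables."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    # Larger eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam = tr / 2 + math.sqrt((tr / 2) ** 2 - det)
    return lam, lam / tr

# Hypothetical, strongly correlated (Ca, Mg) pairs
soil = [(4.0, 1.0), (5.0, 1.3), (6.0, 1.5), (7.0, 1.8), (8.0, 2.1)]
lam, frac = pca_2d(soil)
print(round(frac, 3))  # fraction of total variance captured by PC1
```

With near-perfectly correlated inputs, PC1 captures almost all the variance, which is exactly why PCA is effective at reducing dimensionality before the ANOVA step.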
79 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors
Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang
Abstract:
Under the background of global climate change and rapid modern urbanization, the frequency of climate disasters such as extreme precipitation in cities around the world is gradually increasing. In this paper, the Hi-PIMS model is used to simulate the "720" flood in Zhengzhou; the stages of the urban flood are delineated and the continuous stages of flood resilience are determined. Flood resilience curves under the influence of multiple factors were determined, and urban flood resilience was evaluated by combining the results of these curves. The flood resilience of each urban unit grid was evaluated based on economy, population, road network, hospital distribution and land use type. First, rainfall data from meteorological stations near Zhengzhou and remote-sensing rainfall data from July 17 to 22, 2021 were collected, and the Kriging interpolation method was used to extend the rainfall data across Zhengzhou. Based on these rainfall data, the flood processes generated by four rainfall events in Zhengzhou were reproduced. Based on the simulated inundation extent and depth in different areas, the flood process was divided into four stages relative to the once-in-50-years rainfall standard: absorption, resistance, overload and recovery. At the same time, based on slope, GDP, population, hospital service area, land use type, road network density and other factors, resilience curves were applied to evaluate the urban flood resilience of different regional units, and the differences between the flood processes produced by different precipitation events during the "720" rainstorm in Zhengzhou were analyzed. Faced with a rainstorm exceeding the 1,000-year return level, most areas quickly entered the overload stage. The influence of each factor differs between areas: some areas with ramps or higher terrain have better resilience and restore normal social order faster, that is, their recovery stage requires less time.
Some low-lying areas or special terrain, such as tunnels, enter the overload stage faster in the case of heavy rainfall. As a result, higher levels of flood protection, water level warning systems and faster emergency response are needed in areas with low resilience and high risk. The building density of built-up areas, the population of densely populated areas and road network density all have a certain negative impact on urban flood resistance, while the positive impact of slope on flood resilience is also very clear. While hospitals have positive effects on medical treatment, they also bring negative effects such as population density and asset density when floods occur; a separate comparison of the unit grids containing hospitals shows that their resilience is low when flooded. Therefore, in addition to improving the flood resistance capacity of cities, reasonable planning can also increase their flood response capacity. Changes in these influencing factors can further improve urban flood resilience, for example raising design standards, providing temporary water storage areas for when floods occur, training emergency personnel to respond faster and adjusting emergency support equipment.
Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve
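One common way to condense a performance trajectory across the absorption, resistance, overload and recovery stages into a single score is the normalized area under the performance curve; this is an illustrative formulation with hypothetical numbers, not necessarily the resilience curve used in the study:

```python
def resilience_index(times, performance):
    """Resilience as the normalized area under the system-performance
    curve (trapezoidal rule); 1.0 means no loss of function over the
    event, lower values mean deeper or longer functional loss."""
    area = 0.0
    for (t1, p1), (t2, p2) in zip(
        zip(times, performance), zip(times[1:], performance[1:])
    ):
        area += (t2 - t1) * (p1 + p2) / 2
    return area / (times[-1] - times[0])

# Hypothetical performance trajectory (1.0 = full function) over 10 h:
# absorption, resistance, overload (dip to 0.4), then recovery
t = [0, 2, 4, 6, 10]
p = [1.0, 0.9, 0.4, 0.6, 1.0]
print(round(resilience_index(t, p), 2))  # 0.74
```

A unit grid that dips lower or recovers more slowly scores lower, matching the qualitative ranking of low-lying versus sloped areas described above.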
78 Impact of Air Pressure and Outlet Temperature on Physicochemical and Functional Properties of Spray-dried Skim Milk Powder
Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit
Abstract:
The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders reduce transportation costs and increase shelf life, but can also exhibit various interesting functionalities (flowability, solubility, protein modification or acid gelation), depending on operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association or crust formation may change. Links between spray-drying conditions and the physicochemical and functional properties of powders were investigated by a design of experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out using a genetic algorithm. At the time of abstract submission, verification spray-drying trials were ongoing. To perform the experiments, milk from a dairy farm was collected, skimmed, frozen and spray-dried at different air pressures (between 1 and 3 bar) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content and protein content were determined by standard methods. Solubility index, absorption index and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The location of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shearing tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process.
It was shown that air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder and a less saturated powder color. A higher outlet temperature results in lower-moisture particles, which are less sticky; this can explain a spray-drying yield increase and the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces a high hygroscopicity; thus, powders tend to get wet rapidly if they are not well stored. On the other hand, high temperature provokes a decrease in native serum proteins, which is positively correlated to gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. The control of air pressure and outlet temperature during the spray-drying process thus significantly affects the physicochemical and functional properties of the powder. This study permitted a better understanding of the links between the physicochemical and functional properties of the powder and identified correlations with air pressure and outlet temperature. Mathematical models have been developed, and the use of a genetic algorithm will allow the optimization of powder functionalities.
Keywords: dairy powders, spray-drying, powders functionalities, design of experiment
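The quadratic (second-order response surface) models described above take the following form in the two factors studied; the coefficients below are hypothetical, and an exhaustive grid search stands in for the genetic algorithm used in the study:

```python
from itertools import product

def quadratic_model(pressure, temp, b):
    """Second-order response surface
    y = b0 + b1*P + b2*T + b12*P*T + b11*P^2 + b22*T^2,
    the form of model fitted from the design of experiments.
    Coefficients here are illustrative, not the fitted values."""
    b0, b1, b2, b12, b11, b22 = b
    return (b0 + b1 * pressure + b2 * temp + b12 * pressure * temp
            + b11 * pressure**2 + b22 * temp**2)

def grid_optimum(b, pressures, temps):
    """Exhaustive grid search stand-in for the genetic-algorithm step."""
    return max(product(pressures, temps),
               key=lambda pt: quadratic_model(pt[0], pt[1], b))

b = (50.0, 8.0, 0.9, -0.05, -2.0, -0.005)  # hypothetical coefficients
P_range = [1.0, 1.5, 2.0, 2.5, 3.0]        # bar, the studied pressure span
T_range = [75, 80, 85, 90, 95]             # degC, the studied outlet span
print(grid_optimum(b, P_range, T_range))   # (1.0, 85)
```

The real optimization is multicriteria (several powder functionalities at once), so the genetic algorithm searches a vector of such response surfaces rather than a single scalar response.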
77 In-Process Integration of Resistance-Based Fiber Sensors during the Braiding Process for Strain Monitoring of Carbon Fiber Reinforced Composite Materials
Authors: Oscar Bareiro, Johannes Sackmann, Thomas Gries
Abstract:
Carbon fiber reinforced polymer composites (CFRP) are used in a wide variety of applications due to their advantageous properties and design versatility. The braiding process enables the manufacture of components with good toughness and fatigue strength. However, the failure mechanisms of CFRPs are complex and still present challenges associated with their maintenance and repair. Within the broad scope of structural health monitoring (SHM), strain monitoring can be applied to composite materials to improve reliability, reduce maintenance costs and safely exhaust service life. Traditional SHM systems employ, for example, fiber optics or piezoelectrics as sensors, which are often expensive, time-consuming and complicated to implement. A cost-efficient alternative is to exploit the conductive properties of fiber-based sensors such as carbon, copper, or constantan (a copper-nickel alloy), which can be utilized as sensors within composite structures to achieve strain monitoring. This allows the structure to provide feedback to a user via electrical signals, which are essential for evaluating its structural condition. This work presents a strategy for the in-process integration of resistance-based sensors (Elektrisola Feindraht AG, CuNi23Mn, Ø = 0.05 mm) into textile preforms during their manufacture via the braiding process (Herzog RF-64/120) to achieve strain monitoring of braided composites. For this, flat samples of instrumented composite laminates of carbon fibers (Toho Tenax HTS40 F13 24K, 1600 tex) and epoxy resin (Epikote RIMR 426) were manufactured via vacuum-assisted resin infusion. These flat samples were later cut into test specimens, and the integrated sensors were wired to the measurement equipment (National Instruments VB-8012) for data acquisition during the execution of mechanical tests.
Quasi-static tensile and 3-point bending tests were performed following standard protocols (DIN EN ISO 527-1 & 4, DIN EN ISO 14132); additionally, dynamic tensile tests were executed. These tests assessed the sensor response under different loading conditions and evaluated the influence of the sensor presence on the mechanical properties of the material. Several orientations of the sensor with regard to the applied load, as well as several sensor placements inside the laminate, were tested. Strain measurements from the integrated sensors were made with a data acquisition code (LabVIEW) written for the measurement equipment and were then correlated to the strain/stress state of the tested samples. From the assessment of the sensor integration approach, it can be concluded that it allows a seamless integration of the sensor into the textile preform: no damage to the sensor or negative effect on its electrical properties was detected during inspection after integration. From the mechanical tests of instrumented samples, it can be concluded that the presence of the sensors does not significantly alter the mechanical properties of the material. A good correlation was found between the resistance measurements from the integrated sensors and the applied strain, of sufficient accuracy to determine the strain state of a composite laminate based solely on these resistance measurements.
Keywords: braiding process, in-process sensor integration, instrumented composite material, resistance-based sensor, strain monitoring
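Converting a measured wire resistance to strain relies on the gauge relation ΔR/R0 = GF·ε; a sketch, where the gauge factor and the resistance readings are assumed values for illustration, not calibration data for the CuNi23Mn wire:

```python
def strain_from_resistance(r, r0, gauge_factor=2.0):
    """Convert a measured wire resistance to strain via the gauge relation
    dR/R0 = GF * strain. A gauge factor of 2.0 is typical for
    constantan-type alloys and is assumed here for illustration."""
    return (r - r0) / (r0 * gauge_factor)

# Hypothetical readings from an integrated resistance-wire sensor
r0 = 100.0  # unloaded resistance, ohms
r = 100.4   # resistance under tensile load, ohms
print(round(strain_from_resistance(r, r0), 6))  # 0.002, i.e. 2000 microstrain
```

In practice the effective gauge factor of an embedded wire would be established by the correlation against reference strain measurements described above, since embedding can alter the wire's response.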
Procedia PDF Downloads 106
76 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process of Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of these metal ions. It is an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current 'beaker' method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease instrument adjustment within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, used to create a linear concentration gradient inside a 200 µm × 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation with a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; thanks to the addition of a pH-sensitive fluorophore, the neutralization boundary can be visualized in a detection range of 500–600 nm. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it needs neither a complex micro-channel network nor passive mixers to generate the chemical gradient.
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under one second, making the process more time-efficient than other source-sink passive diffusion devices. The resulting linear gradient generator was therefore adapted to perform, for the first time, a volumetric titration on a chip in which the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed methodology greatly improves on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could improve laboratory safety as well as reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
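The Taylor-Aris regime the device exploits can be sketched with the classical result for laminar flow in a circular tube, D_eff = D_m + a²U²/(48·D_m). The channel radius below matches the 200 µm diameter channel of the abstract; the mean flow velocity and molecular diffusivity are illustrative assumptions, not values from the study:

```python
# Sketch: Taylor-Aris effective axial dispersion in a circular micro-channel,
#   D_eff = D_m + (a**2 * U**2) / (48 * D_m)
# valid in the long-time, laminar-flow limit. The radius matches the
# 200 um diameter channel described above; U and D_m are assumed values.

def taylor_aris_dispersion(d_m: float, radius: float, u_mean: float) -> float:
    """Effective axial dispersion coefficient (m^2/s) for laminar tube flow."""
    return d_m + (radius**2 * u_mean**2) / (48.0 * d_m)

a = 100e-6    # channel radius, m (200 um diameter channel)
u = 1e-3      # mean flow velocity, m/s (assumed)
d_m = 1e-9    # molecular diffusivity, m^2/s (typical small ion, assumed)

d_eff = taylor_aris_dispersion(d_m, a, u)
print(f"D_eff = {d_eff:.3e} m^2/s  (vs D_m = {d_m:.0e} m^2/s)")
```

Even at this modest assumed velocity, shear-enhanced dispersion dominates molecular diffusion by two orders of magnitude, which is what lets the device smear a sharp reagent interface into a near-linear axial concentration gradient so quickly.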
Procedia PDF Downloads 388
75 Benzenepropanamine Analogues as Non-detergent Microbicidal Spermicide for Effective Pre-exposure Prophylaxis
Authors: Veenu Bala, Yashpal S. Chhonker, Bhavana Kushwaha, Rabi S. Bhatta, Gopal Gupta, Vishnu L. Sharma
Abstract:
According to a 2013 UNAIDS estimate, nearly 52% of all individuals living with HIV are now women of reproductive age (15–44 years). Seventy-five percent of HIV acquisitions occur through heterosexual contact and sexually transmitted infections (STIs), attributable to unsafe sexual behaviour. Each year, an estimated 500 million people acquire at least one of four STIs: chlamydia, gonorrhoea, syphilis and trichomoniasis. Trichomonas vaginalis (TV) is exclusively sexually transmitted in adults, accounting for 30% of STI cases and associated with pelvic inflammatory disease (PID), vaginitis and pregnancy complications in women. TV infection results in an impaired vaginal milieu, eventually favoring HIV transmission. In the absence of an effective prophylactic HIV vaccine, prevention of new infections has become a priority. It was thought worthwhile to integrate HIV prevention and reproductive health services for women, including protection against unintended pregnancy, as both are related to unprotected sex. Initially, nonoxynol-9 (N-9) had been proposed as a spermicidal agent with microbicidal activity, but on the contrary it increased HIV susceptibility due to its surfactant action. Thus, to meet the urgent need for novel, woman-controlled, non-detergent microbicidal spermicides, benzenepropanamine analogues have been synthesized. At first, five benzenepropanamine-dithiocarbamate hybrids were synthesized and evaluated for their spermicidal, anti-Trichomonas and antifungal activities, along with safety profiling against cervicovaginal cells. To further broaden the scope of the study, benzenepropanamine was hybridized with thiourea to introduce anti-HIV potential. The synthesized hybrid molecules were evaluated for their reverse transcriptase (RT) inhibition, spermicidal, anti-Trichomonas and antimicrobial activities, as well as their safety against vaginal flora and cervical cells.
The stability in simulated vaginal fluid (SVF) and the pharmacokinetics of the most potent compound versus N-9 were examined in female New Zealand (NZ) rabbits to observe its absorption into the systemic circulation and subsequent exposure in blood plasma through the vaginal wall. The study identified N-butyl-4-(3-oxo-3-phenylpropyl) piperazin-1-carbothioamide (29) as the most promising compound, exhibiting a better activity profile than N-9: RT inhibition (72.30%), anti-Trichomonas activity (MIC 46.72 µM against the MTZ-susceptible strain and MIC 187.68 µM against the resistant strain), spermicidal activity (MEC 0.01%) and antifungal activity (MIC 3.12–50 µg/mL) against four fungal strains. Its high safety towards vaginal epithelium (HeLa cells), compatibility with vaginal flora (lactobacillus), SVF stability and minimal vaginal absorption supported its suitability for topical vaginal application. A docking study was performed to gain insight into the binding mode and interactions of compound 29 with HIV-1 reverse transcriptase; it revealed that the compound interacts with HIV-1 RT similarly to the standard drug nevirapine. It may be concluded that hybridization of the benzenepropanamine and thiourea moieties produced a novel lead with multiple activities, including RT inhibition. Further lead optimization may yield effective vaginal microbicides combining spermicidal, anti-Trichomonas, antifungal and anti-HIV potential, with enhanced safety to cervicovaginal cells in comparison to nonoxynol-9.
Keywords: microbicidal, nonoxynol-9, reverse transcriptase, spermicide
Procedia PDF Downloads 345
74 An Integrated Solid Waste Management Strategy for Semi-Urban and Rural Areas of Pakistan
Authors: Z. Zaman Asam, M. Ajmal, R. Saeed, H. Miraj, M. Muhammad Ahtisham, B. Hameed, A. -Sattar Nizami
Abstract:
In Pakistan, environmental degradation and the consequent deterioration of human health have accelerated rapidly in the past decade due to solid waste mismanagement. As the situation worsens, the establishment of proper waste management practices is urgently needed, especially in the semi-urban and rural areas of Pakistan. This study uses the concept of a Waste Bank, which involves a transfer station for the collection of sorted waste fractions and their delivery to targeted markets such as recycling industries, biogas plants, composting facilities, etc. The management efficiency and effectiveness of a Waste Bank depend strongly on proficient sorting and collection of solid waste fractions at the household level. However, social attitudes towards such a solution in the semi-urban/rural areas of Pakistan demand certain prerequisites to make it workable. Considering these factors, the objectives of this study are to: [A] obtain reliable data on the quantity and characteristics of generated waste to define the feasibility of the business and design factors such as the required storage area, retention time and transportation frequency of the system; [B] analyze the effects of various social factors on waste generation to foresee future projections; and [C] quantify the improvement in waste sorting efficiency after an awareness campaign. We selected Gujrat city in the Central Punjab province of Pakistan, as it is semi-urban and adjoined by rural areas. A total of 60 houses (20 from each of three selected colonies), belonging to different social strata, were selected. Awareness sessions about waste segregation were given through brochures and individual lectures in each selected household. Sampling of the waste that households had attempted to sort was then carried out using the three colored bags provided as part of the awareness campaign. Finally, refined waste sorting, weighing of the various fractions and measurement of dry mass were performed in an environmental laboratory using standard methods.
Sorting efficiency improved from 0 to 52% as a result of the awareness campaign. The average waste generation (dry mass basis) was 460 kg/year per household and 68 kg/year per capita. Extrapolating these values to Gujrat Tehsil, the total waste generation is calculated to be 101,921 tons dry mass (DM) per year. The waste fractions found were (i) organic decomposables (29.2%, 29,710 tons/year DM), (ii) recyclables (37.0%, 37,726 tons/year DM), which included plastic, paper, metal and glass, and (iii) trash (33.8%, 34,485 tons/year DM), which mainly comprised polythene bags, medicine packaging, diapers and wrappers. Waste generation was higher in colonies with comparatively higher income and better living standards. In the future, data collection for all four seasons and the improvements due to expansion of the awareness campaign to educational institutes will be quantified. This waste management system can potentially advance vital sustainable development goals (e.g., clean water and sanitation), reduce the need to harvest fresh resources from the ecosystem, create business and job opportunities, and consequently address one of the most pressing environmental issues of the country.
Keywords: integrated solid waste management, waste segregation, waste bank, community development
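The extrapolation arithmetic above can be reproduced in a few lines. The total tonnage and composition percentages come from the abstract; the population served is back-calculated here as an assumption (the abstract does not state it), and the computed fraction tonnages differ from the reported figures by well under 1% due to rounding of the percentages:

```python
# Sketch: reproducing the waste-stream arithmetic for Gujrat Tehsil.
# PER_CAPITA_KG_YEAR and TOTAL_TONS_DM are taken from the study; the
# implied population is a back-calculated assumption, not a reported value.

PER_CAPITA_KG_YEAR = 68        # dry mass per capita, from the study
TOTAL_TONS_DM = 101_921        # extrapolated total, tons/year DM

# Population implied by total / per-capita generation (assumption)
implied_population = TOTAL_TONS_DM * 1000 / PER_CAPITA_KG_YEAR

fractions = {
    "organic decomposable": 0.292,
    "recyclables": 0.370,
    "trash": 0.338,
}

tonnage = {name: TOTAL_TONS_DM * share for name, share in fractions.items()}

print(f"implied population ~ {implied_population:,.0f}")
for name, tons in tonnage.items():
    print(f"{name}: {tons:,.0f} tons/year DM")
```

A check worth keeping in any such script is that the composition shares sum to 100%, since a transcription slip in one fraction would otherwise silently skew the tonnage of every market the Waste Bank plans to serve.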
Procedia PDF Downloads 142