Search results for: Albert Y. Tong
21 Revealing the Nitrogen Reaction Pathway for the Catalytic Oxidative Denitrification of Fuels
Authors: Michael Huber, Maximilian J. Poller, Jens Tochtermann, Wolfgang Korth, Andreas Jess, Jakob Albert
Abstract:
Aside from desulfurisation, the denitrogenation of fuels is of great importance for minimizing the environmental impact of transport emissions. The oxidative reaction pathway of organic nitrogen in catalytic oxidative denitrogenation could be successfully elucidated. This is the first time such a pathway could be traced in detail in non-microbial systems. It was found that the organic nitrogen is first oxidized to nitrate, which is subsequently reduced to molecular nitrogen via nitrous oxide. Hereby, the organic substrate serves as a reducing agent. The discovery of this pathway is an important milestone for the further development of fuel denitrogenation technologies. The United Nations aims to counteract global warming with Net Zero Emissions (NZE) commitments; however, it is not yet foreseeable when crude oil-based fuels will become obsolete. In 2021, more than 50 million barrels per day (mb/d) were consumed for the transport sector alone. Above all, heteroatoms such as sulfur and nitrogen produce SO₂ and NOx during combustion in engines, which is harmful not only to the climate but also to health. Therefore, in refineries, these heteroatoms are removed by hydrotreating to produce clean fuels. However, this catalytic reaction is inhibited by the basic, nitrogenous reactants (e.g., quinoline) as well as by NH3. The lone pair of the nitrogen atom forms strong bonds to the active sites of the hydrotreating catalyst, which diminishes its activity. To maximize the desulfurization and denitrogenation effectiveness in comparison to just extraction and adsorption, selective oxidation is typically combined with either extraction or selective adsorption. The selective oxidation produces more polar compounds that can be removed from the non-polar oil in a separate step.
The extraction step can also be carried out in parallel to the oxidation reaction, as a result of in situ separation of the oxidation products (ECODS; extractive catalytic oxidative desulfurization). In this process, H8PV5Mo7O40 (HPA-5) is employed as a homogeneous polyoxometalate (POM) catalyst in an aqueous phase, where the sulfur-containing fuel components are oxidized after diffusion from the organic fuel phase into the aqueous catalyst phase, to form highly polar products such as H₂SO₄ and carboxylic acids, which are thereby extracted from the organic fuel phase and accumulate in the aqueous phase. In contrast to the inhibiting properties of the basic nitrogen compounds in hydrotreating, the oxidative desulfurization improves with simultaneous denitrification in this system (ECODN; extractive catalytic oxidative denitrogenation). The reaction pathway of ECODS has already been well studied. In contrast, the oxidation of nitrogen compounds in ECODN is not yet well understood and requires more detailed investigations.
Keywords: oxidative reaction pathway, denitrogenation of fuels, molecular catalysis, polyoxometalate
Procedia PDF Downloads 180
20 The Audiovisual Media as a Metacritical Ludicity Gesture in the Musical-Performatic and Scenic Works of Caetano Veloso and David Bowie
Authors: Paulo Da Silva Quadros
Abstract:
This work aims to point out comparative parameters between the artistic production of two exponents of the contemporary popular culture scene: Caetano Veloso (Brazil) and David Bowie (England). Both Caetano Veloso and David Bowie were pioneers in establishing an aesthetic game between various artistic expressions at the service of the music-visual scene, that is, the conceptual interconnections between several forms of aesthetic processes, such as fine arts, theatre, cinema, poetry, and literature. There are also correlations in their expressive attitudes toward art, especially regarding the dialogue between the fields of art and politics (concern for human rights, human dignity, racial issues, tolerance, gender issues, and sexuality, among others); the constant tension and cunning game between the market, free expression, and critical sense; and the sophisticated, playful mechanisms of metalanguage and aesthetic metacritique. In fact, the two nearly collaborated in the 1970s, when Caetano was in exile in England and both had the same music producer, who, noticing similar aesthetic qualities in their work, tried to bring them together; this affinity was later glimpsed by some music critics. Among the most influential issues in Caetano's and Bowie's game of artistic-aesthetic expression are, for example, the ideas advocated by the sensation of strangeness (Albert Camus), art as transcendence (Friedrich Nietzsche), and the deconstruction and auratic reconfiguration of artistic signs (Walter Benjamin and Andy Warhol). To deepen the theoretical issues, the following authors will be used as supporting interpretative references: Hans-Georg Gadamer, Immanuel Kant, Friedrich Schiller, Johan Huizinga.
In addition to the aesthetic meanings of the Ars Ludens characteristics of the two artists, the following supporting references will also be added: the question of technique (Martin Heidegger), the logic of sense (Gilles Deleuze), art as an event and the sense of the gesture of art (Maria Teresa Cruz), the society of the spectacle (Guy Debord), Verarbeitung and Durcharbeitung (Sigmund Freud), and the poetics of interpretation and the sign of relation (Cremilda Medina). The purpose of such interpretative references is to seek to understand, from a cultural reading perspective (cultural semiology), some significant elements in the dynamics of the aesthetic and media interconnections of both artists, which made them some of the most influential interlocutors in contemporary music aesthetic thought, as a playful, vivid experience of life and art.
Keywords: Caetano Veloso, David Bowie, music aesthetics, symbolic playfulness, cultural reading
Procedia PDF Downloads 168
19 Enhancing Students’ Academic Engagement in Mathematics through a “Concept+Language Mapping” Approach
Authors: Jodie Lee, Lorena Chan, Esther Tong
Abstract:
Hong Kong students face a unique learning environment. Starting from the 2010/2011 school year, the Education Bureau (EDB) of the Government of the Hong Kong Special Administrative Region implemented fine-tuned Medium of Instruction (MOI) arrangements for secondary schools. Since then, secondary schools in Hong Kong have been given the flexibility to decide the most appropriate MOI arrangements for their schools under the new academic structure for senior secondary education, particularly for the compulsory part of the mathematics curriculum. In the 2019 Hong Kong Diploma of Secondary Education Examination (HKDSE), over 40% of school day candidates attempted the Mathematics Compulsory Part examination in the Chinese version, while the rest took the English version. Moreover, only 14.38% of candidates sat for one of the extended Mathematics modules. This results in a series of intricate issues for students’ learning in post-secondary education programmes. It is worth noting that when students pursue higher education in Hong Kong or overseas, they may face substantial difficulties in transitioning from learning mathematics in their mother tongue in Chinese-medium instruction (CMI) secondary schools to an English-medium learning environment. Some students who understood the mathematics concepts were nevertheless found to fail to fulfill the course requirements at college or university due to their CMI secondary learning experience. They are particularly weak in comprehending mathematics questions in assessments, tests, and examinations. A government-funded project was conducted with the aim of providing an integrated learning context and language support to students with a lower level of numeracy and/or with CMI learning experience.
By introducing this integrated “Concept + Language Mapping” approach, students can cope with the learning challenges of the compulsory English-medium mathematics and statistics subjects in their tertiary education, in the hope that they can ultimately enhance their mathematical ability, analytical skills, and numerical sense for lifelong learning. The “Concept + Language Mapping” (CLM) approach was adopted and tried out in bridging courses for students with a lower level of numeracy and/or with CMI learning experiences. At the beginning of each class, a pre-test was conducted, and class time was then devoted to introducing the concepts via the CLM approach. For each concept, the key thematic items and their different semantic relations were presented using graphics and animations. At the end of each class, a post-test was conducted. Quantitative data analysis was performed to study the effect of the CLM approach on students’ learning. Stakeholders’ feedback was collected to estimate the effectiveness of the CLM approach in facilitating both content and language learning. The results, based on both students’ and lecturers’ feedback, indicated positive outcomes of adopting the CLM approach to enhance the mathematical ability and analytical skills of CMI students.
Keywords: mathematics, Concept+Language Mapping, level of numeracy, medium of instruction
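The pre-/post-test design described above lends itself to a paired analysis of learning gains. A minimal sketch, assuming simulated scores and an illustrative class size of 40 (neither figure is from the study), might look like:

```python
# Hedged sketch: quantifying pre-/post-test gains from a CLM bridging class with
# a paired t-test. All scores are simulated; the class size and the average
# gain of 8 marks are assumptions for illustration, not study data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
pre = rng.normal(55, 10, 40)          # simulated pre-test scores for one class
post = pre + rng.normal(8, 5, 40)     # post-test scores with an assumed gain

t, p = ttest_rel(post, pre)           # paired t-test on the same students
print(f"mean gain={np.mean(post - pre):.1f}, t={t:.2f}, p={p:.4f}")
```

A significant positive t-statistic would support a learning gain, though in practice a control group would be needed to attribute the gain to the CLM approach itself.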
Procedia PDF Downloads 81
18 Monitoring of Rice Phenology and Agricultural Practices from Sentinel 2 Images
Authors: D. Courault, L. Hossard, V. Demarez, E. Ndikumana, D. Ho Tong Minh, N. Baghdadi, F. Ruget
Abstract:
In the global change context, efficient management of the available resources has become one of the most important topics, particularly for sustainable crop development. Timely assessment with high precision is crucial for water resource and pest management. Rice cultivated in the Camargue region of Southern France faces a dual challenge: reducing soil salinity by flooding while at the same time reducing the number of herbicides that negatively impact the environment. This context has led farmers to diversify their crop rotations and agricultural practices. The objective of this study was to evaluate this crop diversity, both in cropping systems and in the agricultural practices applied to rice paddies, in order to quantify the impact on the environment and on crop production. The proposed method is based on the combined use of crop models and multispectral data acquired from the recent Sentinel 2 satellite sensors launched by the European Space Agency (ESA) within the framework of the Copernicus program. More than 40 images at fine spatial resolution (10 m in the optical range) were processed for 2016 and 2017 (with a revisit time of 5 days) to map crop types using the random forest method and to estimate biophysical variables (LAI) retrieved by inversion of the PROSAIL canopy radiative transfer model. Thanks to the high revisit time of Sentinel 2 data, it was possible to monitor the soil tillage before flooding and the second sowing made by some farmers to better control weeds. The temporal trajectories of remote sensing data were analyzed for various rice cultivars to define the main parameters describing the phenological stages, useful for calibrating two crop models (STICS and SAFY). Results were compared to surveys conducted with 10 farms. A large variability of LAI has been observed at farm scale (up to 2-3 m²/m²), which induced a significant variability in the simulated yields (up to 2 t/ha). Observations on land use have also been collected for more than 300 fields.
Various maps were produced: land use, LAI, flooding, sowing, and harvest dates. Together, these maps allow a new typology to be proposed for classifying these paddy cropping systems. Key phenological dates can be estimated from inverse procedures and were validated against ground surveys. The proposed approach allowed the years to be compared and anomalies to be detected. The methods proposed here can be applied to different crops in various contexts and confirm the potential of remote sensing acquired at fine resolution, such as the Sentinel 2 system, for agricultural applications and environmental monitoring. This study was supported by the French national centre for space studies (CNES) through the TOSCA programme.
Keywords: agricultural practices, remote sensing, rice, yield
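The crop-type mapping step described above rests on a supervised random forest fed with multi-date reflectance. A minimal sketch of that idea, with entirely synthetic data standing in for the Sentinel 2 time series (field count, band layout, and class labels are all illustrative assumptions):

```python
# Hedged sketch: random-forest crop-type classification from stacked multi-date
# reflectance features, in the spirit of the mapping described above. The
# feature layout (fields x (dates * bands)) and the labels are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_fields, n_dates, n_bands = 300, 40, 4          # ~40 acquisitions, 4 bands (assumed)
X = rng.random((n_fields, n_dates * n_bands))    # stacked temporal reflectance profile
y = rng.integers(0, 3, n_fields)                 # 0=rice, 1=wheat, 2=other (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))    # random labels give near-chance accuracy
print(f"held-out accuracy: {acc:.2f}")
```

With real labelled fields the temporal profile carries strong phenological signal, which is what makes dense Sentinel 2 revisits valuable for this classifier.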
Procedia PDF Downloads 274
17 Study Protocol: Impact of a Sustained Health Promoting Workplace on Stock Price Performance and Beta - A Singapore Case
Authors: Wee Tong Liaw, Elaine Wong Yee Sing
Abstract:
Since 2001, many companies in Singapore have voluntarily participated in the bi-annual Singapore HEALTH Award initiated by the Health Promotion Board of Singapore (HPB). The Singapore HEALTH Award (SHA) is an industry-wide award and assessment process. SHA assesses and recognizes employers in Singapore for implementing a comprehensive and sustainable health promotion programme at their workplaces. The rationale for implementing a sustained health promoting workplace and participating in SHA is obvious when company management is convinced that healthier employees, business productivity, and profitability are positively correlated. However, empirical studies on the impact of a sustained health promoting workplace on stock returns are unlikely to attract interest in the absence of a systematic and independent assessment of the comprehensiveness and sustainability of health promoting workplaces, as is the case in most developed economies. The principles of diversification and the mean-variance efficient portfolio in Modern Portfolio Theory developed by Markowitz (1952) laid the foundation for the works of many financial economists and researchers, among others the development of the Capital Asset Pricing Model from the work of Sharpe (1964), Lintner (1965), and Mossin (1966), and the Fama-French Three-Factor Model of Fama and French (1992). This research seeks to support the rationale by studying whether there is a significant relationship between a sustained health promoting workplace and the performance of companies listed on the SGX. The research shall form and test hypotheses pertaining to the impact of a sustained health promoting workplace on company performance, including stock returns, comparing companies that participated in the SHA with companies that did not.
In doing so, the research would be able to determine whether corporate and fund managers should consider the significance of a sustained health promoting workplace as a risk factor to explain the stock returns of companies listed on the SGX. With respect to Singapore’s stock market, this research will test the significance and relevance of a health promoting workplace using the Singapore Health Award as a proxy for a non-diversifiable risk factor to explain stock returns. This study will examine the significance of a health promoting workplace for a company’s performance, study its impact on stock price performance and beta, and examine whether it has higher explanatory power than the traditional single-factor asset pricing model, the CAPM (Capital Asset Pricing Model). Three key questions are pertinent to the research study. I) Given a choice, would an investor be better off investing in a listed company with a sustained health promoting workplace, i.e., a Singapore Health Award recipient? II) The Singapore Health Award has four levels, from Bronze and Silver to Gold and Platinum. Would an investor be indifferent to the level of the award when investing in a listed company that is a Singapore Health Award recipient? III) Would an asset pricing model combining the Fama-French Three-Factor Model and a ‘Singapore Health Award’ factor be more accurate than the single-factor Capital Asset Pricing Model and the Three-Factor Model itself?
Keywords: asset pricing model, company's performance, stock prices, sustained health promoting workplace
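Question III amounts to augmenting a time-series factor regression with an extra regressor. A minimal sketch of that test, using simulated returns and a simulated award dummy (none of the figures are SGX data; the factor names merely follow the study's description):

```python
# Hedged sketch: estimating a Fama-French three-factor regression augmented with
# a "Singapore Health Award" factor via OLS. Returns, factors, and the award
# dummy are all simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
T = 120                                        # months of observations (assumed)
mkt, smb, hml = rng.normal(0, 0.05, (3, T))    # simulated MKT, SMB, HML factors
sha = rng.integers(0, 2, T).astype(float)      # simulated health-award factor proxy
beta_true = np.array([0.01, 1.1, 0.4, 0.2, 0.15])

X = np.column_stack([np.ones(T), mkt, smb, hml, sha])
r = X @ beta_true + rng.normal(0, 0.01, T)     # simulated stock excess return

beta_hat, *_ = np.linalg.lstsq(X, r, rcond=None)   # OLS: alpha + 4 factor loadings
print("alpha, b_mkt, b_smb, b_hml, b_sha =", np.round(beta_hat, 3))
```

Comparing the fit (e.g., adjusted R²) of this five-parameter regression against CAPM and the plain three-factor model is one way to frame the study's accuracy question.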
Procedia PDF Downloads 369
16 Structures and Analytical Crucibles in Nigerian Indigenous Art Music
Authors: Albert Oluwole Uzodimma Authority
Abstract:
Nigeria is a diverse nation with a rich cultural heritage that has produced numerous art musicians and a vast range of art songs. The compositional styles, tonal rhythm, text rhythm, word painting, and text-tone relationship vary extensively from one dialect to another, indicating the need for standardized tools for the structural and analytical deconstruction of Nigerian indigenous art music. The purpose of this research is to examine the structures of Nigerian indigenous art music and outline some crucibles for analyzing it, by investigating how dialectical inflection influences the choice of text tone, scale mode, tonal rhythm, and the general ambiance of Nigerian art music. The research used a structured questionnaire to collect data from 50 musicologists, of whom 41 responded. The study focused on the works of two prominent twentieth-century composers, Stephen Olusoji and Nwamara Alvan-Ikoku, titled "Oyigiyigi" and "O Chineke, Inozikwa omee," respectively. The data collected were presented in percentages using pie charts and tables. The study shows that several aspects must be considered for the proper analysis of Nigerian indigenous music, such as linguistic sensitivity and the ways in which dialectical inflection influences the text-tone relationship, text rhythm, and tonal rhythm, all of which help to convey the proper meanings of the messages in songs. It also highlights the lack of standardized rubrics for analysis, which necessitated the proposal of robust criteria for analyzing African music, known as Neo-Eclectic-Crucibles. Hinging on an eclectic approach, this research makes significant contributions to music scholarship by addressing the need for standardized tools and crucibles for the structural and analytical deconstruction of Nigerian indigenous art music. It provides a template for further studies leading to standardized rubrics for analyzing African music.
This research collected data through a structured questionnaire and analyzed them using pie charts and tables to present the findings accurately. The analysis focused on the respondents' perspectives on the research objectives and on the structural analysis of two indigenous music compositions by Olusoji and Nwamara. The research answers questions about the structures and analytical crucibles used in Nigerian indigenous art music and about how dialectical inflection influences the text-tone relationship, scale mode, tonal rhythm, and the general ambiance of Nigerian art music. This paper demonstrates the need for standardized tools and crucibles for the structural and analytical deconstruction of Nigerian indigenous art music. It highlights several aspects that are crucial to analyzing Nigerian indigenous music and proposes the Neo-Eclectic-Crucibles criteria for analyzing African music. The contribution of this research to music scholarship is significant, providing a template for further studies and research in the field.
Keywords: art-music, crucibles, dialectical inflections, indigenous, text-tone, tonal rhythm, word-painting
Procedia PDF Downloads 100
15 An Integrated Framework for Wind-Wave Study in Lakes
Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung
Abstract:
The wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures, such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., WSA section 11 applications). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology is made by integrating wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to the core of the TT approach. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake through modelling of the entire lake (capturing the real lake geometry) instead of a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, among others); iv) accounting for wave interaction with the lakebed (e.g.,
bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
Keywords: wave modelling, wind-wave, extreme value analysis, marina
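For context on the "simplified fetch approach" that the integrated framework improves upon, a deep-water, fetch-limited estimate of significant wave height of the SPM/JONSWAP type can be sketched as below. The coefficient and functional form are quoted from memory of that family of empirical relations and should be treated as illustrative; real projects would use the full modelling described above.

```python
# Hedged sketch: a simplified, deep-water fetch-limited wave height estimate of
# the SPM/JONSWAP type, g*Hs/U^2 ~ 0.0016 * (g*F/U^2)^0.5. Illustrative only;
# it ignores lake geometry, random waves, and lakebed interaction.
import math

def fetch_limited_hs(wind_speed: float, fetch: float, g: float = 9.81) -> float:
    """Significant wave height (m) from wind speed (m/s) and fetch (m)."""
    return 0.0016 * math.sqrt(g * fetch / wind_speed**2) * wind_speed**2 / g

# 15 m/s wind blowing over a 10 km fetch (values chosen for illustration)
hs = fetch_limited_hs(wind_speed=15.0, fetch=10_000.0)
print(f"Hs ~ {hs:.2f} m")
```

Note that Hs grows with the square root of fetch in this relation, which is one reason a real-geometry lake model (point iv and vii above) outperforms a single-fetch estimate.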
Procedia PDF Downloads 84
14 The Use of Remotely Sensed Data to Model Habitat Selections of Pileated Woodpeckers (Dryocopus pileatus) in Fragmented Landscapes
Authors: Ruijia Hu, Susanna T.Y. Tong
Abstract:
Light detection and ranging (LiDAR) and four-channel red, green, blue, and near-infrared (RGBI) remotely sensed imagery allow an accurate quantification and contiguous measurement of vegetation characteristics and forest structures. This information facilitates the generation of habitat structure variables for forest species distribution modelling. However, applications of remote sensing data, especially the combination of structural and spectral information, to support evidence-based decisions in forest management and conservation practices at the local scale are not widely adopted. In this study, we examined the habitat requirements of the pileated woodpecker (Dryocopus pileatus) (PW) in Hamilton County, Ohio, using ecologically relevant forest structural and vegetation characteristics derived from LiDAR and RGBI data. We hypothesized that the habitat of PW is shaped by vegetation characteristics that are directly associated with the availability of food, hiding, and nesting resources, the spatial arrangement of habitat patches within the home range, as well as proximity to water sources. We used 186 PW presence or absence locations to model their presence and absence with generalized additive models (GAMs) at two scales, representing foraging range and home range size, respectively. The results confirm PW's preference for tall and large mature stands with structural complexity, typical of late-successional or old-growth forests. In addition, the crown size of dead trees shows a positive relationship with PW occurrence, indicating the importance of declining living trees or early-stage dead trees within the PW home range. These locations are preferred by PW for nest cavity excavation as the bird attempts to balance the ease of excavation and tree security. We also found that PW can adjust its travel distance to the nearest water resource, suggesting that habitat fragmentation can have certain impacts on PW.
Based on our findings, we recommend that forest managers use different priorities to manage nesting, roosting, and feeding habitats. In particular, when devising forest management and hazard tree removal plans, one needs to consider retaining enough cavity trees within high-quality PW habitat. By mapping PW habitat suitability for the study area, we highlight the importance of riparian corridors in helping PW adjust to the fragmented urban landscape. Indeed, habitat improvement for PW in the study area could be achieved by conserving riparian corridors and promoting riparian forest succession along major rivers in Hamilton County.
Keywords: deadwood detection, generalized additive model, individual tree crown delineation, LiDAR, pileated woodpecker, RGBI aerial imagery, species distribution models
Procedia PDF Downloads 53
13 The Lived Experience of Pregnant Saudi Women Carrying a Fetus with Structural Abnormalities
Authors: Nasreen Abdulmannan
Abstract:
Fetal abnormalities are categorized as structural, non-structural, or a combination of both. Fetal structural abnormalities (FSA) include, but are not limited to, Down syndrome, congenital diaphragmatic hernia, and cleft lip and palate. These abnormalities can be detected in the early weeks of pregnancy, typically at around 9-20 weeks of gestation. Etiological factors for FSA are unknown; however, transmitted genetic risk can be one of them. Consanguineous marriage, often referred to as inbreeding, represents a significant risk factor for FSA due to the increased likelihood of deleterious genetic traits shared by both biological parents. In a country such as the Kingdom of Saudi Arabia (KSA), the rate of consanguineous marriage is high, which creates a significant risk of children being born with congenital abnormalities. Historically, the practice of consanguinity occurred commonly among European royalty. For example, Great Britain's Queen Victoria married her German first cousin, Prince Albert of Coburg. Although a distant blood relationship, the United Kingdom's Queen Elizabeth II married her cousin, Prince Philip of Greece and Denmark, both of them direct descendants of Queen Victoria. In Middle Eastern countries, a high incidence of consanguineous unions still exists, including in the KSA. Previous studies indicated that a significant gap exists in understanding the lived experiences of Saudi women dealing with an FSA-complicated pregnancy. Eleven participants were interviewed using a semi-structured interview format for this qualitative phenomenological study investigating the lived experiences of pregnant Saudi women carrying a child with FSA. This study explored the gaps in the current literature regarding the lived experiences of pregnant Saudi women whose pregnancies were complicated by FSA. In addition, the researcher acquired knowledge about the available support and resources as well as the Saudi cultural perspective on FSA.
This research explored the lived experiences of pregnant Saudi women utilizing Giorgi's (2009) approach to data collection and data management. The findings of this study cover five major themes: (1) initial maternal reaction to the FSA diagnosis upon ultrasound screening; (2) strengthening of the maternal relationship with God; (3) maternal concern for their child's future; (4) feeling supported by their loved ones; and (5) lack of healthcare provider support and guidance. Future research in the KSA is needed to explore the support networks available to these mothers. This study recommended further clinical nursing research, nursing education, clinical practice, and healthcare policy/procedure development to provide opportunities for improvement in nursing care and increase awareness in KSA society.
Keywords: fetal structural abnormalities, psychological distress, health provider, health care
Procedia PDF Downloads 155
12 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be an order of magnitude or more in speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNet, for example. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
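The Δ-ML strategy described above can be sketched in a few lines: predict the high-fidelity quantity as the cheap low-fidelity value plus a learned correction, trained on only a small high-fidelity subset. Here a random forest stands in for the paper's graph network, and the "energies" are synthetic; everything except the Δ-ML structure itself is an illustrative assumption.

```python
# Hedged sketch of Δ-ML: learn the correction from a cheap low-fidelity energy
# to an expensive high-fidelity one, using few high-fidelity labels. Descriptors
# and energies are simulated; a random forest replaces the GCN of the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n, d = 500, 10
X = rng.random((n, d))                       # stand-in molecular descriptors
E_low = X @ rng.random(d)                    # cheap "Hartree-Fock-like" baseline
E_high = E_low + 0.1 * np.sin(6 * X[:, 0])   # high fidelity = baseline + correction

# Only a small subset carries high-fidelity labels, as in the multi-fidelity setting
idx = rng.choice(n, 50, replace=False)
delta_model = RandomForestRegressor(n_estimators=200, random_state=0)
delta_model.fit(X[idx], (E_high - E_low)[idx])

E_pred = E_low + delta_model.predict(X)      # Δ-ML prediction for all molecules
rmse = np.sqrt(np.mean((E_pred - E_high) ** 2))
print(f"Delta-ML RMSE: {rmse:.4f}")
```

The point of the construction is visible even here: the model only has to learn the (small, smooth) correction, not the full energy surface, so 50 high-fidelity labels suffice.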
Procedia PDF Downloads 41
11 Exploring Drivers and Barriers to Environmental Supply Chain Management in the Pharmaceutical Industry of Ghana
Authors: Gifty Kumadey, Albert Tchey Agbenyegah
Abstract:
(i) Overview and research goal(s): This study aims to address research gaps in the Ghanaian pharmaceutical industry by examining the impact of environmental supply chain management (ESCM) practices on environmental and operational performance. Previous studies have provided inconclusive evidence on the relationship between ESCM practices and environmental and operational performance. The research aims to provide a clearer understanding of the impact of ESCM practices on environmental and operational performance in the context of the Ghanaian pharmaceutical industry. Limited research has been conducted on ESCM practices in developing countries, particularly in Africa. The study aims to bridge this gap by examining the drivers and barriers specific to the pharmaceutical industry in Ghana. The research analyzes the impact of ESCM practices on the achievement of the Sustainable Development Goals (SDGs) in the Ghanaian pharmaceutical industry, focusing on SDGs 3, 12, 13, and 17. It also explores the potential for partnerships and collaborations to advance ESCM practices in the pharmaceutical industry. The research hypotheses suggest that stakeholder pressure positively influences the adoption of ESCM practices in the Ghanaian pharmaceutical industry. By addressing these goals, the study aims to contribute to sustainable development initiatives and offer practical recommendations to enhance ESCM practices in the industry. (ii) Research methods and data: This study uses a quantitative research design to examine the drivers and barriers to environmental supply chain management in the pharmaceutical industry in Accra. The sample comprises approximately 150 employees, including senior and middle-level managers from the pharmaceutical industry of Ghana. A purposive sampling technique is used to select participants with relevant knowledge and experience in environmental supply chain management. Data will be collected using a structured questionnaire with Likert-scale responses.
Descriptive statistics will be used to analyze the data and provide insights into current practices and their impact on environmental and operational performance. (iii) Preliminary results and conclusions: Main contributions: identifying drivers of and barriers to ESCM in Ghana's pharmaceutical industry, evaluating current ESCM practices, examining their impact on performance, providing practical insights, and contributing to knowledge on ESCM in the Ghanaian context. The research contributes to SDGs 3, 9, and 12 by promoting sustainable practices and responsible consumption in the industry. The study found that government rules and regulations are the most critical drivers for ESCM adoption, with senior managers playing a significant role. However, employee and competitor pressures have a lesser impact. The industry has made progress in implementing certain ESCM practices, but there is room for improvement in areas like green distribution and reverse logistics. The study emphasizes the importance of government support, management engagement, and comprehensive implementation of ESCM practices in the industry. Future research should focus on overcoming barriers and challenges to effective ESCM implementation. Keywords: environmental supply chain, sustainable development goal, Ghana pharmaceutical industry, government regulations
10 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy
Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay
Abstract:
Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, associated with a three-fold risk of poor outcome, and is more amenable to corrective intervention when identified and managed early. Multiple definitions for stratification of patients' risk for early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition for acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5, and the Trauma-Induced Coagulopathy Clinical Score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be better suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results. Keywords: trauma, coagulopathy, prediction, model
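The cut-off derivation described above (receiver operating characteristic analysis of conventional assays) can be sketched with a common threshold-selection rule, Youden's J statistic. The function below is a minimal illustration; the INR values, labels, and the use of Youden's J are hypothetical assumptions, not the study's actual data or method.

```python
# Hypothetical sketch of deriving an assay cut-off via Youden's J statistic
# (sensitivity + specificity - 1), one common rule in ROC-based threshold
# selection. All values below are illustrative only.

def youden_cutoff(values, labels, thresholds):
    """Return (threshold, J) maximising sensitivity + specificity - 1.

    values: assay results (e.g., INR); labels: 1 = coagulopathic, 0 = not.
    A patient is classified positive when value >= threshold.
    """
    positives = [v for v, l in zip(values, labels) if l == 1]
    negatives = [v for v, l in zip(values, labels) if l == 0]
    best_t, best_j = None, -1.0
    for t in thresholds:
        sens = sum(v >= t for v in positives) / len(positives)
        spec = sum(v < t for v in negatives) / len(negatives)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Illustrative INR values and outcome labels (not the study's data):
inr = [1.0, 1.1, 1.15, 1.2, 1.25, 1.3, 1.4, 1.05, 1.12, 1.35]
lab = [0, 0, 0, 1, 1, 1, 1, 0, 0, 1]
cutoff, j = youden_cutoff(inr, lab, sorted(set(inr)))
```

On this toy data the rule recovers the threshold that separates the two groups; in practice the candidate thresholds would be the observed assay values, and confidence intervals around the cut-off would be reported.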
9 Making Sense of C. G. Jung’s Red Book and Black Books: Masonic Rites and Trauma
Authors: Lynn Brunet
Abstract:
In 2019 the author published a book-length study examining Jung’s Red Book. This study consisted of a close reading of each of the chapters in Liber Novus, focussing on the fantasies themselves and Jung’s accompanying paintings. It found that the plots, settings, characters and symbolism in each of these fantasies are not entirely original but remarkably similar to those found in some of the higher degrees of Continental Freemasonry. Jung was the grandson of his namesake, C. G. Jung (1794–1864), who was a Freemason and one-time Grand Master of the Swiss Masonic Lodge. The study found that the majority of Jung’s fantasies are very similar to those of the Ancient and Accepted Scottish Rite, practiced in Switzerland during Jung’s childhood. It argues that the fantasies appear to be memories of a series of terrifying initiatory ordeals conducted using spurious versions of the Masonic rites. ‘Spurious Freemasonry’ is the term Masons use for ‘irregular’ or illegitimate uses of the rituals that are not sanctioned by the Order. Since the 1980s there have been multiple reports of ritual trauma amongst a wide variety of organizations, cults and religious groups, which psychologists, counsellors, social workers, and forensic scientists have confirmed. The abusive use of Masonic rites features frequently in these reports. This initial study allows a reading of The Red Book that makes sense of the obscure references, bizarre scenarios and intense emotional trauma described by Jung throughout Liber Novus. It suggests that Jung appears to have undergone a cruel initiatory process as a child. The author is currently examining the extra material found in Jung’s Black Books, and the results are confirming the original discoveries while demonstrating a number of aspects not covered in the first publication.
These include the complex layering of ancient gods and belief systems in answer to Jung’s question, ‘In which underworld am I?’ The study demonstrates that the majority of these ancient systems and their gods are discussed in a handbook for the Scottish Rite, Morals and Dogma by Albert Pike, but that the way they are presented by Philemon and his soul is intended to confuse Jung rather than clarify their purpose. This new study also examines Jung’s soul’s question, ‘I am not a human being. What am I then?’ Further themes that emerge from the Black Books include his struggle with vanity and whether he should continue creating his ‘holy book’, and a comparison between Jung’s ‘mystery plays’ and examples from the Theatre of the Absurd. Overall, it demonstrates that Jung’s experience, while inexplicable in his own time, is now known to be the secret and abusive practice of initiation of the young found in a range of cults and religious groups in many first-world countries. This paper will present a brief outline of the original study and then examine the themes that have emerged from the extra material found in the Black Books. Keywords: C. G. Jung, the red book, the black books, masonic themes, trauma and dissociation, initiation rites, secret societies
8 Comparisons of Drop Jump and Countermovement Jump Performance for Male Basketball Players with and without Low-Dye Taping Application
Authors: Chung Yan Natalia Yeung, Man Kit Indy Ho, Kin Yu Stan Chan, Ho Pui Kipper Lam, Man Wah Genie Tong, Tze Chung Jim Luk
Abstract:
Excessive foot pronation is a well-known risk factor for knee and foot injuries such as patellofemoral pain, patellar and Achilles tendinopathy, and plantar fasciitis. Low-Dye taping (LDT) application is not uncommon among basketball players seeking to control excessive foot pronation for pain control and injury prevention. The primary potential benefits of LDT include providing additional support to the medial longitudinal arch and restricting excessive midfoot and subtalar motion in weight-bearing activities such as running and landing. Meanwhile, the restriction provided by the rigid tape may also limit functional joint movements and sports performance. Coaches and athletes need to weigh the potential benefits and harmful effects before deciding whether applying the LDT technique is worthwhile. However, the influence of LDT on basketball-related performance such as explosive and reactive strength is not well understood. Therefore, the purpose of this study was to investigate the change in drop jump (DJ) and countermovement jump (CMJ) performance before and after LDT application for collegiate male basketball players. In this within-subject crossover study, 12 healthy male basketball players (age: 21.7 ± 2.5 years) with at least three years of regular basketball training experience were recruited. The navicular drop (ND) test was adopted for screening, and only those with excessive pronation (ND ≥ 10 mm) were included. Participants with a recent lower limb injury history were excluded. Recruited subjects performed ND, DJ (from a 40 cm platform) and CMJ (without arm swing) tests in series during taped and non-taped conditions in counterbalanced order. The reactive strength index (RSI) was calculated as the measured flight time divided by the ground contact time. For the DJ and CMJ tests, the best of three trials was used for analysis.
The difference between taped and non-taped conditions for each test was further calculated as standardized effect ± 90% confidence intervals (CI) with clinical magnitude-based inference (MBI). A paired-samples t-test showed a significant decrease in ND (-4.68 ± 1.44 mm; 95% CI: -3.77, -5.60; p < 0.05), while MBI demonstrated a most likely beneficial and large effect (standardized effect: -1.59 ± 0.27) in the LDT condition. For the DJ test, significant increases in both flight time (25.25 ± 29.96 ms; 95% CI: 6.22, 44.28; p < 0.05) and RSI (0.22 ± 0.22; 95% CI: 0.08, 0.36; p < 0.05) were observed. In the taped condition, MBI showed a very likely beneficial and moderate effect (standardized effect: 0.77 ± 0.49) in flight time, a possibly beneficial and small effect (standardized effect: -0.26 ± 0.29) in ground contact time, and a very likely beneficial and moderate effect (standardized effect: 0.77 ± 0.42) in RSI. No significant difference in CMJ was observed (95% CI: -2.73, 2.08; p > 0.05). For basketball players with pes planus, applying LDT could substantially support the foot by elevating the navicular height and potentially provide acute beneficial effects on reactive strength performance. Meanwhile, no significant harmful effect on CMJ was observed. Basketball players may consider applying LDT before a game or training session to enhance reactive strength performance. However, since the observed effects in this study may not generalize to players without excessive foot pronation, further studies on players with normal foot arch or navicular height are recommended. Keywords: flight time, pes planus, pronated foot, reactive strength index
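The RSI computation defined above (flight time divided by ground contact time) can be sketched in a few lines; the jump-height estimate from flight time is a standard flight-time method added for illustration, and the numeric values are hypothetical, not the study's measurements.

```python
# Minimal sketch of the reactive strength index (RSI) computation described
# above: flight time divided by ground contact time. Values are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def reactive_strength_index(flight_time_s, contact_time_s):
    """RSI as flight time / ground contact time (dimensionless)."""
    return flight_time_s / contact_time_s

def jump_height_from_flight_time(flight_time_s):
    """Standard flight-time estimate of jump height (m): h = g * t^2 / 8."""
    return G * flight_time_s ** 2 / 8

rsi = reactive_strength_index(0.550, 0.220)     # 550 ms flight, 220 ms contact
height = jump_height_from_flight_time(0.550)    # ~0.37 m for this flight time
```

This makes explicit why a longer flight time or a shorter contact time (both reported to improve under taping in the DJ test) raises RSI.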
7 Carbon-Foam Supported Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells
Authors: Albert Mufundirwa, Satoru Yoshioka, K. Ogi, Takeharu Sugiyama, George F. Harrington, Bretislav Smid, Benjamin Cunning, Kazunari Sasaki, Akari Hayashi, Stephen M. Lyth
Abstract:
Polymer electrolyte membrane fuel cells (PEMFCs) are electrochemical energy conversion devices used for portable, residential and vehicular applications due to their low emissions, high efficiency, and quick start-up characteristics. However, PEMFCs generally use expensive Pt-based electrocatalysts as electrode catalysts. Due to the high cost and limited availability of platinum, research and development to either drastically reduce platinum loading or replace platinum with alternative catalysts is of paramount importance. A combination of high-surface-area supports and nano-structured active sites is essential for effective operation of catalysts. We synthesize carbon foam supports by thermal decomposition of sodium ethoxide, using a template-free, gram-scale, cheap, and scalable pyrolysis method. This carbon foam has a high-surface-area, highly porous, three-dimensional framework which is ideal for electrochemical applications. These carbon foams can have surface areas larger than 2500 m²/g, and electron microscopy reveals that they have micron-scale cells separated by few-layer graphene-like carbon walls. We applied this carbon foam as a platinum catalyst support, resulting in improved electrochemical surface area and mass activity for the oxygen reduction reaction (ORR) compared to carbon black. Similarly, silver-decorated carbon foams showed higher activity and efficiency for electrochemical carbon dioxide conversion than silver-decorated carbon black. A promising alternative to Pt-catalysts for the ORR is iron-impregnated nitrogen-doped carbon catalysts (Fe-N-C). Doping carbon with nitrogen alters the chemical structure and modulates the electronic properties, allowing a degree of control over the catalytic properties. We have adapted our synthesis method to produce nitrogen-doped carbon foams with large surface area, using triethanolamine as a nitrogen feedstock, in a novel bottom-up protocol.
These foams are then infiltrated with iron acetate (FeAc) and pyrolysed to form Fe-N-C foams. The resulting Fe-N-C foam catalysts have high initial activity (half-wave potential of 0.68 V vs. RHE), comparable to that of commercially available Pt-free catalysts (e.g., NPC-2000, Pajarito Powder) in acid solution. In alkaline solution, the Fe-N-C carbon foam catalysts have a half-wave potential of 0.89 V vs. RHE, higher than that of NPC-2000 by almost 10 mV and far out-performing platinum. However, durability is still a problem at present. The lessons learned from X-ray absorption spectroscopy (XAS), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), and electrochemical measurements will be used to carefully design Fe-N-C catalysts for higher-performance PEMFCs. Keywords: carbon-foam, polymer electrolyte membrane fuel cells, platinum, Pt-free, Fe-N-C, ORR
6 Development and Validation of a Quantitative Measure of Engagement in the Analysing Aspect of Dialogical Inquiry
Authors: Marcus Goh Tian Xi, Alicia Chua Si Wen, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
The Map of Dialogical Inquiry provides a conceptual look at the underlying nature of future-oriented skills. According to the Map, learning is learner-oriented, with conversational time shifted from teachers to learners, who play a strong role in deciding what and how they learn. For example, in courses operating on the principles of Dialogical Inquiry, learners left the classroom with a deeper understanding of the topic, broader exposure to differing perspectives, and stronger critical thinking capabilities compared to traditional approaches to teaching. Despite its contributions to learning, the Map is grounded in a qualitative approach both in its development and in its application for providing feedback to learners and educators. Studies hinge on open-ended responses by Map users, which can be time-consuming and resource-intensive. The present research is motivated by this gap in practicality and aims to develop and validate a quantitative measure of the Map. A quantifiable measure may also strengthen applicability by making learning experiences trackable and comparable. The Map outlines eight learning aspects that learners should holistically engage. This research focuses on the Analysing aspect of learning. According to the Map, Analysing has four key components: liking or engaging in logic, using interpretative lenses, seeking patterns, and critiquing and deconstructing. Existing scales of constructs related to these components (e.g., critical thinking, rationality) were identified, from which the current scale could adapt items. Specifically, items were phrased beginning with an “I”, followed by an action phrase, to assess learners' engagement with Analysing either in general or in classroom contexts.
Following standard scale development procedure, the 26-item Analysing scale was administered to 330 participants alongside existing scales with varying levels of association to Analysing, to establish construct validity. Subsequently, the scale was refined and its dimensionality, reliability, and validity were determined. Confirmatory factor analysis (CFA) revealed whether scale items loaded onto the four factors corresponding to the components of Analysing. To refine the scale, items were systematically removed via an iterative procedure, according to their factor loadings and the results of likelihood ratio tests at each step. Eight items were removed this way. The Analysing scale is better conceptualised as unidimensional, rather than comprising the four components identified by the Map, for three reasons: 1) the covariance matrix of the model specified for the CFA was not positive definite, 2) correlations among the four factors were high, and 3) exploratory factor analyses did not yield an easily interpretable factor structure of Analysing. Regarding validity, since the Analysing scale had higher correlations with conceptually similar scales than with conceptually distinct scales, with minor exceptions, construct validity was largely established. Overall, the satisfactory reliability and validity of the scale suggest that the current procedure can result in a valid and easy-to-use measure for each aspect of the Map. Keywords: analytical thinking, dialogical inquiry, education, lifelong learning, pedagogy, scale development
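One reliability check commonly run in scale development of this kind is Cronbach's alpha over the item responses. The sketch below is a generic illustration of that computation on synthetic Likert data; it does not reproduce the study's analysis, and the response matrix is invented for the example.

```python
# Hedged sketch of an internal-consistency check used in scale development:
# Cronbach's alpha over a (respondents x items) matrix. Data are synthetic.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) array of Likert responses.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic responses: three items driven by one common factor plus noise,
# clipped to a 1-5 Likert range (illustrative only).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))
items = np.clip(base + rng.integers(-1, 2, size=(50, 3)), 1, 5)
alpha = cronbach_alpha(items.astype(float))
```

Because the synthetic items share a strong common factor, alpha comes out high; weakly related items would drive it toward zero, which is one signal used when deciding whether a scale is unidimensional.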
5 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument
Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki
Abstract:
To the best of the authors' knowledge, we demonstrate here for the first time a quantified correlation between real-time measured optical features of the ambient aerosol and off-line measured toxicity data. Using these correlations, we present a novel methodology for real-time characterisation of ambient toxicity based on multi-wavelength aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science nowadays. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. Moreover, according to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, while from a health perspective it is also one of the most harmful atmospheric constituents. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicity measurement is predominantly based on posterior analysis of filter-accumulated aerosol with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols; comprehensive analysis of the existing data set is therefore severely limited in many cases. The situation is further complicated by the fact that, even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the actual ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methodology for air quality monitoring are pressing issues in air pollution research.
During the last decades, many experimental studies have verified a relation between the chemical composition and the absorption features of carbonaceous particulate matter, quantified by the Absorption Angström Exponent (AAE). Although the scientific community agrees that photoacoustic spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol in an accurate and reliable way, multi-wavelength PAS instruments, which can selectively characterise the wavelength dependence of absorption, have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. We demonstrate the complete microphysical characterisation of wintertime urban ambient aerosol, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), and a scanning mobility particle sizer with optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient aerosol, we also demonstrate the results of eco-, cyto- and genotoxicity measurements of ambient aerosol based on posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate a diurnal variation of toxicities and of AAE data deduced directly from the multi-wavelength absorption measurement results. Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames test
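The AAE referred to above follows from the standard power-law assumption that aerosol absorption scales as b_abs(λ) ∝ λ^(−AAE), so a two-wavelength measurement determines it directly. The sketch below illustrates that computation; the wavelength pair and absorption coefficients are illustrative values, not data from the 4λ-PAS campaign.

```python
# Sketch of the Absorption Angstrom Exponent (AAE) deduced from a
# two-wavelength absorption measurement, assuming the power law
# b_abs(lambda) ~ lambda**(-AAE). Numbers below are illustrative only.
import math

def absorption_angstrom_exponent(b1, b2, wl1, wl2):
    """AAE from absorption coefficients b1, b2 at wavelengths wl1, wl2 (nm)."""
    return -math.log(b1 / b2) / math.log(wl1 / wl2)

# Example: absorption halves when the wavelength doubles, giving AAE = 1,
# the value typically associated with pure black carbon:
aae = absorption_angstrom_exponent(b1=20.0, b2=10.0, wl1=266.0, wl2=532.0)
```

With more than two wavelengths, as in a 4λ instrument, AAE is instead obtained as the slope of a log-log fit over all wavelength pairs, which makes the wavelength dependence (and hence composition signatures such as brown carbon) easier to resolve.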
4 Ragging and Sludging Measurement in Membrane Bioreactors
Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd
Abstract:
Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact costs more significantly than membrane surface fouling which, unlike clogging, is largely mitigated by chemical cleaning. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and clogging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can form within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred both for a cotton wool standard and for samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations forming zero rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat-sheet MBR. Sludge samples were provided by two local MBRs, one treating municipal and the other industrial effluent. Bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ).
The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the rate of pressure increase against flux via the flux-step test (for which clogging was absent), and sludging by photographing the channel and processing the image to determine the ratio of clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux from classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples from both sludge origins, the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contributions of fouling and clogging were appraised by adjusting the clogging propensity via increasing the MLSS, both with and without a commensurate increase in the COD. Results indicated that, whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling. Keywords: clogging, membrane bioreactors, ragging, sludge
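The image-based sludging metric described above (the ratio of clogged to unclogged channel regions) can be sketched as a simple threshold-and-count over a grayscale image. The snippet below is a minimal illustration of that idea; the threshold value and the synthetic "image" array are assumptions, not details of the study's actual image-processing pipeline.

```python
# Minimal sketch of quantifying sludging from a channel photograph:
# threshold the grayscale image and report the clogged-area fraction.
# The array below is a synthetic stand-in for the processed photograph.
import numpy as np

def clogged_fraction(gray_image: np.ndarray, threshold: float) -> float:
    """Fraction of pixels darker than `threshold`, taken as the clogged
    region of the membrane channel (0 = fully open, 1 = fully clogged)."""
    clogged = gray_image < threshold
    return clogged.sum() / gray_image.size

# Synthetic 10x10 'image': left 40% dark (sludge-filled), rest bright (open):
img = np.ones((10, 10))
img[:, :4] = 0.1
fraction = clogged_fraction(img, threshold=0.5)
```

Tracking this fraction over successive photographs gives a clogging rate that can then be compared against bulk sludge properties such as MLSS, CST and sCOD.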
3 Effects of Applying Low-Dye Taping in Performing Double-Leg Squat on Electromyographic Activity of Lower Extremity Muscles for Collegiate Basketball Players with Excessive Foot Pronation
Authors: I. M. K. Ho, S. K. Y. Chan, K. H. P. Lam, G. M. W. Tong, N. C. Y. Yeung, J. T. C. Luk
Abstract:
Low-Dye taping (LDT) is commonly used for treating foot problems, such as plantar fasciitis, and for supporting the foot arch in runners and non-athlete patients with pes planus. The potential negative impact of pronated feet leading to tibial and femoral internal rotation via the entire kinetic chain reaction has been postulated and identified. The changed lower limb biomechanics, potentially leading to poor activation of hip and knee stabilizers such as gluteus maximus and medius, may be associated with a higher risk of knee injuries, including patellofemoral pain syndrome and ligamentous sprain, in many team sports players. It is therefore speculated that foot arch correction with LDT might enhance the use of gluteal muscles. The purpose of this study was to investigate the effect of applying LDT on the surface electromyographic (sEMG) activity of superior gluteus maximus (SGMax), inferior gluteus maximus (IGMax), gluteus medius (GMed) and tibialis anterior (TA) during the double-leg squat. 12 male collegiate basketball players (age: 21.7±2.5 years; body fat: 12.4±3.6%; navicular drop: 13.7±2.7 mm) with at least three years of regular basketball training experience participated in this study. Participants were excluded if they had a recent history of lower limb injuries, over 16.6% body fat, or less than a 10 mm drop in the navicular drop (ND) test. Recruited subjects visited the laboratory once for the within-subject crossover study. Maximum voluntary isometric contraction (MVIC) tests on all selected muscles were performed in randomized order, followed by sEMG tests on the double-leg squat during LDT and non-LDT conditions in counterbalanced order. SGMax, IGMax, GMed and TA activities during the entire 2-second concentric and 2-second eccentric phases were normalized and interpreted as %MVIC. The magnitude of the difference between taped and non-taped conditions for each muscle was further assessed via standardized effect ± 90% confidence intervals (CI) with non-clinical magnitude-based inference.
A paired-samples t-test showed a significant decrease (4.7±1.4 mm) in ND (95% CI: 3.8, 5.6; p < 0.05), while no significant difference was observed between taped and non-taped conditions in the sEMG tests for all muscles and contractions (p > 0.05). On top of traditional significance testing, magnitude-based inference showed a possible increase in IGMax activity (small standardized effect: 0.27±0.44), a likely increase in GMed activity (small standardized effect: 0.34±0.34) and a possible increase in TA activity (small standardized effect: 0.22±0.29) during the eccentric phase. It is speculated that the decrease of navicular drop supported by LDT application could potentially enhance the use of inferior gluteus maximus and gluteus medius, especially during the eccentric phase. As the eccentric phase of the double-leg squat is an important component of landing activities in basketball, further studies on the onset and amount of gluteal activation during jumping and landing activities with LDT are recommended. Since neither hip nor knee kinematics were measured in this study, the underlying cause of the observed increase in gluteal activation during the squat after LDT is inconclusive. In this regard, the investigation of relationships between LDT application, ND, hip and knee kinematics, and gluteal muscle activity during sport-specific jumping and landing tasks should be the focus of future work. Keywords: flat foot, gluteus maximus, gluteus medius, injury prevention
2 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations
Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai
Abstract:
Sheet pile systems can be an attractive solution for harbor and quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. Under this concern, the study focuses particularly on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Currently, design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations. They apply a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations with Plaxis 2D, a well-known geotechnical software package, and CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. This provides hydrodynamic pressures under seismic action, which fit theoretical Westergaard pressures when these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that, due to its instantaneous nature, the hydrodynamic pressure contributes only about 5% of the total load applied on the sheet pile.
These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature of sheet pile design is the assessment of overall geotechnical stability. This assessment uses pseudo-static analysis, since dynamic analysis cannot provide a safety calculation; consequently, the seismic action must be estimated. One of the relevant factors is the selection of the seismic reduction factor. Many studies discuss its importance as well as its uncertainties. Moreover, current European standards do not give a clear statement on this point and recommend using a reduction factor equal to 1, which leads to conservative requirements compared with more advanced methods. Under these circumstances, the study calibrates the seismic reduction factor by fitting pseudo-static results to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies from Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would help specify recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room to improve its methodologies and approaches; consequently, designs could offer better seismic solutions thanks to advanced methods such as those presented in this study.
Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile
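The role of the reduction factor can be illustrated with the Eurocode 8 (EN 1998-5) form of the pseudo-static coefficient, kh = α·S/r. The site values below are hypothetical, and r = 2 is used simply to show how a calibrated factor halves the seismic action relative to the conservative r = 1 recommendation:

```python
def pseudo_static_coefficient(alpha, soil_factor, r):
    """Horizontal seismic coefficient in the EN 1998-5 form kh = alpha * S / r,
    where alpha = a_gR/g is the normalized design ground acceleration, S is the
    soil factor, and r >= 1 is the seismic reduction factor."""
    return alpha * soil_factor / r

# Hypothetical site: alpha = 0.3, soil factor S = 1.2
kh_conservative = pseudo_static_coefficient(0.3, 1.2, 1.0)  # r = 1 (standard)
kh_calibrated = pseudo_static_coefficient(0.3, 1.2, 2.0)    # calibrated r
reduction = 1.0 - kh_calibrated / kh_conservative           # 50% lower action
```

A calibrated r between roughly 1.7 and 2 corresponds to the 40-50% reduction in seismic action reported by the study.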
Procedia PDF Downloads 142
1 Reverse Logistics Network Optimization for E-Commerce
Authors: Albert W. K. Tan
Abstract:
This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate the key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research is to develop a comprehensive framework for advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research are: (i) identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains; (ii) formulating a structured framework that offers informed recommendations for sound network design decisions applicable to relevant industries and scenarios; and (iii) proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed by combining insights from the literature review with information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operating modes: (1) collection, (2) sorting and testing, (3) processing, and (4) storage. Key factors to consider in reverse logistics network design: I) Centralized vs. decentralized processing: centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies.
In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. By contrast, a decentralized system is more suitable when products are returned directly from consumers to retailers; in this scenario, individual sales outlets act as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: the decision between insourcing and outsourcing in reverse logistics network design is pivotal. With insourcing, a company handles the entire reverse logistics process, including material reuse; with outsourcing, third-party providers take on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or a lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors employ mixed-integer linear programming to find optimal solutions that minimize costs while meeting organizational objectives.
Keywords: reverse logistics, supply chain management, optimization, e-commerce
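As a toy illustration of such a model, the facility-location core of a reverse logistics network (which processing sites to open, and where to route returns) can be solved exactly by brute force on a tiny instance. All figures below are hypothetical, the single-sourcing simplification is ours, and a real model of this kind would be handed to a MILP solver rather than enumerated:

```python
from itertools import product

# Hypothetical toy instance: 3 candidate processing facilities, 4 return zones.
FIXED_COST = [100, 120, 90]   # cost of opening each facility
CAPACITY = [60, 80, 50]       # units each facility can process
RETURNS = [30, 20, 40, 25]    # returned units collected in each zone
TRANSPORT = [                 # unit shipping cost TRANSPORT[facility][zone]
    [2, 4, 5, 3],
    [3, 1, 3, 2],
    [5, 3, 2, 4],
]

def optimize_network():
    """Exhaustively search single-sourcing plans (each zone ships all of its
    returns to exactly one open facility) and return (min_cost, assignment)."""
    n_fac, n_zone = len(FIXED_COST), len(RETURNS)
    best_cost, best_plan = float("inf"), None
    for plan in product(range(n_fac), repeat=n_zone):
        load = [0] * n_fac
        for zone, fac in enumerate(plan):
            load[fac] += RETURNS[zone]
        if any(load[f] > CAPACITY[f] for f in range(n_fac)):
            continue  # violates a facility capacity constraint
        cost = sum(FIXED_COST[f] for f in set(plan))  # open used facilities
        cost += sum(TRANSPORT[fac][zone] * RETURNS[zone]
                    for zone, fac in enumerate(plan))
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_cost, best_plan
```

The binary open/close decisions here play the role of the integer variables in a mixed-integer formulation, while the zone-to-facility flows correspond to its continuous variables.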
Procedia PDF Downloads 38