Search results for: near field communication
674 Implementation of Active Recovery at Immediate, 12 and 24 Hours Post-Training in Young Soccer Players
Authors: C. Villamizar, M. Serrato
Abstract:
In the pursuit of athletic performance, the role of physical training, which is determined by the loads imposed on the physiological and musculoskeletal systems of the human body through intensity and duration, is fundamental. Given the physical demands of these activities, both training and competition must take into account the optimal relationship between strain and post-effort recovery, favoring the process of overcompensation, which aims to facilitate the return and rise of energy potential and protein synthesis in different tissues. This allows muscle function to return to baseline or pre-exercise states. If this recovery process is not performed or is not allowed to occur properly, the result will be an increased state of fatigue. Active recovery is one of the strategies implemented in sport for a return to pre-exercise physiological states. However, there are some assumptions regarding possible adverse effects, such as the possibility of increasing the degradation of muscle glycogen and thus delaying its synthesis. It is therefore necessary to investigate the effects generated when active recovery is applied at different times after the effort. The aim of this study was to determine the effects of post-effort active recovery performed at three different times: immediately, at 12 hours and at 24 hours, on the biochemical marker creatine kinase in youth-category soccer players. A randomized controlled trial with allocation to three groups was performed: A. active recovery immediately after the effort; B. active recovery performed 12 hours after the effort; C. active recovery performed 24 hours after the effort. This study included 27 subjects belonging to a Colombian soccer team of the second division. Vital signs, weight, height, BMI, percentage of muscle mass, percentage of fat mass, and personal and family medical history were assessed. Velocity, explosive force and creatine kinase (CK) in blood were tested before and after the interventions. The SAFT 90 protocol (Soccer Field-specific Aerobic Test) was applied to participants to generate fatigue. CK samples were taken one hour before the application of the fatigue test, one hour after the fatigue protocol, and 48 hours after the initial CK sample. Mean age was 18.5 ± 1.1 years. Improvements in jumping and speed recovery were found in the 3 groups (p < 0.05), but no statistically significant differences between groups were observed after recuperation. In all participants, there was a significant increase in CK when SAFT 90 was applied in all the groups (median 103.1-111.1). The CK measurement after 48 hours reflects a recovery in all groups; however, group C showed a decline below baseline levels of -55.5 (-96.3 / -20.4), which is a significant finding. Other research has shown that CK does not return quickly to its baseline, but our study shows that active recovery favors the clearance of CK and that performing recovery 24 hours after the effort generates higher clearance of this biomarker.
Keywords: active recuperation, creatine phosphokinase, post training, young soccer players
Procedia PDF Downloads 159
673 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics
Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer
Abstract:
Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well-suited for miniaturization, a highly sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool, but it has very weak signals and hence is unsuitable for trace-level analysis. Enhancement via localized optical fields (surface plasmon resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single-molecule detection. This enhancement can be tuned by manipulating the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, the development of a cost-effective approach to create nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yields a considerable SERS enhancement. This enables each single unit to function as an isolated sensor. Each of the formed structures can be effectively tuned and tailored to provide high SERS enhancement while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates on the miniaturized scale will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, ecological warfare and homeland security.
Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of care devices, SERS
Procedia PDF Downloads 344
672 Becoming Vegan: The Theory of Planned Behavior and the Moderating Effect of Gender
Authors: Estela Díaz
Abstract:
This article aims to make three contributions. First, to build on the ethical decision-making literature by exploring factors that influence the intention to adopt veganism. Second, to study the superiority of extended models of the Theory of Planned Behavior (TPB) for understanding the process involved in forming the intention to adopt veganism. Third, to analyze the moderating effect of gender on TPB, given that attitudes and behavior towards animals are gender-sensitive. No study, to our knowledge, has examined these questions. Veganism is not a diet but a political and moral stand that excludes, for moral reasons, the use of animals. Although there is growing interest in studying veganism, it continues to be overlooked in empirical research, especially within the domain of social psychology. TPB has been widely used to study a broad range of human behaviors, including moral issues. Nonetheless, TPB has rarely been applied to examine ethical decisions about animals and, even less, to veganism. Hence, the validity of TPB in predicting the intention to adopt veganism remains an open question. A total of 476 non-vegan Spanish university students (55.6% female; mean age 23.26 years, SD = 6.1) responded to an online or pencil-and-paper self-reported questionnaire based on previous studies. The extended TPB models incorporated two background factors: ‘general attitudes towards humanlike attributes ascribed to animals’ (AHA) (capacity to reason, feel emotions and suffer; moral consideration; and affect towards animals); and ‘general attitudes towards 11 uses of animals’ (AUA). SPSS 22 and SmartPLS 3.0 were used for statistical analyses. This study constructed a second-order reflective-formative model and took the multi-group analysis (MGA) approach to study gender effects. Six models of TPB (the standard and five competing) were tested. No a priori hypotheses were formulated. The results gave partial support to TPB. Attitudes (ATTV) (β = .207, p < .001), subjective norms (SNV) (β = .323, p < .001), and perceived behavioral control (PCB) (β = .149, p < .001) had a significant direct effect on intentions (INTV). This model accounted for 27.9% of the variance in intention (R²Adj = .275) and had a small predictive relevance (Q² = .261). However, findings from this study reveal that, contrary to what TPB generally proposes, the effect of the background factors on intentions was not fully mediated by the proximal constructs of intentions. For instance, in the final model (Model #6), both factors had significant indirect effects on INTV (β = .074, 95% CI = .030, .126 [AHA→INTV]; β = .101, 95% CI = .055, .155 [AUA→INTV]) and significant direct effects on INTV (β = .175, p < .001 [AHA→INTV]; β = .100, p = .003 [AUA→INTV]). Furthermore, the addition of direct paths from background factors to intentions improved the explained variance in intention (R² = .324; R²Adj = .317) and the predictive relevance (Q² = .300) over the base model. This supports existing literature on the superiority of enhanced TPB models for predicting ethical issues, which suggests that moral behavior may add additional complexity to decision-making. Regarding the gender effect, MGA showed that gender only moderated the influence of AHA on ATTV (e.g., βWomen − βMen = .296, p < .001 [Model #6]). However, other observed gender differences (e.g., the explained variance of the model for intentions was always higher for men than for women; for instance, R²Women = .298, R²Men = .394 [Model #6]) deserve further consideration, especially for developing more effective communication strategies.
Keywords: veganism, Theory of Planned Behavior, background factors, gender moderation
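As a rough illustration of the mediation logic examined above, the sketch below estimates the direct effect of a background factor (AHA) on intention (INTV) and bootstraps its indirect effect through attitudes (ATTV). It is a minimal Python sketch using ordinary least squares as a stand-in for the PLS-SEM analysis actually run in SmartPLS; the column names and the simulated data frame are hypothetical.

```python
# Minimal sketch (not the authors' SmartPLS analysis): direct and bootstrapped indirect
# effects of a background factor (AHA) on intention (INTV) via attitudes (ATTV).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def indirect_effect(df: pd.DataFrame) -> float:
    # a-path: AHA -> ATTV
    a = sm.OLS(df["ATTV"], sm.add_constant(df[["AHA"]])).fit().params["AHA"]
    # b-path: ATTV -> INTV, with the direct AHA -> INTV path included
    b = sm.OLS(df["INTV"], sm.add_constant(df[["ATTV", "AHA"]])).fit().params["ATTV"]
    return a * b

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    rng = np.random.default_rng(seed)
    n = len(df)
    est = [indirect_effect(df.iloc[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.percentile(est, [2.5, 97.5])   # percentile confidence interval

# Simulated scores, only to make the sketch runnable.
rng = np.random.default_rng(0)
aha = rng.normal(size=300)
attv = 0.5 * aha + rng.normal(size=300)
intv = 0.3 * attv + 0.2 * aha + rng.normal(size=300)
df = pd.DataFrame({"AHA": aha, "ATTV": attv, "INTV": intv})
print(indirect_effect(df), bootstrap_ci(df))
```

With real item scores, the bootstrap percentile interval would play the same role as the 95% CIs reported for the indirect effects above.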
Procedia PDF Downloads 345
671 Evaluating a Peer-To-Peer Health Education Program in Public Housing Communities during the COVID-19 Pandemic
Authors: Jane Oliver, Angeline Ferdinand, Jessica Kaufman, Peta Edler, Nicole Allard, Margie Danchin, Katherine B. Gibney
Abstract:
Background: The cohealth Health Concierge program operated in Melbourne, Australia, from July 2020 to 30 June 2022. The program was designed to provide place-based peer-to-peer COVID-19 education and support to culturally and linguistically diverse residents of high-rise public housing estates. During this time, the COVID-19 public health response changed frequently. We conducted a mixed-methods evaluation to determine the program’s impact on residents’ trust, engagement and communication with health services and public health activities. Methods: The RE-AIM model was used to assess program reach, effectiveness, adoption, implementation and maintenance, and the evaluation was informed by a Project Reference Group including end-users. Data were collected between March and May 2022 in four estates where the program operated. We surveyed 301 residents, conducted qualitative interviews with 32 stakeholders and analyzed data from 20,901 forms reporting interactions between Health Concierges and residents collected from August 2021 to May 2022. These forms outlined the support provided by Health Concierges during each interaction. Results: Overall, the program was effective in guiding residents to testing and vaccination services and facilitating COVID-19-safe practices. Nearly two-thirds (191; 63.5%) of the 301 surveyed participants reported speaking with a Health Concierge in the previous six months, and some described having meaningful conversations with them. Despite this, many of the interactions residents described having with Health Concierges were superficial. On the adapted Public Health Disaster Trust Scale, the mean score across all estates was 2.3 (slightly more than ‘somewhat confident’) regarding public health authorities’ ability to respond to a localized infectious disease outbreak. While the program was valued during the rapidly changing public health response, many felt it had failed to evolve in the ‘living with COVID’ phase. Some residents expressed frustration with what they perceived as inactive, passive roles for Health Concierges, although other residents felt Health Concierges were helpful and appreciated them. A perception that the true impact of Health Concierges’ work was underrecognized was widely voiced by health staff. All 20,901 Interaction Forms identified COVID-19-related supports provided to residents; almost all included provision of facemasks and/or hand sanitiser, and 78% identified additional supports that were also provided, most frequently provision of other health information. Conclusions: The program disseminated up-to-date information to a diverse population within a rapidly changing public health setting. Health Concierges were able to promote COVID-19-safe behaviours, including vaccine uptake, and link residents with support services. We recommend the program be revised and continued. New programs that draw on the Health Concierge model may be valuable in supporting future pandemic responses and should be considered in preparedness planning.
Keywords: community health, COVID-19 pandemic, infectious diseases, public health, community health workers
Procedia PDF Downloads 98
670 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure
Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati
Abstract:
The effective shear and bulk viscosity, as well as the dynamic viscosity, describe the rheological properties of the ceramic body during the liquid phase sintering process. The rheological parameters depend on the physical and thermomechanical characteristics of the material, such as relative density, temperature, grain size, diffusion coefficient, and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during the liquid phase sintering process, such as the stress-strain relations, the sintering and hydrostatic stresses, and the prediction of anisotropic shrinkage and heterogeneous densification as a function of sintering time, including the simultaneous influence of the gravity field and frictional forces. After raw materials analysis, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three different experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during the liquid phase sintering process is implemented in the CREEP user subroutine code in ABAQUS. The numerical-experimental procedure shows the anisotropic behavior, i.e., clearly different spatial displacements along the three directions, and the incompressibility of the ceramic samples during the sintering process. An anisotropic shrinkage factor has been proposed to investigate the shrinkage anisotropy. It has been shown that the shrinkage along the axis normal to the casting direction is about 1.5 times larger than that along the casting direction, and that the gravitational force involved in pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint distorted sample, respectively. In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in von Mises, pressure, and principal stresses intensifies the relative density non-uniformity in all samples except the free sintering one. The symmetrical distribution of stress around the center of the free sintering sample hinders pyroplastic deformation. The densification results confirmed that the effective bulk viscosity is well defined by the relative density values. The stress analysis confirmed that the sintering stress is greater than the hydrostatic stress from the start to the end of the sintering time, so, from both a theoretical and an experimental point of view, the sintering process occurs completely.
Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure
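The abstract does not give the exact definition of the proposed anisotropic shrinkage factor, so the sketch below assumes a simple ratio-based definition: linear shrinkage normal to the casting direction divided by linear shrinkage along it. It is an illustrative Python sketch only, not the study's implementation.

```python
# Illustrative sketch only: one plausible way to quantify shrinkage anisotropy from
# sample dimensions measured before and after sintering. The ratio-based definition
# of the factor is an assumption.
def linear_shrinkage(l_initial: float, l_final: float) -> float:
    """Fractional linear shrinkage along one axis."""
    return (l_initial - l_final) / l_initial

def anisotropic_shrinkage_factor(dims_initial: dict, dims_final: dict) -> float:
    """Shrinkage normal to the casting direction divided by shrinkage along it."""
    s_normal = linear_shrinkage(dims_initial["normal"], dims_final["normal"])
    s_casting = linear_shrinkage(dims_initial["casting"], dims_final["casting"])
    return s_normal / s_casting

# Example: a factor of ~1.5 matches the anisotropy reported for the cast samples.
k = anisotropic_shrinkage_factor({"normal": 100.0, "casting": 100.0},
                                 {"normal": 88.0, "casting": 92.0})
print(round(k, 2))  # 1.5
```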
Procedia PDF Downloads 340
669 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application
Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu
Abstract:
Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their 2D ultrathin atomic layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (NFETs and PFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into its inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have also been found to achieve the desired property modifications of MoS₂. In this work, we demonstrated a p-type multilayer MoS₂ enabled by selective-area doping using CHF₃ plasma treatment. Compared with single-layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower bandgap and multiple conduction channels. Moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate is carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process is conducted to synthesize the MoS₂ films. In the first step, the temperature is set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr. This is done to transform (NH₄)₂MoS₄ into MoS₃. To further reduce MoS₃ into MoS₂, a second annealing step is performed, in which the temperature is set to 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr. The grown MoS₂ films are subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC original NLD-570). The radio-frequency power of this dry-etching system is set to 100 W and the pressure to 7.5 mTorr. The final thickness of the treated samples is obtained by etching for 30 s. Back-gated MoS₂ PFETs were presented with an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms and exhibited modulation of the generated photocurrent by the back-gate voltage. This work suggests the potential application of the mildly plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.
Keywords: photodetection, p-type doping, multilayers, MoS₂
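For context, the sketch below shows the textbook linear-regime extraction of field-effect mobility, μ = (L / (W·Cox·|Vds|))·max|dId/dVg|, and the on/off ratio from a transfer curve. The device geometry and the synthetic curve are placeholders, not the measured MoS₂ PFET data.

```python
# Hedged sketch: standard linear-regime mobility and on/off-ratio extraction from a
# back-gated FET transfer curve. All numbers below are placeholders.
import numpy as np

def field_effect_mobility(vg, i_d, length, width, c_ox, v_ds):
    """Mobility in cm^2/(V*s); length, width in cm, c_ox in F/cm^2, v_ds in V."""
    gm = np.gradient(i_d, vg)                          # transconductance dId/dVg (A/V)
    return (length / (width * c_ox * abs(v_ds))) * np.max(np.abs(gm))

def on_off_ratio(i_d):
    return np.max(np.abs(i_d)) / np.min(np.abs(i_d))

vg = np.linspace(0, -40, 81)                           # gate voltage sweep (V)
i_d = 1e-9 + 1e-6 / (1 + np.exp((vg + 20) / 3))        # synthetic p-type transfer curve (A)
mu = field_effect_mobility(vg, i_d, length=5e-4, width=50e-4, c_ox=1.15e-8, v_ds=-0.1)
print(round(mu, 2), round(on_off_ratio(i_d)))          # placeholder values only
```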
Procedia PDF Downloads 103
668 Photoemission Momentum Microscopy of Graphene on Ir (111)
Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense
Abstract:
Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative approach to ARPES that uses full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarized) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E, kx, ky) of the full valence electronic structure in approximately 20 minutes. Band dispersion stacks were measured in the energy range from 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands for all six K and K’ points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e., the LDAD) are very sensitive to the interaction of the graphene bands with the substrate bands. Second, the dark corridors were investigated in detail for both p- and s-polarized radiation. They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of six have an oval shape while the rest are more circular, which clearly indicates an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines is visible at energies around 22 eV that is strongly reminiscent of Kikuchi lines in diffraction. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridor.
Keywords: band structure, graphene, momentum microscopy, LDAD
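As a minimal illustration of how such a 3D data stack I(E, kx, ky) can be handled, the Python sketch below extracts a constant-energy momentum map and an energy-momentum cut; the array, axes and index values are hypothetical placeholders rather than the measured data.

```python
# Minimal sketch of slicing a momentum-microscopy data stack I(E, kx, ky).
import numpy as np

stack = np.random.rand(100, 256, 256)       # I(E, kx, ky), placeholder data
energies = np.linspace(14.0, 23.0, 100)     # energy axis in eV, placeholder

def constant_energy_map(stack, energies, e0, width=0.1):
    """Average all energy slices within +/- width of e0 to get I(kx, ky) at that energy."""
    mask = np.abs(energies - e0) <= width
    return stack[mask].mean(axis=0)

def ek_cut(stack, ky_index):
    """Energy-vs-kx cut at a fixed ky row, e.g. through one K point."""
    return stack[:, :, ky_index]

dirac_map = constant_energy_map(stack, energies, e0=18.0)
cut = ek_cut(stack, ky_index=128)
print(dirac_map.shape, cut.shape)
```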
Procedia PDF Downloads 339
667 Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance
Authors: Soheila Sadeghi
Abstract:
Infrastructure projects often encounter contract variations that can significantly deviate from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research aims to quantitatively assess the impact of contract variations on project performance by conducting an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project. The dataset includes tender budget, contract quantities, rates, claims, and revenue data, providing a unique opportunity to investigate the effects of variations on project outcomes. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology involves establishing a baseline for the project’s planned cost and scope by examining the tender budget and contract quantities. Each variation is then analyzed in detail, comparing the actual quantities and rates against the tender estimates to determine their impact on project cost and schedule. The claims data is utilized to track the progress of work and identify deviations from the planned schedule. The study employs statistical analysis using R to examine the dataset, including tender budget, contract quantities, rates, claims, and revenue data. Time series analysis is applied to the claims data to track progress and detect variations from the planned schedule. Regression analysis is utilized to investigate the relationship between variations and project performance indicators, such as cost overruns and schedule delays. The research findings highlight the significance of effective variation management in construction projects. The analysis reveals that variations can have a substantial impact on project cost, schedule, and financial outcomes. The study identifies specific variations that had the most significant influence on the Regional Airport Car Park project’s performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project’s revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact. The research suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes of this research contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents. This constraint restricts the depth of analysis possible in investigating the root causes and full extent of variations' impact on the project. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
Keywords: contract variation impact, quantitative analysis, project performance, claims analysis
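As a hedged illustration of the regression step described above (the study itself used R), the Python sketch below regresses a cost-overrun indicator on variation value with ordinary least squares; the column names and toy records are invented for the example, not taken from the project dataset.

```python
# Illustrative sketch only: OLS regression of cost overrun on variation value.
import pandas as pd
import statsmodels.formula.api as smf

variations = pd.DataFrame({
    "variation_id":    ["PV01", "PV03", "PV06", "PV07", "PV10", "PV15"],
    "variation_value": [15_000, 180_000, 95_000, 60_000, 20_000, 8_000],   # toy amounts
    "cost_overrun":    [12_000, 210_000, 120_000, 70_000, 15_000, 5_000],  # toy impacts
})

model = smf.ols("cost_overrun ~ variation_value", data=variations).fit()
print(model.params)       # intercept and slope of the fitted relationship
print(model.rsquared)     # share of overrun variance explained in this toy example
```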
Procedia PDF Downloads 39
666 Destruction of Colon Cells by Nanocontainers of Ferromagnetic
Authors: Lukasz Szymanski, Zbigniew Kolacinski, Grzegorz Raniszewski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza, Karolina Przybylowska-Sygut, Ireneusz Majsterek, Zbigniew Kaminski, Justyna Fraczyk, Malgorzata Walczak, Beata Kolasinska, Adam Bednarek, Joanna Konka
Abstract:
The aim of this work is to investigate the influence of an electromagnetic field in the radio-frequency range on nanoparticles designed for cancer therapy. In this article, the development and demonstration of the method and of a model device for the hyperthermic selective destruction of cancer cells are presented. The method is based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers (FNCs) includes: the synthesis of carbon nanotubes; chemical and physical characterization; increasing the content of ferromagnetic material; and biochemical functionalization involving the attachment of the key addressing molecules. The ferromagnetic nanocontainers were synthesised in a CVD and microwave plasma system. Biochemical functionalization of the ferromagnetic nanocontainers is necessary in order to increase selective binding to receptors present on the surface of tumour cells. A multi-step modification procedure was finally used to attach folic acid to the surface of the ferromagnetic nanocontainers. Pristine ferromagnetic carbon nanotubes are not suitable for application in medicine and biotechnology; appropriate functionalization of ferromagnetic carbon nanotubes yields materials useful in medicine. The final product carries folic acid on the surface of the FNCs. Folic acid is a ligand of folate receptor α, which is overexpressed on the surface of epithelial tumour cells. It is expected that the folic acid will be recognized and selectively bound by receptors present on the surface of tumour cells. In our research, FNCs were covalently functionalized in a multi-step procedure. The ferromagnetic carbon nanotubes were oxidized using different oxidative agents; for this purpose, strong acids such as HNO3, or a mixture of HNO3 and H2SO4, were used. Reactive carbonyl and carboxyl groups were formed on the open ends and at defects on the sidewalls of the FNCs. These groups allow further modification of the FNCs, such as amidation and the introduction of appropriate linkers which separate the solid surface of the FNCs from the ligand (folic acid). In our studies, amino acids and peptides have been applied as ligands. The last step of the chemical modification was condensation with folic acid. In all reactions, derivatives of 1,3,5-triazine were used as coupling reagents. The first trials in the device with the RF hyperthermia generator have been carried out. The frequency of the RF generator was in the ranges from 10 to 14 MHz and from 265 to 621 kHz. The functionalized nanoparticles obtained made it possible to reach the denaturation temperature of tumor cells at the given frequencies.
Keywords: cancer colon cells, carbon nanotubes, hyperthermia, ligands
Procedia PDF Downloads 312
665 A Survey of Digital Health Companies: Opportunities and Business Model Challenges
Authors: Iris Xiaohong Quan
Abstract:
The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms such as digital health, e-health, mHealth, and telehealth have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to remotely deliver clinical health services to patients. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing but the cultural transformation healthcare has been going through in the 21st century because of digital health technologies that provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies’ self-descriptions: digital health, e-health, and telehealth. A search for “digital health” returned 941 unique results, “e-health” returned 167 companies, and “telehealth” returned 427. We also searched the CBInsights database for similar information. After merging the records, removing duplicates and cleaning up the database, we came up with a list of 1,464 digital health companies. A qualitative method will be used to complement the quantitative analysis. We will do an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve and discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010. 75% of the digital health companies have fewer than 50 employees, and almost 50% have fewer than 10 employees. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” (patients, physicians, and payers), digital health companies extend to “5P” by adding patents, which is the result of technology requirements (such as the development of artificial intelligence models), and platform, which is an effective value-creation approach to bring the stakeholders together. Our case analysis will detail the 5P framework and contribute to the extant knowledge on business models in the healthcare industry.
Keywords: digital health, business models, entrepreneurship opportunities, healthcare
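The merge-and-deduplicate step described above could look roughly like the following pandas sketch; the column names and the small inline records are assumptions, since the actual Crunchbase and CBInsights exports are not specified.

```python
# Illustrative sketch of combining two company lists and dropping duplicates.
import pandas as pd

# In practice these frames would be read from the database exports (e.g. pd.read_csv);
# tiny inline frames keep the sketch self-contained.
crunchbase = pd.DataFrame({"company_name": ["Alpha Health", "Beta eHealth", "Gamma Tele"]})
cbinsights = pd.DataFrame({"company_name": ["alpha health", "Delta mHealth"]})

combined = pd.concat([crunchbase, cbinsights], ignore_index=True)

# Normalize names (lowercase, strip non-alphanumerics) before dropping duplicates.
combined["name_key"] = (combined["company_name"].str.lower()
                        .str.replace(r"[^a-z0-9]", "", regex=True))
digital_health = combined.drop_duplicates(subset="name_key").drop(columns="name_key")

print(len(digital_health))   # 4 here; 1,464 after the study's full merge and clean-up
```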
Procedia PDF Downloads 182
664 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How
Authors: Rachel Barker
Abstract:
The exponential velocity of today’s truly knowledge-intensive world has increasingly bombarded organizations with unfathomable challenges. Hence, organizations are introduced to strange lexicons of descriptors belonging to a new paradigm of who, what and how knowledge at individual and organizational levels should be managed. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at the individual level could benefit knowledge use at the collective level to ensure added value. The research problem is that there is a lack of research measuring knowledge sharing through a multi-layered structure of ideas, with philosophical assumptions at its foundation to support presuppositions and commitment, which requires actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measure knowledge sharing in emerging knowledge organizations. The research question arises because, despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations due to a lack of knowledge on who should do it, what should be done and how it should be done. The main premise of this research is based on the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge and where learning becomes the norm. The theoretical constructs were derived from and based on the three components of knowledge management theory, namely the technical, communication and human components, where it is suggested that this knowledge infrastructure could ensure effective management. While it is realised that it might be somewhat problematic to implement and measure all relevant concepts, this paper presents the effect of eight critical success factors (CSFs), namely: organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures and innovation. These CSFs were identified based on a comprehensive literature review of existing research and tested in a new framework adapted from the four perspectives of the balanced scorecard (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa that relies on knowledge sharing to ensure its competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument, improve the quality of the items and correct the wording of issues. Through analysis of the surveys collected, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. Reliability of the instrument was calculated by Cronbach’s α for the two sections of the instrument, on the organizational and individual levels. Construct validity was confirmed using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management. In addition, they realised the importance of consolidating their knowledge assets to create value that is sustainable over time.
Keywords: innovation, intellectual capital, knowledge sharing, performance measures
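As an illustration of the reliability calculation mentioned above, the sketch below computes Cronbach's alpha, α = (k/(k−1))·(1 − Σ item variances / variance of the total score), for one section of the instrument; the item responses are simulated, not the study's data.

```python
# Minimal sketch: Cronbach's alpha for one scale of a survey instrument.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2D array, rows = respondents, columns = items of one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated correlated responses for a 6-item factor, only to make the sketch runnable.
rng = np.random.default_rng(1)
latent = rng.normal(size=(120, 1))
responses = latent + rng.normal(scale=0.7, size=(120, 6))
print(round(cronbach_alpha(responses), 2))
```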
Procedia PDF Downloads 195
663 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking
Authors: Noga Bregman
Abstract:
Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves
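A hedged architectural sketch of the kind of model described above is given below in PyTorch. It is based only on the description in the abstract: the layer sizes are guesses, the residual CNN blocks are omitted for brevity, and the Mamba block is replaced by a simple gated stand-in (a real implementation would use a selective state-space layer from a dedicated Mamba package).

```python
# Hedged sketch, not the EQMamba implementation: CNN encoder -> BiLSTM -> gated block
# (stand-in for Mamba) -> three heads for detection, P picking and S picking.
import torch
import torch.nn as nn

class GatedBlockStandIn(nn.Module):
    """Placeholder for a Mamba block: depthwise conv plus gating over the sequence."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=5, padding=2, groups=dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (batch, time, dim)
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return x + y * torch.sigmoid(self.gate(x))

class EQModelSketch(nn.Module):
    def __init__(self, in_ch=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(             # CNN encoder with max pooling
            nn.Conv1d(in_ch, hidden, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(hidden, hidden, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.mamba = GatedBlockStandIn(hidden)     # applied to the BiLSTM outputs
        # Separate heads: detection, P pick, S pick (probabilities per time step,
        # at the encoder's reduced time resolution).
        self.heads = nn.ModuleDict({k: nn.Conv1d(hidden, 1, 1) for k in ("det", "p", "s")})

    def forward(self, wave):                       # wave: (batch, 3, time)
        feat = self.encoder(wave).transpose(1, 2)
        feat, _ = self.bilstm(feat)
        feat = self.mamba(feat).transpose(1, 2)
        return {k: torch.sigmoid(h(feat)).squeeze(1) for k, h in self.heads.items()}

model = EQModelSketch()
out = model(torch.randn(2, 3, 6000))               # e.g. 60 s of 100 Hz 3-component data
print({k: v.shape for k, v in out.items()})
```

Training such a sketch would combine binary cross-entropy losses for the three heads with task weights, as described above.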
Procedia PDF Downloads 49
662 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction
Authors: B. Godefroy, D. Beladjine, K. Beddiar
Abstract:
In France, construction BIM actors commonly evoke the gains BIM offers for building operation through integration over the life cycle of a building. Standardization at level 7 of development would achieve this stage of the digital model. Building owners include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing complex. In a context of financial constraint, the BIM of exploitation aims to control costs, make long-term investment choices, renew the portfolio and enable environmental standards to be met. It assumes knowledge of the existing buildings, marked by their size and complexity. The information sought must be synthetic and structured; in general, it concerns a real estate complex. We conducted a study with professionals about their concerns and ways to use it, to see how building owners could benefit from this development. To obtain results, bearing in mind the recurring question from project management about the needs of operators, we tested the following stages: 1) Instil a minimal BIM culture in the operator's multidisciplinary teams, then by business line; 2) Learn, through BIM tools, how to adapt their trades to operations; 3) Understand the place and creation of a graphic and technical database management system, and determine the components of its library according to their needs; 4) Identify the cross-functional interventions of its managers by business line (operations, technical, information system, purchasing and legal aspects); 5) Set an internal protocol and define the impact of BIM in their digital strategy. In addition, continuity of management through the integration of construction models in the operation phase raises the question of interoperability in controlling the production of IFC files in the operator’s proprietary format and the export and import processes, a solution rivaled by the traditional method of vectorizing paper plans. Companies that digitize housing complexes, and those in FM, produce IFC files directly according to their needs, without recourse to the construction model; they produce business models for exploitation. They standardize components and equipment that are useful for coding. We observed the consequences resulting from the use of BIM in the property industry and made the following observations: a) the value of data prevails over the graphics; 3D is little used; b) the owner must, through his organization, promote the feedback of technical management information during the design phase; c) the operator's reflection on outsourcing concerns the acquisition of its information system and of these services, taking account of the risks and costs related to their internal or external development. This study allows us to highlight: i) the need for an internal organization of operators prior to responding to construction management; ii) the evolution towards automated methods for creating models dedicated to exploitation, for which a specialization would be required; iii) a review of the communication of project management: since management continuity does not articulate around its building model, it must take into account the environment of the operator and reflect on its scope of action.
Keywords: information system, interoperability, models for exploitation, property industry
Procedia PDF Downloads 144
661 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans
Authors: Jian Wu
Abstract:
For a few years now, some French companies have integrated CSR (corporate social responsibility) criteria in their variable remuneration plans to ‘restore a good working atmosphere’ and ‘preserve the natural environment’. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was produced jointly by ORSE (the French observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach for our study. First, we examine the debate between the ‘orthodox’ point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain ‘invisible hand’ which helps to resolve all problems. When companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. However, the CSR school considers that, as long as the economic system is not perfect, there is no ‘invisible hand’ which can arrange everything in good order. This means that we cannot count on any ‘divine force’ to make corporations responsible towards society; something more needs to be done in addition to firms’ economic and legal obligations. Then, we rely on some financial theories and empirical evidence to examine the soundness of the foundation of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are some tacit ‘social contracts’ between a company and society itself; a firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to ‘legitimize’ their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works carried out in the field. Finally, we are curious to know whether the integration of CSR criteria in variable remuneration plans, which is so far practiced in big companies, should be extended to other ones. After investigation, we note that two groups of firms have the greatest need. The first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second involves companies which are under pressure in terms of returns in order to deal with international competition.
Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory
Procedia PDF Downloads 185
660 Sustainable Production of Pharmaceutical Compounds Using Plant Cell Culture
Authors: David A. Ullisch, Yantree D. Sankar-Thomas, Stefan Wilke, Thomas Selge, Matthias Pump, Thomas Leibold, Kai Schütte, Gilbert Gorr
Abstract:
Plants have been considered a source of natural substances for ages. Secondary metabolites from plants are utilized especially in medical applications but are of growing interest as cosmetic ingredients and in the field of nutraceuticals. However, the supply of compounds from natural harvest can be limited by numerous factors, e.g., endangered species, low product content, climate impacts and cost-intensive extraction. Especially in the pharmaceutical industry, the ability to provide sufficient amounts of product at high quality is an additional requirement which in some cases is difficult to fulfill by plant harvest. Whereas in many cases the complexity of secondary metabolites precludes chemical synthesis on a reasonable commercial basis, plant cells contain the biosynthetic pathway – a natural chemical factory – for a given compound. A promising approach for the sustainable production of natural products is therefore plant cell fermentation (PCF®). A thoroughly accomplished development process comprises the identification of a high-producing cell line, optimization of growth and production conditions, the development of a robust and reliable production process and its scale-up. In order to ensure persistent, long-lasting production, the development of cryopreservation protocols and the generation of working cell banks is another important requirement to be considered. So far, the most prominent example of a PCF® process is the production of the anticancer compound paclitaxel. To demonstrate the power of plant suspension cultures, here we present three case studies: 1) For more than 17 years, Phyton has produced paclitaxel at industrial scale, i.e., up to 75,000 L in scale. At 60 g/kg dw, this fully controlled process, which is applied according to GMP, results in outstandingly high yields. 2) Thapsigargin is another anticancer compound, currently isolated from seeds of Thapsia garganica. Thapsigargin is a powerful cytotoxin – a SERCA inhibitor – and the precursor of the derivative ADT, the key ingredient of the investigational prodrug Mipsagargin (G-202), which is in several clinical trials. Phyton has successfully generated plant cell lines capable of expressing this compound. Here we present data on the screening for high-producing cell lines. 3) The third case study covers ingenol-3-mebutate. This compound is found at very low concentrations in the milky sap of intact plants of the Euphorbiaceae family. Ingenol-3-mebutate is used in Picato®, which is approved against actinic keratosis. The generation of cell lines expressing significant amounts of ingenol-3-mebutate is another example underlining the strength of plant cell culture. The authors gratefully acknowledge Inspyr Therapeutics for funding.
Keywords: Ingenol-3-mebutate, plant cell culture, sustainability, thapsigargin
Procedia PDF Downloads 244
659 Recent Advances in Research on Carotenoids: From Agrofood Production to Health Outcomes
Authors: Antonio J. Melendez-Martinez
Abstract:
Beyond their role as natural colorants, some carotenoids are provitamins A and may be involved in health-promoting biological actions, contributing to reducing the risk of developing non-communicable diseases, including several types of cancer, cardiovascular disease, eye conditions, skin disorders and metabolic disorders. Given the versatility of carotenoids, the COST-funded European network to advance carotenoid research and applications in agro-food and health (EUROCAROTEN) is aimed at promoting health through the diet and increasing well-being. Stakeholders from 38 countries participate in this network, and one of its main objectives is to promote research on little-studied carotenoids. In this contribution, recent advances of our research group and collaborators in the study of two such understudied carotenoids, namely phytoene and phytofluene, the colorless carotenoids, are outlined. The study of these carotenoids is important, as they have been largely neglected despite being present in our diets, fluids, and tissues, and evidence is accumulating that they may be involved in health-promoting actions. More specifically, studies were carried out on their levels in diverse tomato and orange varieties as well as on their potential bioavailability from different dietary sources. Furthermore, the potential effect of these carotenoids on an animal model subjected to oxidative stress was evaluated. The tomatoes were grown in research greenhouses, and some of them were subjected to regulated deficit irrigation, a sustainable agronomic practice. The citrus samples were obtained from an experimental field. The levels of carotenoids were assessed using HPLC according to routine methodologies followed in our lab. Regarding the potential bioavailability (bioaccessibility) studies, different products containing colorless carotenoids, like fruits and juices, were subjected to simulated in vitro digestions, and their incorporation into mixed micelles was assessed. The effect of the carotenoids on oxidative stress was evaluated in the Caenorhabditis elegans model. For that purpose, the worms were subjected to oxidative stress by means of a hydrogen peroxide challenge. In relation to the presence of colorless carotenoids in tomato and orange varieties, it was observed that they are widespread in such products and that there are mutants with very high quantities of them, for instance, the Cara Cara or Pinalate mutant oranges. Concerning their bioaccessibility, it was observed that, in general, phytoene and phytofluene are more bioaccessible than other common dietary carotenoids, probably due to their distinctive chemical structure. Regarding the in vivo antioxidant capacity of phytoene and phytofluene, it was observed that they both exerted antioxidant effects at certain doses. In conclusion, evidence of the importance of phytoene and phytofluene as easily bioavailable, antioxidant dietary carotenoids has been obtained in recent studies from our group, which can be important in the near future for innovation in health promotion through the development of functional foods and related products.
Keywords: carotenoids, health, functional foods, nutrition, phytoene, phytofluene
Procedia PDF Downloads 101
658 Basics of Gamma Ray Burst and Its Afterglow
Authors: Swapnil Kumar Singh
Abstract:
Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission in X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less steep drop-off (it does other things after that, but we ignore them here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta. In many ways, "reverse" shock can be misleading; this shock is still moving outward from the rest frame of the star at relativistic velocity but is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations support this broad picture for afterglows, for example in the spectral energy distribution of the afterglow of a very bright GRB: the bluer light (optical and X-ray) appears to follow a typical synchrotron forward-shock expectation (note that the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). We need more research in GRB and particle physics in order to unfold the mysteries of the afterglow.
Keywords: GRB, synchrotron, X-ray, isotropic energy
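As a quick numerical illustration of the power-law spectrum quoted above, F ∝ E^β, the spectral index can be estimated from fluxes measured at two energies via β = log(F₂/F₁)/log(E₂/E₁); the flux values in the sketch are made up for the example.

```python
# Minimal sketch: spectral index from two-band flux measurements, assuming F ∝ E^beta.
import math

def spectral_index(e1, f1, e2, f2):
    return math.log(f2 / f1) / math.log(e2 / e1)

beta = spectral_index(1.0, 3.0e-11, 10.0, 8.0e-12)   # energies in keV, fluxes are placeholders
print(round(beta, 2))   # about -0.57 for these made-up numbers
```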
Procedia PDF Downloads 87
657 Let’s Work It Out: Effects of a Cooperative Learning Approach on EFL Students’ Motivation and Reading Comprehension
Authors: Shiao-Wei Chu
Abstract:
In order to enhance the ability of their graduates to compete in an increasingly globalized economy, the majority of universities in Taiwan require students to pass Freshman English in order to earn a bachelor's degree. However, many college students show low motivation in English class for several important reasons, including exam-oriented lessons, unengaging classroom activities, a lack of opportunities to use English in authentic contexts, and low levels of confidence in using English. Students’ lack of motivation in English classes is evidenced when students doze off, work on assignments from other classes, or use their phones to chat with others, play video games or watch online shows. Cooperative learning aims to address these problems by encouraging language learners to use the target language to share individual experiences, cooperatively complete tasks, and to build a supportive classroom learning community whereby students take responsibility for one another’s learning. This study includes approximately 50 student participants in a low-proficiency Freshman English class. Each week, participants will work together in groups of between 3 and 4 students to complete various in-class interactive tasks. The instructor will employ a reward system that incentivizes students to be responsible for their own as well as their group mates’ learning. The rewards will be based on points that team members earn through formal assessment scores as well as assessment of their participation in weekly in-class discussions. The instructor will record each team’s week-by-week improvement. Once a team meets or exceeds its own earlier performance, the team’s members will each receive a reward from the instructor. This cooperative learning approach aims to stimulate EFL freshmen’s learning motivation by creating a supportive, low-pressure learning environment that is meant to build learners’ self-confidence. Students will practice all four language skills; however, the present study focuses primarily on the learners’ reading comprehension. Data sources include in-class discussion notes, instructor field notes, one-on-one interviews, students’ midterm and final written reflections, and reading scores. Triangulation is used to determine themes and concerns, and an instructor-colleague analyzes the qualitative data to build interrater reliability. Findings are presented through the researcher’s detailed description. The instructor-researcher has developed this approach in the classroom over several terms, and its apparent success at motivating students inspires this research. The aims of this study are twofold: first, to examine the possible benefits of this cooperative approach in terms of students’ learning outcomes; and second, to help other educators to adapt a more cooperative approach to their classrooms.
Keywords: freshman English, cooperative language learning, EFL learners, learning motivation, zone of proximal development
Procedia PDF Downloads 144656 Multi-Objective Optimization of Assembly Manufacturing Factory Setups
Authors: Andreas Lind, Aitor Iriondo Pascual, Dan Hogberg, Lars Hanson
Abstract:
Factory setup lifecycles are most often described and prepared in CAD environments; the preparation is based on experience and inputs from several cross-disciplinary processes. Early in the factory setup preparation, a so-called block layout is created. The intention is to describe a high-level view of the intended factory setup and to claim area reservations and allocations. Factory areas are then blocked, i.e., targeted to be used for specific intended resources and processes, later redefined with detailed factory setup layouts. Each detailed layout is based on the block layout and inputs from cross-disciplinary preparation processes, such as manufacturing sequence, productivity, workers’ workplace requirements, and resource setup preparation. However, this activity is often not carried out with all variables considered simultaneously, which might entail a risk of sub-optimizing the detailed layout based on manual decisions. Therefore, this work aims to realize a digital method for assembly manufacturing layout planning where productivity, area utilization, and ergonomics can be considered simultaneously in a cross-disciplinary manner. The purpose of the digital method is to support engineers in finding optimized designs of detailed layouts for assembly manufacturing factories, thereby facilitating better decisions regarding setups of future factories. Input datasets are company-specific descriptions of required dimensions for specific area reservations, such as defined dimensions of a worker’s workplace, material façades, aisles, and the sequence to realize the product assembly manufacturing process. To test and iteratively develop the digital method, a demonstrator has been developed with an adaptation of existing software that simulates and proposes optimized designs of detailed layouts. Since the method is to consider productivity, ergonomics, area utilization, and constraints from the automatically generated block layout, a multi-objective optimization approach is utilized. In the demonstrator, the input data are sent to the simulation software Industrial Path Solutions (IPS). Based on the input and Lua scripts, the IPS software generates a block layout in compliance with the company’s defined dimensions of area reservations. Communication is then established between IPS and the software EPP (Ergonomics in Productivity Platform), including intended resource descriptions, the assembly manufacturing process, and manikin (digital human) resources. Using multi-objective optimization approaches, the EPP software then calculates layout proposals that are iteratively sent back to IPS, where they are simulated and rendered, following the rules and regulations defined in the block layout as well as productivity and ergonomics constraints and objectives. The software demonstrator is promising. The software can handle several parameters to optimize the detailed layout simultaneously and can put forward several proposals. It can optimize multiple parameters or weight the parameters to fine-tune the optimal result of the detailed layout. The intention of the demonstrator is to make the preparation between cross-disciplinary silos transparent and achieve a common preparation of the assembly manufacturing factory setup, thereby facilitating better decisions.Keywords: factory setup, multi-objective, optimization, simulation
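The abstract does not disclose the optimization algorithm used inside EPP, so the sketch below only illustrates the general idea of scoring layout candidates against multiple objectives with a weighted sum. The objective names, weights, and candidate values are hypothetical and all quantities are assumed to be normalized to [0, 1].

```python
from dataclasses import dataclass

@dataclass
class LayoutCandidate:
    name: str
    productivity: float      # normalized throughput score (higher is better)
    area_utilization: float  # fraction of blocked area actually used (higher is better)
    ergonomics_risk: float   # aggregated ergonomics risk score (lower is better)

def weighted_score(c: LayoutCandidate, w_prod=0.4, w_area=0.3, w_ergo=0.3) -> float:
    # Ergonomics risk is a penalty, so it enters with a negative sign.
    return w_prod * c.productivity + w_area * c.area_utilization - w_ergo * c.ergonomics_risk

candidates = [
    LayoutCandidate("layout_A", productivity=0.82, area_utilization=0.71, ergonomics_risk=0.30),
    LayoutCandidate("layout_B", productivity=0.76, area_utilization=0.88, ergonomics_risk=0.18),
]
best = max(candidates, key=weighted_score)
print(best.name, round(weighted_score(best), 3))
```

Weighting the objectives, as mentioned in the abstract, corresponds here to adjusting the w_* parameters to fine-tune which trade-off the proposed layout favours.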
Procedia PDF Downloads 147655 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities
Authors: Shaurya Chauhan, Sagar Gupta
Abstract:
Prominent urbanizing centres across the globe like Delhi, Dhaka, or Manila have exhibited that development often faces a challenge in bridging the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of the ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application with its prototypical nature and an inclusive approach with mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves the current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region’s cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools, and references from recognized literature. This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales in design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open-sourcing are identified and validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. Over the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for a continued appraisal and refinement of the framework and urban fabric with time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.Keywords: open source, public participation, urbanization, urban development
Procedia PDF Downloads 149654 Case Report on Anaesthesia for Ruptured Ectopic with Severe Pulmonary Hypertension in a Mute Patient
Authors: Pamela Chia, Tay Yoong Chuan
Abstract:
Introduction: Patients with severe pulmonary hypertension (PH) who require non-cardiac surgery have increased perioperative mortality rates. These patients are prone to cardiorespiratory failure and dysrhythmias, are often anticoagulated, and may have concurrent sepsis and renal insufficiency, all of which add to perioperative morbidity. We present a deaf-mute patient with severe idiopathic PH prepared emergently for laparotomy for a ruptured ectopic. Case Report: A 20-year-old female, 62 kg (BMI 25 kg/m2), with severe idiopathic PH (2D echocardiography: ejection fraction 41%, pulmonary artery systolic pressure (PASP) 105 mmHg, right ventricular strain and hypertrophy) and selective mutism was rushed in for emergency laparotomy after presenting to the emergency department with abdominal pain. The patient was NYHA Class II with a room-air SpO2 of 93-95%. While awaiting lung transplant, the patient was taking warfarin, sildenafil, macitentan, and selexipag for a rising PASP. At presentation, vital signs were: BP 95/63, HR 119, SpO2 88% (room air). With haemoglobin falling from 14 to 10 g/dL, an INR of 2.59 was reversed with prothrombin complex concentrate and vitamin K. ECG revealed right bundle branch block with right ventricular strain, and chest x-ray showed cardiomegaly, a dilated right ventricle and pulmonary arteries, and basal atelectasis. Arterial blood gas showed compensated metabolic acidosis: pH 7.4, pCO2 32, pO2 53, HCO3 20, BE -4, SaO2 88%. The cardiothoracic surgeon concluded there was no role for extracorporeal membrane oxygenation (ECMO). We inserted invasive arterial and central venous lines, with blood transfusion via an 18G cannula, before the patient underwent a midline laparotomy with haemostasis of a ruptured ovarian cyst and evacuation of 2.4 L of clots under general anesthesia and FloTrac cardiac output monitoring. Rapid sequence induction was performed with midazolam/propofol, a remifentanil infusion, and rocuronium. The patient was maintained on desflurane. Blood products and colloids were transfused for a further 1.5 L of blood loss. Postoperatively, the patient was transferred to the intensive care unit and was extubated uneventfully 7 hours later. The patient went home a week later. Discussion: Emergency haemostasis laparotomy in an anticoagulated WHO Class I PH patient awaiting lung transplant, with no ECMO backup, poses tremendous stress on the deaf-mute patient and the anesthesiologist. Balancing hemodynamics and avoiding hypotension while awaiting haemostasis, in the presence of pulmonary arterial dilators and anticoagulation, requires close titration of volatile agents, which decrease RV contractility. We review the contraindicated anesthetic agents (ketamine, N2O), the choice of vasopressors in hypotension to maintain the aortic-right ventricular pressure gradient, and perioperative nitric oxide use. Conclusion: Interdisciplinary communication with a deaf-mute moribund patient and the associated anesthesia considerations pose many rare challenges worth sharing.Keywords: pulmonary hypertension, case report, warfarin reversal, emergency surgery
Procedia PDF Downloads 218653 Effects of Sulphide Mining on AISI 304 Stainless Steel
Authors: Aguasanta Miguel Sarmiento, José Miguel Dávila, María Luisa de la Torre
Abstract:
Acid mine drainage (AMD) is an acidic leachate with high levels of metals and sulphates in solution, which seriously affects the durability and strength of metallic materials used in the construction of structural and mechanical components. This paper presents the results of the evolution over time of the reduction in tensile strength and of the defects in AISI 304 stainless steel in contact with acid mine drainage. For this purpose, a total of 30 bars with a diameter of 8 mm and a length of 14 cm were placed transversely in the course of a stream contaminated by AMD from the sulphide mines of the Iberian Pyrite Belt (SW Spain). This stream has average pH values of 2.6, a potential of 660 mV, and average concentrations of 12 g/L of sulphates, 1.2 g/L of Fe, 191 mg/L of Zn, etc. After every two months of exposure, 6 stainless steel bars were extracted from the acid stream. They were subjected to surface roughness analysis carried out with a Mitutoyo Surftest SJ-210 surface roughness tester. The analysis was carried out at three different points on 5 specimens from each series, and the average reading of each parameter was calculated to ensure the accuracy of the measurements and adequate surface coverage. The arithmetic mean roughness value (Ra), mean roughness depth (Rz), and root mean square roughness (Rq) were measured. Five specimens from each series were statically tensile tested using a universal testing machine (Servosis ME 403, 200 kN). The specimens were clamped at their ends with two grips for cylindrical sections, and the tensile force was applied at a constant speed of 0.5 kN/s, according to the requirements of standard UNE-EN ISO 6892-1:2020. To determine the modulus of elasticity, limits close to 15% and 55% of the maximum load were used, depending on the course of each test. Field emission scanning electron microscopy (FESEM) was used to observe corrosion products and defects generated by exposure to AMD. Energy dispersive X-ray spectrometry (EDS) was used to analyse the chemical composition of the corrosion products formed. For this purpose, small pieces were cut from the resulting specimens, cleaned, and embedded in epoxy resin. The results show that after only 5 months of exposure of AISI 304 stainless steel to the mining environment, the surface roughness increases significantly, with a mean roughness depth almost 6 times greater than the initial value. Cracks are observed on the surface of the material and increase in size with exposure time. A large number of grains with a composition of more than 57% Pb and 16% Sn can be observed inside these cracks. Tensile tests show a reduction in the strength of this material after only two months of exposure. The results show the serious problems that would result from the use of this material for mechanical components in a sulphide mining environment, not only because of the significant reduction in the lifetime of such components, but also because of the implications for human safety.Keywords: acid mine drainage, corrosion, mechanical properties, stainless steel
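As an illustration of the roughness parameters reported above, the sketch below computes Ra, Rq, and a simplified Rz from a sampled surface profile. The profile values are synthetic, and the Rz estimate uses a single sampling length rather than the five-segment average defined in the ISO standard, so it only approximates what the Surftest instrument reports.

```python
import numpy as np

# Hypothetical surface profile heights (micrometres) sampled along the bar surface.
z = np.array([0.8, -1.2, 2.1, -0.6, 1.5, -2.3, 0.4, 1.9, -1.7, 0.2])

z_centered = z - z.mean()                     # deviations from the mean line
Ra = np.mean(np.abs(z_centered))              # arithmetic mean roughness
Rq = np.sqrt(np.mean(z_centered**2))          # root mean square roughness
Rz = z_centered.max() - z_centered.min()      # peak-to-valley over one sampling length

print(f"Ra = {Ra:.2f} um, Rq = {Rq:.2f} um, Rz (single segment) = {Rz:.2f} um")
```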
Procedia PDF Downloads 14652 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now being used widely to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, as well as dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models; that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover the most basic binary IRT model, known as the 1-parameter logistic (1-PL) model, dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model. Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items, and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. Real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate simulated data and analyses will be available upon request to allow for replication of results.Keywords: instrument development, item response theory, latent trait theory, psychometrics
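Since the abstract spans models from the 1-PL to the 4-PL, a brief sketch of the 4-PL item response function may help orient readers. The study itself uses the IRT procedure in SAS 9.4; the Python below is purely illustrative, and the parameter values are hypothetical. The simpler models fall out as special cases, as noted in the comments.

```python
import numpy as np

def p_correct(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """4-parameter logistic (4-PL) item response function.

    theta : latent trait level
    a : discrimination, b : difficulty,
    c : lower asymptote (guessing), d : upper asymptote (slipping).
    Setting d = 1 gives the 3-PL, c = 0 and d = 1 the 2-PL, and additionally
    constraining a to be equal across items gives the 1-PL model.
    """
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(p_correct(theta, a=1.2, b=0.5, c=0.15, d=0.95))
```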
Procedia PDF Downloads 356651 The Expansion of Buddhism from India to Nepal Himalaya and Beyond
Authors: Umesh Regmi
Abstract:
This paper explores the geographical expansion of Buddhism from India to the Himalayan regions of Nepal, Tibet, India, and Bhutan in chronological historical sequence. The Buddhism practiced in Tibet is the Mahayana-Vajrayana form shaped by the Indian Mahasiddhas, practitioners of the highest forms of tantra and meditation. Vajrayana Buddhism is rooted in esoteric practices incorporating the teachings of the Buddha, mantras, dharanis, rituals, and sadhana for attaining enlightenment. This form of Buddhism spread from India to Nepal after the 5th century AD and to Tibet after the 7th century AD, and made a return journey to the Himalayan region of Nepal, India, and Bhutan after the 8th century. The first diffusion of this form of Buddhism from India to Nepal and Tibet is partially attested through Buddhist texts and the archaeological record of monasteries, and at times relies on mythological traditions. The second diffusion of Buddhism in Tibet was institutionalized through the textual translations and interpretations of Indian Buddhist masters and their Tibetan disciples and the establishment of different monasteries in various parts of Tibet, later resulting in different schools and their traditions: Nyingma, Kagyu, Sakya, Gelug, and their sub-schools. The first return journey of Buddhism from Tibet to the Himalayan region of Nepal, India, and Bhutan in the 8th century is mythologically recorded in local legends of the arrival of Padmasambhava, and the second journey, in the 11th century and afterward, flourished through many Indian masters whose traditions have been practiced continuously to date. This return journey of Tibetan Buddhism has intensified since 1959 with the Chinese occupation of Tibet, resulting in Tibetan Buddhist masters living in exile in major locations like Kathmandu, Dharmasala, Dehradun, Sikkim, Kalimpong, and beyond. A historic-cultural-critical methodology for recognizing the qualities of cultural expressions is used to present the Buddhist practices of the Himalayan region, explaining concepts such as ri (mountains as spiritual symbols), yul-lha (village deities), dhar-lha (the spiritual concept of mountain passes), dharchhog-lungdhar (prayer flags), rig-sum gonpo (small stupas), Chenresig, and asura (demi-gods). Tibetan Buddhist history has preserved important textual and practical aspects of Vajrayana Buddhism through its arrival, advent, and development, including periods of rise and fall. Currently, Tibetan Buddhism has greatly influenced contemporary Buddhist practices around the world. Exploratory research conducted over seven years of field visits in the Himalayan regions of Nepal, India, and Bhutan demonstrates that Buddhism in the Himalayan region is a return journey from Tibet and has lately been popularized globally, after 1959, by major monasteries and their Buddhist masters, lamas, nuns, and other practitioners, who have contributed in different periods of time.Keywords: Buddhism, expansion, Himalayan region, India, Nepal, Bhutan, return, Tibet, Vajrayana Buddhism
Procedia PDF Downloads 107650 Synthesis and Characterisations of Cordierite Bonded Porous SiC Ceramics by Sol Infiltration Technique
Authors: Sanchita Baitalik, Nijhuma Kayal, Omprakash Chakrabarti
Abstract:
Recently, SiC ceramics have been a focus of interest in the field of porous materials due to their unique combination of properties, and hence they are considered ideal candidates for catalyst supports, thermal insulators, high-temperature structural materials, hot-gas particulate separation systems, etc., in different industrial processes. Several processing methods are used for the fabrication of porous SiC at low temperatures, but all of them are associated with disadvantages; processing of porous SiC ceramics at low temperatures therefore remains challenging. In this respect, the incorporation of secondary bond-phase additives by an infiltration technique should result in a homogeneous distribution of the bond phase in the final ceramics. The present work aims to synthesize cordierite (2MgO.2Al2O3.5SiO2)-bonded porous SiC ceramics by incorporating a sol-gel bond-phase precursor into powder compacts of SiC and heat treating the infiltrated body at 1400 °C. The primary aim was to study the effect of infiltrating a cordierite precursor sol into porous SiC powder compacts, prepared with pore formers of different particle sizes, on the porosity, pore size, microstructure, and mechanical properties of the porous SiC ceramics. The cordierite sol was prepared by mixing a solution of magnesium nitrate hexahydrate and aluminium nitrate nonahydrate in a 2:4 molar ratio in ethanol with another solution containing tetraethyl orthosilicate and ethanol in a 1:3 molar ratio, followed by stirring for several hours. Powders of SiC (α-SiC; d50 = 22.5 μm) and 10 wt.% polymer microbeads of two sizes, 8 and 50 µm, as the pore former were mixed in a suitable liquid medium, dried, and pressed into bars (50×20×16 mm3) at 23 MPa pressure. The well-dried bars were heat treated at 1100 °C for 4 h, with a hold at 750 °C for 2 h to remove the pore former. The bars were then evacuated in a vacuum chamber down to 0.3 mm Hg for 2 h and infiltrated with the cordierite precursor sol. The infiltrated samples were dried, and the infiltration process was repeated until the weight gain became constant. Finally, the infiltrated samples were sintered at 1400 °C to prepare cordierite-bonded porous SiC ceramics. Porous ceramics prepared with the 8 and 50 µm microbeads exhibited lower oxidation degrees (7.8 and 4.8%, respectively) than the sample prepared without microbeads (23%). Depending on the size of the pore former, the porosity of the final ceramic varied in the range of 36 to 40 vol.%, with the flexural strength varying from 33.7 to 24.6 MPa. XRD analysis showed the major crystalline phases of the ceramics to be SiC, SiO2, and cordierite. Two forms of cordierite, α-(hexagonal) and µ-(cubic), were detected by the XRD analysis. The SiC particles were observed to be bonded both by cristobalite with a fish-scale morphology and by cordierite with a rod-shaped morphology, thereby forming a porous network. The material and mechanical properties of the cordierite-bonded porous SiC ceramics are adequate to warrant further studies such as thermal shock and corrosion resistance.Keywords: cordierite, infiltration technique, porous ceramics, sol-gel
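The abstract reports flexural strengths of 24.6-33.7 MPa for the bar specimens but does not give the test geometry. Assuming a standard three-point bend configuration (the span, cross-section, and failure load below are chosen purely for illustration and are not taken from the study), the strength follows from the familiar relation σ = 3FL/(2bd²):

```python
def flexural_strength_3pt(load_n: float, span_mm: float, width_mm: float, height_mm: float) -> float:
    """Three-point bend flexural strength, sigma = 3 F L / (2 b d^2), returned in MPa.

    load_n: failure load in N; span_mm: support span;
    width_mm, height_mm: bar cross-section dimensions.
    """
    return 3.0 * load_n * span_mm / (2.0 * width_mm * height_mm**2)

# Hypothetical example: a 20 mm x 16 mm bar failing at 2.5 kN over a 40 mm span.
print(f"{flexural_strength_3pt(2500, 40, 20, 16):.1f} MPa")
```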
Procedia PDF Downloads 270649 Evaluation of Cyclic Steam Injection in Multi-Layered Heterogeneous Reservoir
Authors: Worawanna Panyakotkaew, Falan Srisuriyachai
Abstract:
Cyclic steam injection (CSI) is a thermal recovery technique performed by periodically injecting heated steam into a heavy oil reservoir. Oil viscosity is substantially reduced by the heat transferred from the steam, and together with gas pressurization, oil recovery is greatly improved. Nevertheless, predicting the effectiveness of the process is difficult when the reservoir contains a degree of heterogeneity. Therefore, heterogeneity, together with the reservoir properties of interest, must be evaluated prior to field implementation. In this study, a thermal reservoir simulation program is utilized. The reservoir model is first constructed as multi-layered with a coarsening-upward sequence: the highest permeability is located in the top layer, with permeability values descending in the lower layers. Steam is injected from two wells located diagonally in a quarter five-spot pattern. Heavy oil is produced by adjusting operating parameters, including soaking period and steam quality. After selecting the conditions for both parameters that yield the highest oil recovery, the effects of the degree of heterogeneity (represented by the Lorenz coefficient), vertical permeability, and permeability sequence are evaluated. Surprisingly, the simulation results show that reservoir heterogeneity benefits the CSI technique. Increasing reservoir heterogeneity worsens the permeability distribution, and the high permeability contrast results in steam intruding into the upper layers. Once the temperature cools down during the back-flow period, condensed water percolates downward, resulting in high oil saturation in the top layers. Gas saturation appears at the top after a while, allowing better propagation of steam in the following cycle due to the high compressibility of gas. A large steam chamber therefore covers most of the area in the upper zone. Oil recovery reaches approximately 60%, which is about 20% higher than in the homogeneous reservoir case. Vertical permeability also benefits CSI: expansion of the steam chamber from the upper to the lower zone occurs within a shorter time. For the fining-upward permeability sequence, where the permeability values are reversed from the previous case, steam does not override to the top layers due to their low permeability; instead, the steam chamber propagates in the middle of the reservoir, where the permeability is high enough. The rate of oil recovery is slower than in the coarsening-upward case because of the lower permeability at the location where the steam chamber propagates. Even though the CSI technique produces oil quite slowly in the early cycles, once the steam chamber is formed deep in the reservoir, heat is delivered to the formation quickly in the later cycles. Since reservoir heterogeneity is unavoidable, a thorough understanding of its effect is required. This study shows that CSI might be one of the compatible solutions for highly heterogeneous reservoirs. This competitive technique also shows a benefit in terms of heat consumption, as steam is injected periodically.Keywords: cyclic steam injection, heterogeneity, reservoir simulation, thermal recovery
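Since the degree of heterogeneity is represented here by the Lorenz coefficient, a small sketch of how that coefficient is commonly computed from layer permeability-thickness and porosity-thickness data may be useful. The layer values below are hypothetical, and the trapezoidal-area formulation is one common convention rather than the exact procedure used in the study.

```python
import numpy as np

# Hypothetical layer data: permeability (mD), porosity (fraction), thickness (m).
k   = np.array([800.0, 350.0, 120.0, 40.0])
phi = np.array([0.28, 0.26, 0.24, 0.22])
h   = np.array([5.0, 5.0, 5.0, 5.0])

# Order layers by decreasing k/phi, then build cumulative flow vs storage capacity.
order = np.argsort(-(k / phi))
flow = np.cumsum((k * h)[order]) / np.sum(k * h)         # fraction of total flow capacity
storage = np.cumsum((phi * h)[order]) / np.sum(phi * h)  # fraction of total storage capacity
flow = np.insert(flow, 0, 0.0)
storage = np.insert(storage, 0, 0.0)

# Lorenz coefficient: twice the area between the curve and the 45-degree line.
area_under_curve = np.trapz(flow, storage)
lorenz = 2.0 * (area_under_curve - 0.5)
print(f"Lorenz coefficient = {lorenz:.2f}")   # 0 = homogeneous, approaching 1 = highly heterogeneous
```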
Procedia PDF Downloads 457648 Experiment on Artificial Recharge of Groundwater Implemented Project: Effect on the Infiltration Velocity by Vegetation Mulch
Authors: Cheh-Shyh Ting, Jiin-Liang Lin
Abstract:
This study was conducted at the Wanglung Farm in Pingtung County to test the influence of vegetation on groundwater seepage in an implemented artificial groundwater recharge project. The study was divided into three phases. The first phase, conducted under natural recharge with the local climate and growing conditions, observed the naturally occurring vegetation species. The original plants were flooded, and after 60 days it was observed that only goosegrass (Eleusine indica) and black heart (Polygonum lapathifolium Linn.) remained. Direct infiltration tests were carried out, and the effect of the vegetation on the infiltration velocity of the recharge pool was calculated. The second phase was an indoor test. Bahia grass and wild amaranth were selected for their root systems. After growth, the distribution of the different grass roots was observed in order to compare the permeability coefficients calculated from the infiltrated volumes and to explore the relationship between root density and groundwater recharge efficiency. The third phase was root tomography analysis, a further observation of the development of the plant roots using computed tomography. Computed tomography (CT) is a diagnostic imaging examination normally used in the medical field. In the first phase of the feasibility study, most non-aquatic plants wilted and died within seven days; after seven days, the remaining plants were used for the infiltration analysis. Results showed that over the eight-hour infiltration test, Eleusine indica stems averaged 0.466 m/day and wild amaranth averaged 0.014 m/day. The second phase of the experiment was conducted on the remains of the plants a week after they had died and rotted, and the infiltration experiment was performed under these conditions. At the end of the eight-hour infiltration test, Eleusine indica stems averaged 0.033 m/day and wild amaranth averaged 0.098 m/day. Non-aquatic plants died within two weeks, and their rotted remains clogged the pores of the bottom soil particles, obstructing infiltration in the recharge pool. The results of this test showed an average infiltration velocity of 0.0229 m/day for Eleusine indica stems and 0.0117 m/day for wild amaranth. Since the rotted roots of the plants blocked the pores of the soil in the recharge pool, they obstructed the artificial infiltration pond and had an immediate impact on recharge efficiency. In order to observe the development of the plant roots, the third phase used computed tomography imaging. An iodine developer was injected into the black heart plant, allowing its cross-sectional images to be shown on CT and used to observe root development.Keywords: artificial recharge of groundwater, computed tomography, infiltration velocity, vegetation root system
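The infiltration velocities above are reported in m/day. The sketch below shows one simple way such a figure can be derived from a ponded test, dividing the infiltrated water volume by the wetted area and the elapsed time; the pond area, volume, and duration are hypothetical and are not taken from the study.

```python
def infiltration_velocity_m_per_day(volume_m3: float, area_m2: float, hours: float) -> float:
    """Average infiltration velocity = infiltrated volume / (wetted area * time), in m/day."""
    return volume_m3 / area_m2 / (hours / 24.0)

# Hypothetical 8-hour test on a 12 m^2 test pond that absorbed 1.86 m^3 of water.
v = infiltration_velocity_m_per_day(volume_m3=1.86, area_m2=12.0, hours=8.0)
print(f"infiltration velocity ~ {v:.3f} m/day")
```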
Procedia PDF Downloads 308647 Dynamic Exergy Analysis for the Built Environment: Fixed or Variable Reference State
Authors: Valentina Bonetti
Abstract:
Exergy analysis helps optimize processes in various sectors. In the built environment, a second-law approach can enhance potential interactions between constructions and their surrounding environment and minimise fossil fuel requirements. Despite the research done in this field in the last decades, practical applications are rare, and few integrated exergy simulators are available for building designers. Undoubtedly, an obstacle to the diffusion of exergy methods is the strong dependency of the results on the definition of the 'reference state', a highly controversial issue. Since exergy is the combination of energy and entropy by means of a reference state (also called "reference environment" or "dead state"), the reference choice is crucial. Compared to other classical applications, buildings present two challenging elements: they operate very near the reference state, which means that small variations have relevant impacts, and their behaviour is dynamic in nature. Not surprisingly, then, the reference state definition for the built environment is still debated, especially in the case of dynamic assessments. Among the several characteristics that need to be defined, a crucial decision for a dynamic analysis is between a fixed reference environment (constant in time) and a variable state whose fluctuations follow the local climate. Even though the latter selection prevails in research and is recommended by recent and widely diffused guidelines, the fixed reference has been analytically demonstrated to be the only choice that defines exergy as a proper function of the state in a fluctuating environment. This study investigates the impact of that crucial choice: fixed or variable reference. The basic element of the building energy chain, the envelope, is chosen as the object of investigation, as it is common to any building analysis. Exergy fluctuations in the building envelope of a case study (a typical house located in a Mediterranean climate) are compared for each time step of a significant summer day, when the building behaviour is highly dynamic. Exergy efficiencies and fluxes are not familiar numbers, and thus the easier-to-imagine concept of exergy storage is used to summarize the results. Trends obtained with a fixed and a variable reference (outside air) are compared, and their meaning is discussed in the light of the underpinning dynamic energy analysis. As a conclusion, a fixed reference state is considered the best choice for dynamic exergy analysis. Even if the fixed reference is generally only contemplated as a simpler selection, and the variable state is often stated to be more accurate without explicit justification, the analytical considerations supporting the adoption of a fixed reference are confirmed by the usefulness and clarity of interpretation of its results. Further discussion is needed to address the conflict between the evidence supporting a fixed reference state and the wide adoption of a fluctuating one. A more robust theoretical framework, including selection criteria for the reference state in dynamic simulations, could push the development of integrated dynamic tools and thus spread exergy analysis for the built environment across common practice.Keywords: exergy, reference state, dynamic, building
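For reference, the exergy of a closed system relative to a reference (dead) state at temperature T0 and pressure p0 is commonly written as below. This standard textbook form is added here only for clarity and is not quoted from the study.

```latex
% Exergy of a closed system relative to a dead state (T_0, p_0):
X = (U - U_0) + p_0 (V - V_0) - T_0 (S - S_0)
% With a variable reference, T_0 = T_0(t) and p_0 = p_0(t) fluctuate with the
% local climate, and X is then no longer a proper function of the system state alone.
```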
Procedia PDF Downloads 225646 Synthesis and Characterization of pH-Sensitive Graphene Quantum Dot-Loaded Metal-Organic Frameworks for Targeted Drug Delivery and Fluorescent Imaging
Authors: Sayed Maeen Badshah, Kuen-Song Lin, Abrar Hussain, Jamshid Hussain
Abstract:
Liver cancer is a significant global health issue, ranking fifth in incidence and second in mortality. Effective therapeutic strategies are urgently needed to combat this disease, particularly in regions with high prevalence. This study focuses on developing and characterizing fluorescent organometallic frameworks as distinct drug delivery carriers with potential applications in both the treatment and biological imaging of liver cancer. This work introduces two distinct organometallic frameworks: the cake-shaped GQD@NH₂-MIL-125 and the cross-shaped M8U6/FM8U6. The GQD@NH₂-MIL-125 framework is particularly noteworthy for its high fluorescence, making it an effective tool for biological imaging. X-ray diffraction (XRD) analysis revealed specific diffraction peaks at 6.81ᵒ (011), 9.76ᵒ (002), and 11.69ᵒ (121), with an additional significant peak at 26ᵒ (2θ), corresponding to the carbon material. Morphological analysis using Field Emission Scanning Electron Microscopy (FE-SEM), and Transmission Electron Microscopy (TEM) demonstrated that the framework has a front particle size of 680 nm and a side particle size of 55±5 nm. High-resolution TEM (HR-TEM) images confirmed the successful attachment of graphene quantum dots (GQDs) onto the NH2-MIL-125 framework. Fourier-Transform Infrared (FT-IR) spectroscopy identified crucial functional groups within the GQD@NH₂-MIL-125 structure, including O-Ti-O metal bonds within the 500 to 700 cm⁻¹ range, and N-H and C-N bonds at 1,646 cm⁻¹ and 1,164 cm⁻¹, respectively. BET isotherm analysis further revealed a specific surface area of 338.1 m²/g and an average pore size of 46.86 nm. This framework also demonstrated UV-active properties, as identified by UV-visible light spectra, and its photoluminescence (PL) spectra showed an emission peak around 430 nm when excited at 350 nm, indicating its potential as a fluorescent drug delivery carrier. In parallel, the cross-shaped M8U6/FM8U6 frameworks were synthesized and characterized using X-ray diffraction, which identified distinct peaks at 2θ = 7.4 (111), 8.5 (200), 9.2 (002), 10.8 (002), 12.1 (220), 16.7 (103), and 17.1 (400). FE-SEM, HR-TEM, and TEM analyses revealed particle sizes of 350±50 nm for M8U6 and 200±50 nm for FM8U6. These frameworks, synthesized from terephthalic acid (H₂BDC), displayed notable vibrational bonds, such as C=O at 1,650 cm⁻¹, Fe-O in MIL-88 at 520 cm⁻¹, and Zr-O in UIO-66 at 482 cm⁻¹. BET analysis showed specific surface areas of 740.1 m²/g with a pore size of 22.92 nm for M8U6 and 493.9 m²/g with a pore size of 35.44 nm for FM8U6. Extended X-ray Absorption Fine Structure (EXAFS) spectra confirmed the stability of Ti-O bonds in the frameworks, with bond lengths of 2.026 Å for MIL-125, 1.962 Å for NH₂-MIL-125, and 1.817 Å for GQD@NH₂-MIL-125. These findings highlight the potential of these organometallic frameworks for enhanced liver cancer therapy through precise drug delivery and imaging, representing a significant advancement in nanomaterial applications in biomedical science.Keywords: liver cancer cells, metal organic frameworks, Doxorubicin (DOX), drug release.
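The XRD peak positions quoted above can be converted to lattice d-spacings with Bragg's law. The short sketch below assumes Cu Kα radiation (λ ≈ 1.5406 Å), which is not stated in the abstract, so the wavelength is an assumption made for illustration only.

```python
import math

CU_KALPHA_ANGSTROM = 1.5406  # assumed Cu K-alpha wavelength; not specified in the abstract

def d_spacing(two_theta_deg: float, wavelength: float = CU_KALPHA_ANGSTROM) -> float:
    """Bragg's law (n = 1): d = lambda / (2 sin(theta)), with theta = two_theta / 2."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

for two_theta in (6.81, 9.76, 11.69, 26.0):   # reported GQD@NH2-MIL-125 peaks (degrees 2-theta)
    print(f"2theta = {two_theta:5.2f} deg -> d ~ {d_spacing(two_theta):.2f} A")
```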
Procedia PDF Downloads 5645 Development of 3D Printed Natural Fiber Reinforced Composite Scaffolds for Maxillofacial Reconstruction
Authors: Sri Sai Ramya Bojedla, Falguni Pati
Abstract:
Nature provides the best of solutions to humans. One such incredible gift to regenerative medicine is silk. The literature documents a long-standing appreciation for silk owing to its remarkable physical and biological assets. Its bioactive nature, unique mechanical strength, and processing flexibility invite further exploration for clinical application for the welfare of mankind. In this study, Antheraea mylitta and Bombyx mori silk fibroin microfibers are developed for the first time by two economical and straightforward steps, degumming and hydrolysis, and a bioactive composite is manufactured by mixing the silk fibroin microfibers at various concentrations with polycaprolactone (PCL), a biocompatible, aliphatic, semi-crystalline synthetic polymer. Reconstructive surgery in any part of the body except the maxillofacial region deals primarily with restoring function; in facial reconstruction, however, addressing both aesthetics and function is of utmost importance, as the face plays a critical role in the psychological and social well-being of the patient. The main concern in developing adequate bone graft substitutes or scaffolds is the noteworthy variation in each patient's bone anatomy. Additionally, the anatomical shape and size will vary based on the type of defect. The advent of additive manufacturing (AM), or 3D printing, in bone tissue engineering has facilitated overcoming many of the constraints of conventional fabrication techniques. The patient's acquired CT data are converted into a stereolithography (STL) file, which the 3D printer then uses to create a 3D scaffold structure in an interconnected, layer-by-layer fashion. This study aims to address the limitations of currently available materials and fabrication technologies and to develop a customized biomaterial implant via 3D printing technology to reconstruct the complex form, function, and aesthetics of the facial anatomy. These composite scaffolds underwent structural and mechanical characterization. Atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM) images showed uniform dispersion of the silk fibroin microfibers in the PCL matrix. With the addition of silk, the compressive strength of the hybrid scaffolds improves. The scaffolds with Antheraea mylitta silk revealed a higher compressive modulus than those with Bombyx mori silk. The above results for the PCL-silk scaffolds strongly recommend their utilization in bone regenerative applications. Successful completion of this research will provide a great weapon in the maxillofacial reconstructive armamentarium.Keywords: compressive modulus, 3d printing, maxillofacial reconstruction, natural fiber reinforced composites, silk fibroin microfibers
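The compressive modulus comparison above comes from mechanical testing of the printed scaffolds. As a rough illustration (not the authors' protocol), the modulus can be estimated as the slope of a linear fit over the initial elastic region of a compressive stress-strain curve, as sketched below with hypothetical data.

```python
import numpy as np

# Hypothetical compression data for a printed scaffold: strain (-) and stress (MPa).
strain = np.array([0.000, 0.005, 0.010, 0.015, 0.020, 0.025, 0.030])
stress = np.array([0.00, 0.12, 0.25, 0.37, 0.49, 0.58, 0.64])   # roughly linear, then yielding

# Fit only the initial, approximately linear elastic region (here the first 5 points).
elastic = slice(0, 5)
modulus_mpa, _ = np.polyfit(strain[elastic], stress[elastic], 1)
print(f"compressive modulus ~ {modulus_mpa:.0f} MPa")
```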
Procedia PDF Downloads 193