Search results for: molecular dynamic simulations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7253

383 Formulation of Lipid-Based Tableted Spray-Congealed Microparticles for Zero Order Release of Vildagliptin

Authors: Hend Ben Tkhayat , Khaled Al Zahabi, Husam Younes

Abstract:

Introduction: Vildagliptin (VG), a dipeptidyl peptidase-4 (DPP-4) inhibitor, is a proven active agent for the treatment of type 2 diabetes. VG works by enhancing and prolonging the activity of incretins, which improves insulin secretion and decreases glucagon release, thereby lowering blood glucose levels. It is usually used in combination with other drug classes, such as insulin sensitizers or metformin. VG is currently marketed only as an immediate-release tablet administered twice daily. In this project, we aimed to formulate extended-release tableted lipid microparticles of VG with a zero-order release profile that could be administered once daily, ensuring the patient’s convenience. Method: The spray-congealing technique was used to prepare the VG microparticles. Compritol® was heated to 10 °C above its melting point, and VG was dispersed in the molten carrier using a homogenizer (IKA T25, USA) set at 13000 rpm. The VG dispersion in molten Compritol® was added dropwise to molten Gelucire® 50/13 and PEG (400, 6000, and 35000) in different ratios under manual stirring. The molten mixture was homogenized, and the required amount of Carbomer® was added. The melt was pumped through the two-fluid nozzle of a Buchi® Spray-Congealer (Buchi B-290, Switzerland) using a pump drive (Masterflex, USA) connected to silicone tubing wrapped with silicone heating tape held at the same temperature as the pumped mixture. The physicochemical properties of the produced VG-loaded microparticles were characterized using a Mastersizer, Scanning Electron Microscopy (SEM), Differential Scanning Calorimetry (DSC), and X‐Ray Diffraction (XRD). The VG microparticles were then pressed into tablets using a single-punch tablet machine (YDP-12, Minhua Pharmaceutical Co., China), and an in vitro dissolution study was performed using an Agilent Dissolution Tester (Agilent, USA). The dissolution test was carried out at 37 ± 0.5 °C for 24 hours in three different dissolution media and time phases.
The quantitative analysis of VG in the samples was performed using a validated High-Performance Liquid Chromatography with UV detection (HPLC-UV) method. Results: The microparticles were spherical, with a narrow size distribution and smooth surfaces. DSC and XRD analyses confirmed that the crystallinity of VG was lost after incorporation into the amorphous polymers. The total yields of the different formulas were between 70% and 80%. The VG content in the microparticles was found to be between 99% and 106%. The in vitro dissolution study showed that VG was released from the tableted particles in a controlled fashion. Adjusting the hydrophilic/hydrophobic ratio of the excipients, their concentrations, and the molecular weight of the carriers resulted in tablets with zero-order kinetics. Gelucire® 50/13, a hydrophilic polymer, gave a time-dependent profile with a pronounced burst effect, which was reduced by adding Compritol® as a lipophilic carrier to retard the release of the highly water-soluble VG. PEG (400, 6000, and 35000) were used for their gelling effect, which led to a constant delivery rate and a zero-order profile. Conclusion: Tableted spray-congealed lipid microparticles for extended release of VG were successfully prepared, and a zero-order profile was achieved.
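A zero-order profile means the cumulative amount released grows linearly with time, Q(t) = k0·t. As a minimal illustration (the dissolution data below are hypothetical, not the study's measurements), the rate constant can be fitted by least squares:

```python
# Least-squares fit of a zero-order release model Q(t) = k0 * t
# to hypothetical cumulative-dissolution data (illustrative only,
# not the measured values from the study).

def fit_zero_order(times_h, released_pct):
    """Return the zero-order rate constant k0 (%/h) minimizing
    sum((Q - k0*t)^2), i.e. k0 = sum(t*Q) / sum(t^2)."""
    num = sum(t * q for t, q in zip(times_h, released_pct))
    den = sum(t * t for t in times_h)
    return num / den

times = [2, 4, 8, 12, 24]                  # sampling times (h)
released = [8.1, 16.5, 33.0, 49.8, 99.0]   # cumulative % released

k0 = fit_zero_order(times, released)
print(f"k0 = {k0:.2f} %/h")  # → k0 = 4.13 %/h
```

A dissolution profile whose points fall close to this fitted line (high R² against Q = k0·t) is what "zero-order kinetics" refers to in the abstract.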

Keywords: vildagliptin, spray congealing, microparticles, controlled release

Procedia PDF Downloads 107
382 Criticism and Theorizing of Architecture and Urbanism in the Creativity Cinematographic Film

Authors: Wafeek Mohamed Ibrahim Mohamed

Abstract:

In the era of globalization, the cinematographic camera plays an important role in monitoring and documenting the architectural and urban built environment. Through the screen, on which the image appears at its best and has now reached its third dimension, the film carries the audience back in time. The camera records the shape of the city with its paths, alleys, buildings, and architectural styles. Cinematic scenes have preserved architectural styles that remain a record of the city's history even where the buildings themselves have disappeared, as happened to the ‘Boulak Bridge’ in Cairo, built by Eiffel: although it was demolished, it can still be seen in the films ‘Usta Hassan’ and ‘A Crime in the Quiet Neighborhood’. The purpose of this research is to arrive at a critical view of the criticism and theorizing of architecture and urbanism in the cinematographic film, and of their relationship to, and reflection in, the audience's understanding of the built environment with its problems and hardships. It is an attempt to study the architecture and urbanism of the built environment in the cinematographic film and to link them to a realistic view of the governing concepts behind them. The aesthetic thought of our traditional environment, in a psychological and anthropological framework, derives from the cinematic treatment of the architecture and urbanism of place and the dynamics of space. Architectural space is the foundation stone of the cinematic story and the main background of its events, drawing the audience into a romantic journey through the city via its symbolic image of spaces, alleys, and streets.
This is done through a chronological review of architecture and urbanism in Egyptian cinematographic films from the 1930s onward [from the film ‘Bab El Hadid’ to ‘Saidi at the American University’]. The research concludes that it is important to study films that deal with our societies and their architectural and urban concerns, whether traditional or contemporary, and their crises (such as the housing crisis in the film ‘Krakoun in the Street’), in order to examine the built environment and its dynamic architectural spaces through a modernist lens. Cinema should also be used as an important medium for spreading ideas and for documenting and monitoring current changes in the built environment through its dramas, comedies, and other genres. Cinema is a mirror of society and its built environment across the epochs; it forms a unique bond with the audience (public opinion) through the sense of space and the mental image it creates of the city and the built environment.

Keywords: architectural and urbanism, cinematographic architectural, film, space in the film, media

Procedia PDF Downloads 214
381 Identification and Characterization of Small Peptides Encoded by Small Open Reading Frames using Mass Spectrometry and Bioinformatics

Authors: Su Mon Saw, Joe Rothnagel

Abstract:

Short open reading frames (sORFs) located in the 5’UTRs of mRNAs are known as upstream ORFs (uORFs). Characterization of uORF-encoded peptides (uPEPs), a subset of sORF-encoded peptides (sPEPs), and of their translational regulation leads to a better understanding of the causes of genetic disease, of proteome complexity, and of potential treatments. Expression of uORF products within the cellular proteome can be detected by LC-MS/MS. Demonstrating that a uORF is translated into a uPEP, and identifying that uPEP, allows its characterization: structure, function, subcellular localization, evolutionary maintenance (conservation in humans and other species), and abundance in cells. It is hypothesized that a subset of sORFs is translatable and that their encoded sPEPs are functional and endogenously expressed, contributing to the complexity of the eukaryotic cellular proteome. This project aimed to investigate whether sORFs encode functional peptides; liquid chromatography-mass spectrometry (LC-MS) and bioinformatics were employed. Because sPEPs are likely to be of low abundance and small size, efficient enrichment strategies that concentrate small proteins and deplete the sub-proteome of large, abundant proteins are crucial for identifying them. Low-molecular-weight proteins were extracted by SDS-PAGE from Human Embryonic Kidney (HEK293) cells and by Strong Cation Exchange Chromatography (SCX) from the HEK293 secretome. Extracted proteins were digested with trypsin into peptides, which were analyzed by LC-MS/MS. The MS/MS data were searched against Swiss-Prot using MASCOT version 2.4 to filter out known proteins, and all unmatched spectra were re-searched against the human RefSeq database. ProteinPilot v5.0.1 was used to identify sPEPs by searching against the human RefSeq, Vanderperre, and Human Alternative Open Reading Frame (HaltORF) databases. Potential sPEPs were then analyzed bioinformatically.
Since the SDS-PAGE step could not resolve proteins <20 kDa, it did not yield sPEP identifications. All MASCOT-identified peptide fragments mapped to main open reading frames (mORFs) according to ORF Finder and blastp searches; no sPEPs were detected, and the existence of sPEPs could not be confirmed in this study. Thirteen sORFs shown in previous studies to be translated in HEK293 cells by mass spectrometry were therefore characterized bioinformatically. The sPEPs identified in those studies were <100 amino acids and <15 kDa. The bioinformatics results indicate that sORFs are translated into sPEPs and contribute to proteome complexity. The uPEP translated from the uORF of SLC35A4 is strongly conserved between human and mouse, while the uPEP translated from the uORF of MKKS is strongly conserved between human and Rhesus monkey. Cross-species conservation of uORFs associated with protein translation strongly suggests evolutionary maintenance of the coding sequence and indicates probable functional expression of the peptides encoded within these uORFs. Translation of these sORFs was confirmed by mass spectrometry in the earlier studies, and the corresponding sPEPs were characterized here with bioinformatics.
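The uORF concept above, a short reading frame sitting upstream of the main ORF, can be illustrated with a naive sequence scanner. This is a toy sketch on a made-up sequence; the actual pipeline in the study works on MS/MS spectra via MASCOT and ProteinPilot, not on raw DNA:

```python
# Naive scan for short ORFs (start codon ATG .. first in-frame stop)
# in a nucleotide sequence; candidates of at most max_aa codons are
# reported as potential sORFs/uORFs. Toy example only.

STOPS = {"TAA", "TAG", "TGA"}

def find_sorfs(seq, max_aa=100):
    """Return (start, end, aa_length) for each ATG-initiated ORF
    whose coding length (including Met) is <= max_aa codons."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - 2):
        if seq[i:i+3] != "ATG":
            continue
        for j in range(i + 3, len(seq) - 2, 3):
            if seq[j:j+3] in STOPS:
                aa_len = (j - i) // 3  # codons before the stop, incl. Met
                if aa_len <= max_aa:
                    hits.append((i, j + 3, aa_len))
                break  # stop at the first in-frame stop codon
    return hits

utr = "GGATGAAATTTTAGCCATGGGCTGATT"  # hypothetical 5'UTR fragment
print(find_sorfs(utr))  # → [(2, 14, 3), (16, 25, 2)]
```

Real uORF calling additionally requires the frame and position relative to the annotated mORF, plus evidence of translation such as the mass-spectrometry identifications described above.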

Keywords: bioinformatics, HEK293 cells, liquid chromatography-mass spectrometry, ProteinPilot, Strong Cation Exchange Chromatography, SDS-PAGE, sPEPs

Procedia PDF Downloads 167
380 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the behavior of the bridge during each step of the launching process is tedious and time-consuming. The continual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by other load cases such as support settlements and temperature effects. As a result, there is a need for a reliable, simple, economical, and fast algorithmic solution for modeling bridge construction stages. This paper presents a novel Finite Element (FE) model for studying the static behavior of bridges during the launching process, along with a simple method for normalizing all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve because of their underlying assumptions. Building on the results of the new FE model, this study proposes ways to improve the accuracy of conventional models, particularly for the initial stages of bridge construction, which have been neglected in previous research. The work highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. Using this model together with a simple optimization approach, optimal values for the launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges.
In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
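The mechanics driving nose optimization can be sketched with elementary statics: between piers, the launched front acts as a cantilever, and fitting a lighter nose to the deck tip reduces the hogging moment at the last support. The sketch below is plain cantilever statics with uniform loads and illustrative numbers; it is not the paper's FE formulation or its semi-infinite beam model:

```python
# Hogging moment at the last support while launching a deck of
# self-weight w_d (kN/m) fitted with a lighter nose of self-weight
# w_n (kN/m) and length L_n (m). Simple statics sketch only.

def support_moment(x, w_d, w_n, L_n):
    """x = total overhang past the support (m). The nose occupies the
    tip of the overhang; the deck occupies the rest. Each part is a
    uniform load, so its support moment is (resultant) * (lever arm)."""
    deck_len = max(x - L_n, 0.0)   # overhanging deck length
    nose_len = min(x, L_n)         # overhanging nose length
    # deck load acts over [0, deck_len] measured from the support
    M_deck = w_d * deck_len ** 2 / 2.0
    # nose load acts over [deck_len, deck_len + nose_len]
    M_nose = w_n * nose_len * (deck_len + nose_len / 2.0)
    return M_deck + M_nose

# Example: 40 m overhang, 200 kN/m deck, 40 kN/m nose of 20 m
M = support_moment(40.0, 200.0, 40.0, 20.0)
print(f"M = {M:.0f} kN*m")  # → M = 64000 kN*m
```

Sweeping nose length and weight over such a moment function, while also checking the moment when the nose lands on the next pier, is the kind of trade-off the paper's optimization of nose specifications resolves with far greater fidelity.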

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 74
379 Poly(Methyl Methacrylate) Degradation Products and Its in vitro Cytotoxicity Evaluation in NIH3T3 Cells

Authors: Lesly Y Carmona-Sarabia, Luisa Barraza-Vergara, Vilmalí López-Mejías, Wandaliz Torres-García, Maribella Domenech-Garcia, Madeline Torres-Lugo

Abstract:

Biosensors are used in many applications, providing real-time monitoring for long-term conditions. Understanding the physicochemical properties and biological side effects on the skin of polymers (e.g., poly(methyl methacrylate), PMMA) employed in the fabrication of wearable biosensors is therefore crucial for the selection of manufacturing materials in this field. PMMA, a hydrophobic thermoplastic polymer, is commonly employed as a coating material or substrate in the fabrication of wearable devices. Evaluating the cytotoxicity of PMMA (including residual monomers and degradation products) on skin cells and tissue is required to prevent possible adverse effects (cell death, skin reactions, sensitization) on human health. In this work, accelerated aging of PMMA (Mw ~ 15000) through thermal and photochemical degradation was undertaken. The accelerated aging was carried out by thermal degradation (200 °C, 1 h) and photochemical degradation (UV-Vis, 8 to 15 days), adapting ISO protocols (ISO 10993-12, ISO 4892-1:2016, ISO 877-1:2009, ISO 188:2011). In addition, in vitro cytotoxicity of the PMMA degradation products was evaluated using NIH3T3 fibroblast cells to assess the response, in terms of cell viability, of skin tissue exposed to polymers used to manufacture wearable biosensors, such as PMMA. The PMMA (Mw ~ 15000) before and after the accelerated aging experiments was characterized by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), powder X-ray diffraction (PXRD), and scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS) to verify degradation of the polymer under the conditions mentioned above. The degradation products were characterized by nuclear magnetic resonance (NMR) to identify possible byproducts generated during the accelerated aging.
Results showed a weight loss between 1.5 and 2.2% (TGA thermograms) for PMMA after accelerated aging. EDS elemental analysis revealed a 1.32 wt.% loss of carbon for PMMA after thermal degradation. These results may reflect the fraction of PMMA degraded in the accelerated aging experiments. Furthermore, NMR detected the monomer and methyl formate at low concentrations, and a low-molecular-weight radical (·COOCH₃) at higher concentrations, among the thermal degradation products; among the photodegradation products, methyl formate was detected at higher concentrations. These results agree with the thermal and photochemical degradation mechanisms proposed in the literature [1, 2]. Finally, significant cytotoxicity toward the NIH3T3 cells was observed for both the thermal and the photochemical degradation products, with a decrease in cell viability of > 90% (stock solutions). It is proposed that byproducts of PMMA degradation (e.g., methyl formate or radicals such as ·COOCH₃) are responsible for the cytotoxicity observed in the NIH3T3 fibroblast cells. In future work, skin models will be used for comparison with the NIH3T3 fibroblast model.
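The weight-loss and viability figures above come from simple normalizations. A minimal sketch (hypothetical TGA masses and assay readings, not the study's raw data):

```python
# Percent mass loss from TGA masses, and percent cell viability from
# assay signals normalized to an untreated control. The numbers are
# illustrative, not the study's raw data.

def pct_mass_loss(m_initial_mg, m_final_mg):
    """Mass loss relative to the initial sample mass, in percent."""
    return 100.0 * (m_initial_mg - m_final_mg) / m_initial_mg

def pct_viability(signal_treated, signal_control, signal_blank=0.0):
    """Viability relative to untreated control, blank-corrected."""
    return 100.0 * (signal_treated - signal_blank) / (signal_control - signal_blank)

print(f"mass loss: {pct_mass_loss(10.00, 9.82):.1f} %")      # → 1.8 %
print(f"viability: {pct_viability(0.12, 1.05, 0.05):.0f} %")  # → 7 %
```

A viability of 7% relative to control corresponds to the ">90% decrease in cell viability" phrasing used in the abstract.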

Keywords: biosensors, polymer, skin irritation, degradation products, cell viability

Procedia PDF Downloads 118
378 Transforming the Education System for the Innovative Society: A Case Study

Authors: Mario Chiasson, Monique Boudreau

Abstract:

Problem statement: Innovation in education has become a central topic of discussion at various levels, from schools to the scholarly literature, driven by the global technological advancements of Industry 4.0. This study contributes to the ongoing dialogue by examining the role of innovation in transforming school culture through the reimagination of traditional structures. The study argues that such a transformation requires an understanding and experience of systems leadership. This paper presents the case of the Francophone South School District, where a transformative initiative created an innovative learning environment by engaging students, teachers, and community members collaboratively through eco-communities; traditional barriers and structures in education were dismantled to facilitate this process. The research component of this paper focuses on the Intr’Appreneur project, a unique initiative launched by the district team in New Brunswick, Canada, to support a system-wide transformation towards progressive and innovative organizational models. Methods: This study is part of a larger research project on the transformation of educational systems in six pilot schools involved in the Intr’Appreneur project. Due to COVID-19 restrictions, the project was downscaled to three schools, and virtual qualitative interviews were conducted with volunteer teachers and administrators. Data were collected from students, teachers, and principals regarding their perceptions of and experiences with the new learning environment. The analysis involved developing categories, establishing codes for emerging themes, and validating the findings. The study emphasizes the importance of systems leadership in achieving successful transformation.
Results: The findings demonstrate that school principals played a vital role in enabling system-wide change by fostering a dynamic, collaborative, and inclusive culture, coordinating and mobilizing community members, and serving as educational role models who facilitated active and personalized pedagogy among the teaching staff. These qualities align with the characteristics of Leadership 4.0 and are crucial for successful school system transformations. Conclusion: This paper emphasizes the importance of systems leadership in driving educational transformations that extend beyond pedagogical and technological advancements. The research underscores the potential impact of such a leadership approach on teaching, learning, and leading processes in Education 4.0.

Keywords: leadership, system transformation, innovation, innovative learning environment, Education 4.0, system leadership

Procedia PDF Downloads 52
377 Performance Evaluation of Fingerprint, Auto-Pin and Password-Based Security Systems in Cloud Computing Environment

Authors: Emmanuel Ogala

Abstract:

Cloud computing has been envisioned as the next-generation architecture of the Information Technology (IT) enterprise. In contrast to traditional solutions, where IT services are under physical, logical, and personnel controls, cloud computing moves application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. Because such systems are open to the whole world, as legitimate users access them, others are constantly attempting to gain unauthorized access. This research contributes to the improvement of cloud computing security for better operation. The work is motivated by two problems. First, the observed ease of access to cloud computing resources and the complexity of attacks on vital cloud data systems (NIC) require that dynamic security mechanisms evolve to stay capable of preventing illegitimate access. Second, there is a lack of a good methodology for performance testing and evaluation of biometric security algorithms for securing records in a cloud computing environment. The aim of this research was to evaluate the performance of an integrated security system (ISS) for securing exam records in a cloud computing environment. We designed and implemented an ISS combining three security mechanisms, biometric (fingerprint), auto-PIN, and password, into one stream of access control, and used it to secure examination records at Kogi State University, Anyigba. The system we built overcomes the guessing attacks of hackers who guess users' passwords or PINs, because the added fingerprint mechanism requires the physical presence of the user before login access can be granted: the user must place a finger on the fingerprint scanner for capture and verification to confirm authenticity.
The study adopted a quantitative design and an object-oriented design methodology. In the analysis and design, PHP, HTML5, CSS, JavaScript, and Web 2.0 technologies were used to implement the model of the ISS for the cloud computing environment. PHP, HTML5, and CSS were used in conjunction with Visual Studio front-end design tools; MySQL and Access 7.0 were used for the back-end engine; and JavaScript was used for object arrangement and for validating user input as a security check. Finally, the performance of the developed framework was evaluated by comparison with two existing security systems (auto-PIN and password) within the university, and the results showed that the developed fingerprint-based approach overcomes the two main weaknesses of the existing systems and will work well if fully implemented.
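The access-control chain described above (fingerprint, then auto-PIN, then password) amounts to a logical AND over three factors. A hedged sketch follows; the actual system was built in PHP with MySQL, and the Python version below with exact SHA-256 hash matching is illustrative only (real fingerprint verification is a fuzzy template match, not an exact hash comparison):

```python
# Sketch of a three-factor access-control chain: a login succeeds
# only if fingerprint, auto-PIN and password all verify. Hash-based
# exact matching here is a simplification for illustration.

import hashlib

def _h(value: str) -> str:
    """Hex SHA-256 digest of a credential string."""
    return hashlib.sha256(value.encode()).hexdigest()

def verify_login(user_record, fingerprint_template, pin, password):
    """user_record stores hashed credentials; a match on all three
    factors is required (logical AND, checked in sequence)."""
    checks = (
        user_record["fingerprint_hash"] == _h(fingerprint_template),
        user_record["pin_hash"] == _h(pin),
        user_record["password_hash"] == _h(password),
    )
    return all(checks)

# Hypothetical stored record for one user
record = {
    "fingerprint_hash": _h("minutiae-template-42"),
    "pin_hash": _h("8241"),
    "password_hash": _h("s3cret!"),
}
print(verify_login(record, "minutiae-template-42", "8241", "s3cret!"))  # True
print(verify_login(record, "minutiae-template-42", "0000", "s3cret!"))  # False
```

The second call fails on the PIN factor alone, which is the property the study relies on: guessing one factor is no longer sufficient for access.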

Keywords: performance evaluation, fingerprint, auto-pin, password-based, security systems, cloud computing environment

Procedia PDF Downloads 118
376 Comparative Analysis on the Evolution of Chlorinated Solvents Pollution in Granular Aquifers and Transition Zones to Aquitards

Authors: José M. Carmona, Diana Puigserver, Jofre Herrero

Abstract:

Chlorinated solvents belong to the group of dense nonaqueous phase liquids (DNAPLs) and have been involved in many contamination episodes. They are carcinogenic and recalcitrant pollutants that may be found in granular aquifers as: i) pools accumulated on low-hydraulic-conductivity layers; ii) immobile residual phase retained at the pore scale by capillary forces; iii) dissolved phase in groundwater; iv) phase sorbed by particulate organic matter; and v) mass stored in the matrix of low-hydraulic-conductivity layers, which it penetrated by molecular diffusion. The transition zone between granular aquifers and basal aquitards constitutes the lowermost part of the aquifer and presents numerous fine-grained interbedded layers that give rise to significant textural contrasts. These layers condition the transport and fate of contaminants and lead to differences from the rest of the aquifer, given that: i) the hydraulic conductivity of these layers is lower; ii) DNAPL tends to accumulate on them; iii) groundwater flow is slower in the transition zone, and consequently pool dissolution is much slower; iv) sorbed concentrations are higher in the fine-grained layers because of their higher organic matter content; v) a significant mass of pollutant penetrates the matrix of these layers; and vi) this contaminant mass back-diffuses after remediation, and the aquifer becomes contaminated again. Thus, contamination sources of chlorinated solvents are far more recalcitrant in transition zones, which has far-reaching implications for the environment. The aim of this study is to analyze the spatial and temporal differences in the evolution of biogeochemical processes in the transition zone and in the rest of the aquifer. For this, an unconfined aquifer with a transition zone in its lower part was selected at Vilafant (NE Spain). This aquifer was contaminated by perchloroethylene (PCE) in the 1980s.
The distribution of PCE and other chloroethenes in groundwater and porewater was analyzed in: a) conventional piezometers along the plume and two multilevel wells at the contamination source; and b) porewater from the fine-grained materials of cores recovered when the two multilevel wells were drilled. Currently, the highest concentrations are still recorded in the source area, in the transition zone. By contrast, the lowest concentrations in this area correspond to the central part of the aquifer, where flow velocities are higher and the residual phase initially retained has been washed to a greater extent. The major findings of the study were: i) PCE metabolites were detected in the transition zone, where conditions were more reducing than in the rest of the aquifer; ii) however, reductive dechlorination was partial, since only the formation of cis-dichloroethylene (cis-DCE) was reached; iii) in the central part of the aquifer, where conditions were predominantly oxidizing, the presence of nitrate significantly hindered the reductive dechlorination of PCE. The remediation strategies to be implemented should be directed at enhancing dissolution of the source, especially in the transition zone, where it is more recalcitrant, for example by combining chemical and bioremediation methods, already tested at the laboratory scale with groundwater and sediments from this site.
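The matrix storage and back-diffusion mechanism invoked above follows Fick's laws: for a constant dissolved concentration C0 maintained at the interface of a low-conductivity layer, the classic one-dimensional solution gives C(z, t) = C0 · erfc(z / (2·sqrt(De·t))). A minimal sketch (the effective diffusion coefficient De is an illustrative textbook-scale value, not one measured at the Vilafant site):

```python
# 1-D diffusive penetration of a dissolved solvent into a
# low-hydraulic-conductivity layer:
#   C(z, t) = C0 * erfc(z / (2 * sqrt(De * t)))
# De below is an illustrative effective diffusion coefficient (m^2/s),
# not a site-measured value.

from math import erfc, sqrt

def concentration(z_m, t_s, c0, De=1e-10):
    """Concentration at depth z_m (m) into the layer after t_s (s)
    of contact with a constant source concentration c0 at z = 0."""
    return c0 * erfc(z_m / (2.0 * sqrt(De * t_s)))

ten_years = 10 * 365.25 * 24 * 3600  # seconds
for z_cm in (1, 5, 10):
    c = concentration(z_cm / 100.0, ten_years, c0=100.0)
    print(f"z = {z_cm:2d} cm: C = {c:6.2f} mg/L")
```

The same solution run in reverse (clean water at the interface after remediation) is what makes the stored mass bleed back out for years, which is why the abstract calls transition-zone sources so recalcitrant.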

Keywords: chlorinated solvents, chloroethenes, DNAPL, partial reductive dechlorination, PCE, transition zone to basal aquitard

Procedia PDF Downloads 130
375 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms

Authors: Dimitrios Kafetzopoulos

Abstract:

Nowadays, companies are increasingly concerned with adopting their own strategies for greater efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that promotes changes to their operations. Changes here refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization’s competitiveness. To survive and compete in global and niche markets, companies should therefore incorporate the adoption of operational changes, with regard to both their products and their processes, into their strategy. Creating the appropriate culture for changes in products and processes helps companies gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in a company's operations, considering not only product changes but also process changes, and to measure the impact of these two types of change on the business efficiency and sustainability of Greek manufacturing companies. This discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. To achieve the objectives of the present study, a survey was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all questionnaire items of the constructs (radical changes, incremental changes, efficiency, and sustainability).
The constructs of radical and incremental operational changes, each treated as one variable, were subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normality, and outliers were checked. Moreover, the unidimensionality, reliability, and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. To test the research hypotheses, the SEM technique was applied (maximum likelihood method); the goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present findings, radical and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is among the radical operational changes, in both process and product, that the most significant contributors to firm efficiency are found, while their influence on sustainability is low albeit statistically significant. In contrast, incremental operational changes influence sustainability more than efficiency. It is therefore apparent that embedding changes in a firm's product and process practices has direct and positive consequences for both its efficiency and its sustainability.
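The reliability check mentioned above for Likert-scale constructs is commonly done with Cronbach's alpha, α = k/(k-1) · (1 − Σ s²ᵢ / s²_total), where s²ᵢ are the item variances and s²_total is the variance of the summed score. A minimal sketch (toy 7-point responses, not the study's data):

```python
# Cronbach's alpha for a set of Likert items:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# Toy 7-point responses; not the study's data.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, one per item, each with n responses."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent sums
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [
    [5, 6, 4, 7, 5, 6],  # item 1 responses (1-7 Likert)
    [4, 6, 4, 6, 5, 5],  # item 2
    [5, 7, 3, 7, 4, 6],  # item 3
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # → alpha = 0.92
```

Values above roughly 0.7 are conventionally taken as acceptable reliability for a construct before it enters the SEM stage.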

Keywords: incremental operational changes, radical operational changes, efficiency, sustainability

Procedia PDF Downloads 114
374 Challenges of Strategies for Improving Sustainability in Urban Historical Context in Developing Countries: The Case of Shiraz Bein Al-Haramein

Authors: Amir Hossein Ashari, Sedighe Erfan Manesh

Abstract:

One of the challenges in developing countries is renovating historical urban contexts and introducing behaviors appropriate to modern life into them. This study was conducted in 2012 using field and library methods; similar projects carried out in Iran and other developing countries were compared to reveal their strengths and weaknesses. In the historical context of Shiraz, the area between the two religious shrines of Shahcheragh (Ahmad ibn Musa) and Astaneh (Sayed Alaa al-Din Hossein), significant places in religious, cultural, social, and economic terms, is full of historic places and is called Bein Al-Haramein. Unfortunately, some of these places have deteriorated and are no longer appropriate for common uses. The basic strategy for Bein Al-Haramein was to improve the social development of Shiraz, to enhance the vitality and dynamism of the historical context, and to create tourist attractions in order to boost the city's economic and social stability. To this end, the project includes the huge Bein Al-Haramein Commercial Complex, currently under construction. To build the complex, officials have decided to demolish places of historical value, which can lead to irreparable consequences. Iranian urban design has always been based on the three elements of bazaars, mosques, and government facilities, with bazaars acting as the organic connector of the other elements. Therefore, the best strategy in this case is to provide a commercial connection between the two poles. Although this strategy is included in the project, the lack of attention to renovation principles in the area and the complete destruction of the context will cause irreversible damage and destroy its cultural and historical identity.
In urban planning of this project, some important issues have been neglected including: preserving valuable buildings and special old features of the city, rebuilding worn buildings and context to attract trust and confidence of the people, developing new models according to changes, improving the structural position of old context with minimal degradation, attracting partnerships of residents and protecting their rights and finally using potential facilities of the old context. The best strategy for achieving sustainability in Bein Al-Haramein can be the one used in the distance between Santa Maria Novella and Santa Maria Del Fiore churches in historical context where while protecting the historic context and constructions, old buildings were renovated and given different commercial and service uses making them sustainable and dynamic places. Similarly, in Bein Al-Haramein, renovating old constructions and monuments and giving different commercial and other uses to them can help improve the economic and social sustainability of the area.

Keywords: Bein Al-Haramein, sustainability, historical context

Procedia PDF Downloads 421
373 Development of a Quick On-Site Pass/Fail Test for the Evaluation of Fresh Concrete Destined for Application as Exposed Concrete

Authors: Laura Kupers, Julie Piérard, Niki Cauberg

Abstract:

The use of exposed concrete (sometimes referred to as architectural concrete) keeps gaining popularity. Exposed concrete has the advantage of combining the structural properties of concrete with an aesthetic finish. However, for a successful aesthetic finish, much attention needs to be paid to the execution (formwork, release agent, curing, weather conditions…), the concrete composition (choice of the raw materials and mix proportions) as well as to its fresh properties. For the latter, a simple on-site pass/fail test could halt the casting of concrete not suitable for architectural concrete and thus avoid expensive repairs later. When architects opt for exposed concrete, they usually want a smooth, uniform and nearly blemish-free surface. For this, a standard ‘construction’ concrete does not suffice. An aesthetic surface finish requires the concrete to contain a minimum content of fines to minimize the risk of segregation and to allow complete filling of more complex shaped formworks. Nor may the concrete be too viscous, as this makes it more difficult to compact and increases the risk of blow holes blemishing the surface. On the other hand, too much bleeding may cause color differences on the concrete surface. An easy pass/fail test, which can be performed on site just before casting, could avoid these problems. In case the fresh concrete fails the test, the concrete can be rejected; only if it passes would the concrete be cast. The pass/fail tests are intended for a concrete with a consistency class S4. Five tests were selected as possible on-site pass/fail tests. Two of these tests already exist: the K-slump test (ASTM C1362) and the Bauer Filter Press Test.
The remaining three tests were developed by the BBRI in order to test the segregation resistance of fresh concrete on site: the ‘dynamic sieve stability test’, the ‘inverted cone test’ and an adapted ‘visual stability index’ (VSI) for the slump and flow test. These tests were inspired by existing tests for self-compacting concrete, for which the segregation resistance is of great importance. The suitability of the fresh concrete mixtures was also tested by means of a laboratory reference test (resistance to segregation) and by visual inspection (blow holes, structure…) of small test walls. More than fifteen concrete mixtures of different quality were tested. The results of the pass/fail tests were compared with the results of this laboratory reference test and the test walls. The preliminary laboratory results indicate that concrete mixtures ‘suitable’ for placing as exposed concrete (containing sufficient fines, a balanced grading curve etc.) can be distinguished from ‘inferior’ concrete mixtures. Additional laboratory tests, as well as tests on site, will be conducted to confirm these preliminary results and to set appropriate pass/fail values.

Keywords: exposed concrete, testing fresh concrete, segregation resistance, bleeding, consistency

Procedia PDF Downloads 404
372 Gas Systems of the Amadeus Basin, Australia

Authors: Chris J. Boreham, Dianne S. Edwards, Amber Jarrett, Justin Davies, Robert Poreda, Alex Sessions, John Eiler

Abstract:

The origins of natural gases in the Amadeus Basin have been assessed using molecular and stable isotope (C, H, N, He) systematics. A dominant end-member thermogenic, oil-associated gas is considered for the Ordovician Pacoota−Stairway sandstones of the Mereenie gas and oil field. In addition, an abiogenic end-member is identified in the latest Proterozoic lower Arumbera Sandstone of the Dingo gasfield, most likely associated with radiolysis of methane and polymerisation to wet gases. The latter source assignment is based on a similar geochemical fingerprint derived from laboratory gamma irradiation experiments on methane. A mixed gas source is considered for the Palm Valley gasfield in the Ordovician Pacoota Sandstone. Gas wetness (%∑C₂−C₅/∑C₁−C₅) decreases in the order Mereenie (19.1%) > Palm Valley (9.4%) > Dingo (4.1%). Non-produced gases at Magee-1 (23.5%; Late Proterozoic Heavitree Quartzite) and Mount Kitty-1 (18.9%; Paleo-Mesoproterozoic fractured granitoid basement) are very wet. Methane thermometry based on clumped isotopes of methane (¹³CH₃D) is consistent with the abiogenic origin of the Dingo gas field, with a methane formation temperature of 254 °C. However, the low methane formation temperature of 57 °C for the Mereenie gas suggests either a mixed thermogenic-biogenic methane source or that there is no thermodynamic equilibrium between the methane isotopologues. The shallow reservoir depth and present-day formation temperature below 80 °C would support microbial methanogenesis, but there is no accompanying alteration of the C- and H-isotopes of the wet gases and CO₂ that is typically associated with biodegradation. The Amadeus Basin gases show low to extremely high inorganic gas contents. Carbon dioxide is low in abundance (< 1% CO₂) and becomes increasingly depleted in ¹³C from the Palm Valley (av. δ¹³C 0‰) to the Mereenie (av. δ¹³C -6.6‰) and Dingo (av. δ¹³C -14.3‰) gas fields.
Although the wide range in carbon isotopes for CO₂ is consistent with multiple origins from inorganic to organic inputs, the most likely process is fluid-rock alteration, with enrichment in ¹²C in the residual gaseous CO₂ accompanying progressive carbonate precipitation within the reservoir. Nitrogen ranges from low−moderate (1.7−9.9% N₂) abundance (Palm Valley av. 1.8%; Mereenie av. 9.1%; Dingo av. 9.4%) to extremely high abundance in Magee-1 (43.6%) and Mount Kitty-1 (61.0%). The nitrogen isotopes for the production gases (δ¹⁵N = -3.0‰ for Mereenie, -3.0‰ for Palm Valley and -7.1‰ for Dingo) suggest mixed inorganic and thermogenic nitrogen sources for all. Helium (He) abundance varies over a wide range from a low of 0.17% to one of the world’s highest at 9% (Mereenie av. 0.23%; Palm Valley av. 0.48%; Dingo av. 0.18%; Magee-1 6.2%; Mount Kitty-1 9.0%). Complementary helium isotopes (R/Ra = (³He/⁴He)sample / (³He/⁴He)air) range from 0.013 to 0.031 R/Ra, indicating a dominant crustal origin for helium with a sustained input of radiogenic ⁴He from the decomposition of U- and Th-bearing minerals, effectively diluting any original mantle helium input. The high helium content in the non-produced gases compared to the shallower producing wells most likely reflects their stratigraphic position relative to the Tonian Bitter Springs Group, with the former below and the latter above an effective carbonate-salt seal.
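The two indices quoted above, gas wetness and the air-normalised helium ratio, are simple ratios. The sketch below illustrates them in Python with made-up compositions (not the study's raw data); the atmospheric ³He/⁴He constant used is the commonly cited ~1.384e-6.

```python
# Gas wetness: 100 * sum(C2..C5) / sum(C1..C5), from mole percentages
def gas_wetness(c1, c2, c3, c4, c5):
    wet = c2 + c3 + c4 + c5
    return 100.0 * wet / (c1 + wet)

# Air-normalised helium isotope ratio: R/Ra = (3He/4He)_sample / (3He/4He)_air
R_AIR = 1.384e-6  # commonly used atmospheric 3He/4He

def r_over_ra(he3_he4_sample):
    return he3_he4_sample / R_AIR

# Illustrative dry-gas composition with wetness close to the Dingo value (4.1%)
wetness = gas_wetness(95.9, 2.5, 1.0, 0.4, 0.2)
# An illustrative crustal gas: sample 3He/4He of ~2.8e-8 gives R/Ra near 0.02
ratio = r_over_ra(2.768e-8)
```

Ratios well below 1 R/Ra, as computed here, indicate crustal (radiogenic) helium; mantle-derived helium would push R/Ra toward and above 1.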

Keywords: Amadeus gas, thermogenic, abiogenic, C, H, N, He isotopes

Procedia PDF Downloads 175
371 Increasing Prevalence of Multi-Allergen Sensitivities in Patients with Allergic Rhinitis and Asthma in Eastern India

Authors: Sujoy Khan

Abstract:

There is rising concern over increasing allergies affecting both adults and children in rural and urban India. A recent report on adults in a densely populated North Indian city showed sensitization rates for house dust mite, parthenium, and cockroach at 60%, 40% and 18.75%, now comparable to allergy prevalence in cities in the United States. Data from patients residing in the eastern part of India are scarce. A retrospective study (over 2 years) was done on patients with allergic rhinitis and asthma in whom allergen-specific IgE levels were measured to determine the aero-allergen sensitization pattern in a large metropolitan city of East India. Total IgE and allergen-specific IgE levels were measured using ImmunoCAP (Phadia 100, Thermo Fisher Scientific, Sweden) with region-specific aeroallergens: Dermatophagoides pteronyssinus (d1); Dermatophagoides farinae (d2); cockroach (i206); grass pollen mix (gx2) consisting of Cynodon dactylon, Lolium perenne, Phleum pratense, Poa pratensis, Sorghum halepense, Paspalum notatum; tree pollen mix (tx3) consisting of Juniperus sabinoides, Quercus alba, Ulmus americana, Populus deltoides, Prosopis juliflora; food mix 1 (fx1) consisting of peanut, hazelnut, Brazil nut, almond, coconut; mould mix (mx1) consisting of Penicillium chrysogenum, Cladosporium herbarum, Aspergillus fumigatus, Alternaria alternata; animal dander mix (ex1) consisting of cat, dog, cow and horse dander; and weed mix (wx1) consisting of Ambrosia elatior, Artemisia vulgaris, Plantago lanceolata, Chenopodium album, Salsola kali, following the manufacturer’s instructions. As the IgE levels were not uniformly distributed, median values were used to represent the data. In total, 92 patients with allergic rhinitis and asthma (united airways disease) were studied over 2 years, including 21 children (age < 12 years) who had total IgE and allergen-specific IgE levels measured.
The median IgE level was higher in 2016 than in 2015, with 60% of patients (adults and children) sensitized to house dust mite (dual positivity for Dermatophagoides pteronyssinus and farinae). Of 11 children in 2015, whose total IgE ranged from 16.5 to >5000 kU/L, 36% were polysensitized (≥4 allergens) and 55% were sensitized to dust mites. Of 10 children in 2016, total IgE levels ranged from 37.5 to 2628 kU/L; 20% were polysensitized and 60% were sensitized to dust mites. Mould sensitivity was 10% in both years among the children studied. A consistent finding was that ragweed sensitization (molecular homology to Parthenium hysterophorus) appeared to be increasing across all age groups and throughout the year, consistent with our previous report in which 25% of patients were sensitized. In the study sample overall, sensitizations to dust mite, cockroach, and parthenium were important risks in patients with moderate to severe asthma, which reinforces the importance of controlling indoor exposure to these allergens. Sensitizations to dust mite, cockroach and parthenium allergens are important predictors of asthma morbidity not only among children but also among adults in Eastern India.

Keywords: aeroallergens, asthma, dust mite, parthenium, rhinitis

Procedia PDF Downloads 175
370 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces for the increasing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through various methods, such as prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, in recent years efforts have been made to develop Design Decision Support Systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design (consisting of combinations of different methods in an integrated package) have been listed in the literature. While various review studies have been conducted on each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration for specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar were searched using specific keywords and the results were divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed.
The research shows a movement towards Multi-Level of Development (MOD) models that combine well with the early stages of integrated design (the schematic design and design development stages), are heuristic, hybrid and meta-simulation-based, and rely on big real-world data (such as Building Energy Management System data or web data). Obtaining, using and combining these data with simulation data to create models that handle higher uncertainty, are more dynamic and are more sensitive to context and culture, as well as models that can generate economical, energy-efficient design scenarios using local data (to be more harmonized with circular economy principles), are important research areas in this field. The results of this study are a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 142
369 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers

Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek

Abstract:

Flow through granular materials is important to a vast array of industries: in the construction industry, where granular layers are used for bulkheads and isolators; in chemical engineering and catalytic reactors, where large surfaces of packed granular beds intensify chemical reactions; and in energy production systems, where granulates are promising materials for heat storage and as heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, the phenomena occurring between granular solid elements, or between solids and fluid, are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, and their effective characteristic dimension (total volume or surface area). We analyze to what extent alteration of these parameters influences flow characteristics (turbulent intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fiber probes) inside the layers is practically impossible, whereas the use of probes (e.g. thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical and numerical analyses of flow inside granulates are crucial.
Application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) method, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and also for flows in wavy channels, wavy pipes and over various shaped obstacles. In these cases the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of flows in dense granular layers with elements distributed in a deterministic, regular manner, and on validation of the results obtained using the LES-IB method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, number of elements and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
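The volume-penalization idea can be illustrated in one dimension: the solid grain is never meshed; instead a mask χ marks it on a plain Cartesian grid, and a stiff source term -(χ/η)(u − u_s) in the transport equation drives the solution toward the solid value. The following is only a schematic sketch of the concept, with illustrative parameters and a simple semi-implicit update, not the SAILOR solver:

```python
import numpy as np

# Minimal 1D sketch of volume penalization for a heat equation: a mask chi
# marks the solid grain on a uniform grid, and the source term
# -(chi/eta)*(u - u_solid) pins the temperature inside it.
# All parameters are illustrative choices.
nx, L = 200, 1.0
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
alpha, eta = 1e-3, 1e-4                      # diffusivity, penalization parameter
chi = ((x > 0.4) & (x < 0.6)).astype(float)  # mask = 1 inside the solid grain
u = np.zeros(nx)
u[0] = 1.0                                   # hot left wall
u_solid = 0.5                                # prescribed grain temperature

dt = 0.2 * dx**2 / alpha                     # within the explicit diffusion limit
lap = np.zeros(nx)
for _ in range(2000):
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    # the stiff penalization term is treated semi-implicitly for stability
    u = (u + dt * alpha * lap + (dt * chi / eta) * u_solid) / (1.0 + dt * chi / eta)
    u[0], u[-1] = 1.0, 0.0                   # Dirichlet walls
```

As η → 0 the penalized region approaches the prescribed solid value, which is how the method mimics a body-fitted boundary condition on a simple Cartesian mesh.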

Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations

Procedia PDF Downloads 108
368 A Markov Model for the Elderly Disability Transition and Related Factors in China

Authors: Huimin Liu, Li Xiang, Yue Liu, Jing Wang

Abstract:

Background: As a typical case among developing countries that are stepping into an aging society, China faces a growing number of older people who cannot maintain normal life due to functional disability. While the government is making efforts to build a long-term care system and to carry out related policies, there is still a lack of strong evidence for evaluating the profile of disability states in the elderly population and their transition rates. It has been shown that disability is a dynamic condition rather than an irreversible one, so it is possible to intervene in time for those who might be at risk of severe disability. Objective: The aim of this study was to depict the disability transition status of older people in China, and then to identify the individual characteristics that change the state of disability, providing a theoretical basis for disability prevention and early intervention among elderly people. Methods: Data for this study came from the 2011 baseline survey and the 2013 follow-up survey of the China Health and Retirement Longitudinal Study (CHARLS). Normal ADL function, 1-2 ADLs disability, 3 or more ADLs disability, and death were defined as states 1 to 4. A multi-state Markov model was applied, and a four-state homogeneous model with discrete states and discrete times from the two-visit follow-up data was constructed to explore factors for the various progressive stages. We modeled the effect of explanatory variables on the transition rates using a proportional intensities model with covariates such as gender. Results: In the total sample, the state 2 constituent ratio was about 17.0%, while the state 3 proportion was lower, accounting for 8.5%. Moreover, the difference in ADL disability statistics between the two years was not obvious. About half of those in state 2 in 2011 had improved to normal by 2013, even though they had grown older.
However, the proportion of state 3 transitioning to death increased markedly, close to the proportion returning to state 2 or normal function. From the estimated intensities, we see that older people are eleven times as likely to develop 1-2 ADLs disability as to die. After disability onset (state 2), progression to state 3 is 30% more likely than recovery. Once in state 3, a mean of 0.76 years is spent before death or recovery. In this model, a typical person in state 2 has a probability of 0.5 of being disability-free one year from now, while a person with moderate or more severe disability has a probability of 0.14 of being dead. Conclusion: With long-term care costs in mind, preventive programs to delay disability progression in the elderly could be adopted based on the current disability state and the main factors at each stage. In general, elderly individuals who are moderately or more severely disabled should be addressed first.
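A multi-state model of this kind is specified by a transition intensity matrix Q whose rows sum to zero; transition probabilities over an interval t then follow as P(t) = exp(Qt). The sketch below uses illustrative intensities, not the CHARLS estimates, chosen so that the normal-to-mild rate is eleven times the normal-to-death rate, progression from state 2 is 30% likelier than recovery, and the mean sojourn in state 3 (equal to -1/q₃₃) is about 0.77 years, echoing the figures quoted above:

```python
import numpy as np

# States: 1 normal ADL, 2 mild (1-2 ADLs), 3 severe (3+ ADLs), 4 death.
# Illustrative yearly transition intensities; each row sums to zero.
Q = np.array([
    [-0.300,  0.275,  0.000, 0.025],  # normal: mild is 11x likelier than death
    [ 0.400, -1.020,  0.520, 0.100],  # mild: progression 30% likelier than recovery
    [ 0.000,  0.550, -1.300, 0.750],  # severe: mean sojourn = -1/q33 ~ 0.77 yr
    [ 0.000,  0.000,  0.000, 0.000],  # death is absorbing
])

def transition_matrix(Q, t, n=4096):
    """Approximate P(t) = expm(Q*t) by the product formula (I + Q*t/n)^n."""
    I = np.eye(Q.shape[0])
    return np.linalg.matrix_power(I + Q * t / n, n)

P2 = transition_matrix(Q, 2.0)   # two-year transition probabilities
sojourn_severe = -1.0 / Q[2, 2]  # expected years in state 3 before leaving
```

Each row of P(t) is a probability distribution over the four states after t years, which is the quantity compared against the observed 2011-to-2013 transitions in a proportional intensities fit.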

Keywords: Markov model, elderly people, disability, transition intensity

Procedia PDF Downloads 273
367 Lead Removal From Ex-Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies

Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof

Abstract:

Exposure of galena (PbS), tealite (PbSnS₂), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead at levels as high as 69.46 mg/L. Lead is not easily removed from water by conventional treatment. A promising and emerging treatment technology for lead removal is the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results indicated that the maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm², an electrode spacing of 0.3 cm, a treatment time of 60 minutes, liquid motion by magnetic stirring (LM-MS), and electrode arrangement BP-S. The experimental data were further modeled and optimized using a 2-level 4-factor full factorial design, a Response Surface Methodology (RSM). The four factors optimized were current density, electrode spacing, electrode arrangement, and liquid motion driving mode (LM). Based on the regression model and the analysis of variance (ANOVA) at the 0.01% level, the results showed that increases in current density and LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% with an electrode spacing of 0.38 cm, among other settings. Applying the predicted parameters, a lead removal efficiency of 100% was realized.
The electrode and energy consumptions were 0.192 kg/m³ and 2.56 kWh/m³, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption process was also random, spontaneous, and endothermic, with higher process temperatures enhancing the adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface and showing good adsorption efficiency by the Fe electrodes. Adsorption of Pb²⁺ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants, using the electrocoagulation method, for feed-water sources that contain lead.
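The pseudo-second-order model mentioned above is commonly fitted in its linearised form t/qt = 1/(k·qe²) + t/qe, where the slope and intercept of t/qt against t give the equilibrium capacity qe and the rate constant k. A minimal sketch with synthetic, purely illustrative data (not the study's measurements):

```python
import numpy as np

# Pseudo-second-order kinetics: qt = qe^2 * k * t / (1 + qe * k * t),
# linearised as t/qt = 1/(k*qe^2) + t/qe.
qe_true, k_true = 12.0, 0.05                  # mg/g and g/(mg*min), assumed
t = np.array([5.0, 10, 20, 40, 60, 90, 120])  # sampling times, min
qt = qe_true**2 * k_true * t / (1.0 + qe_true * k_true * t)

# Linear fit of t/qt against t recovers the model parameters
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope                          # equilibrium capacity, mg/g
k_fit = 1.0 / (intercept * qe_fit**2)         # rate constant, g/(mg*min)
```

With experimental data, a high coefficient of determination for this linear fit is what supports assigning the system to the pseudo-second-order model.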

Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics

Procedia PDF Downloads 132
366 Immunoliposome-Mediated Drug Delivery to Plasmodium-Infected and Non-Infected Red Blood Cells as a Dual Therapeutic/Prophylactic Antimalarial Strategy

Authors: Ernest Moles, Patricia Urbán, María Belén Jiménez-Díaz, Sara Viera-Morilla, Iñigo Angulo-Barturen, Maria Antònia Busquets, Xavier Fernàndez-Busquets

Abstract:

Bearing in mind the absence of an effective vaccine against malaria and its severe clinical manifestations, causing nearly half a million deaths every year, this disease today represents a major threat to life. Moreover, the basic rationale followed by currently marketed antimalarial approaches is the administration of drugs on their own, promoting the emergence of drug-resistant parasites owing to the limitation in delivering drug payloads into the parasitized erythrocyte high enough to kill the intracellular pathogen while minimizing the risk of toxic side effects to the patient. This dichotomy has been successfully addressed through the specific delivery of immunoliposome (iLP)-encapsulated antimalarials to Plasmodium falciparum-infected red blood cells (pRBCs). Unfortunately, this strategy has not progressed towards clinical applications, and in vitro assays rarely reach drug efficacy improvements above 10-fold. Here, we show that encapsulation efficiencies reaching >96% can be achieved for the weakly basic drugs chloroquine (CQ) and primaquine using the pH-gradient active loading method in liposomes composed of neutrally charged, saturated phospholipids. Targeting antibodies are best conjugated through their primary amino groups, adjusting chemical crosslinker concentration to retain significant antigen recognition. Antigens from non-parasitized RBCs have also been considered as targets for the intracellular delivery of drugs not affecting erythrocytic metabolism. Using this strategy, we have obtained unprecedented nanocarrier targeting to early intraerythrocytic stages of the malaria parasite, for which there is a lack of specific extracellular molecular tags.
Polyethylene glycol-coated liposomes conjugated with monoclonal antibodies specific for the erythrocyte surface protein glycophorin A (anti-GPA iLP) were capable of targeting 100% RBCs and pRBCs at the low concentration of 0.5 μM total lipid in the culture, with >95% of added iLPs retained into the cells. When exposed for only 15 min to P. falciparum in vitro cultures synchronized at early stages, free CQ had no significant effect over parasite viability up to 200 nM drug, whereas iLP-encapsulated 50 nM CQ completely arrested its growth. Furthermore, when assayed in vivo in P. falciparum-infected humanized mice, anti-GPA iLPs cleared the pathogen below detectable levels at a CQ dose of 0.5 mg/kg. In comparison, free CQ administered at 1.75 mg/kg was, at most, 40-fold less efficient. Our data suggest that this significant improvement in drug antimalarial efficacy is in part due to a prophylactic effect of CQ found by the pathogen in its host cell right at the very moment of invasion.

Keywords: immunoliposomal nanoparticles, malaria, prophylactic-therapeutic polyvalent activity, targeted drug delivery

Procedia PDF Downloads 348
365 Human Creativity through Dooyeweerd's Philosophy: The Case of Creative Diagramming

Authors: Kamaran Fathulla

Abstract:

Human creativity knows no bounds. More than a millennium ago, humans were already expressing their knowledge on cave walls and clay artefacts. Visuals such as diagrams and paintings have always provided us with a natural and intuitive medium for expressing such creativity. Making sense of human-generated visualisation has been influenced by western scientific philosophies, which are often reductionist in nature. Theoretical frameworks such as those delivered by Peirce dominated our views of how to make sense of visualisation, where a visual is seen as an emergent property of our thoughts. Others have reduced the richness of human-generated visuals to mere shapes drawn on a piece of paper or on a screen. This paper introduces an alternative framework where the centrality of human functioning is given explicit and richer consideration through the multi-aspectual philosophical works of Herman Dooyeweerd. Dooyeweerd's framework for understanding reality was based on fifteen aspects of reality, each having a distinct core meaning. The totality of the aspects formed a ‘rainbow’-like spectrum of meaning. The thesis of this approach is that meaningful human functioning in most cases involves the diversity of all aspects working in synergy and harmony. The foundations and applicability of this approach are illustrated through the case of humans' use of diagramming for creative purposes, particularly within an educational context. Diagrams play an important role in education. Students and lecturers use diagrams as a powerful tool to aid their thinking. However, research into the role of diagrams used in education continues to reveal difficulties students encounter during both interpretation and construction of diagrams. Three main problems shape students' difficulties with diagrams.
The types of diagrams are ever more diverse, and most real-world diagrams contain a mix of these different types, such as boxes and lines, bar charts, surfaces, routes, and shapes dotted around the drawing area, with each type having its own distinct set of static and dynamic semantics. We argue that the persistence of these problems is grounded in our existing ways of understanding diagrams, which are often reductionist in their underpinnings, driven by a single perspective or formalism. In this paper, we demonstrate the limitations of these approaches in dealing with the three problems. Consequently, we propose, discuss, and demonstrate the potential of a non-reductionist framework for understanding diagrams based on Symbolic and Spatial Mappings (SySpM), underpinned by Dooyeweerd's philosophy. The potential of the framework to account for the meaning of diagrams is demonstrated by applying it to a real-world case study physics diagram.

Keywords: SySpM, drawing style, mapping

Procedia PDF Downloads 222
364 Defective Autophagy Disturbs Neural Migration and Network Activity in hiPSC-Derived Cockayne Syndrome B Disease Models

Authors: Julia Kapr, Andrea Rossi, Haribaskar Ramachandran, Marius Pollet, Ilka Egger, Selina Dangeleit, Katharina Koch, Jean Krutmann, Ellen Fritsche

Abstract:

It is widely acknowledged that animal models do not always represent human disease. Human brain development in particular is difficult to model in animals due to a variety of structural and functional species-specificities. This causes significant discrepancies between predicted and apparent drug efficacies in clinical trials and their subsequent failure. Emerging alternatives based on 3D in vitro approaches, such as human brain spheres or organoids, may in the future reduce and ultimately replace animal models. Here, we present a human induced pluripotent stem cell (hiPSC)-based 3D neural in vitro disease model for Cockayne Syndrome B (CSB). CSB is a rare hereditary disease accompanied by severe neurological defects, such as microcephaly, ataxia and intellectual disability, with currently no treatment options. The aim of this study is therefore to investigate the molecular and cellular defects found in neural hiPSC-derived CSB models, since understanding the underlying pathology of CSB enables the development of treatment options. The two CSB models used in this study comprise a patient-derived hiPSC line and its isogenic control, as well as a CSB-deficient cell line based on a healthy hiPSC line (IMR90-4) background, thereby excluding genetic background-related effects. Neurally induced and differentiated brain sphere cultures were characterized via RNA sequencing, western blot (WB), immunocytochemistry (ICC) and multielectrode arrays (MEAs). CSB deficiency leads to altered gene expression of markers for autophagy, focal adhesion and neural network formation. Cell migration was significantly reduced and electrical activity was significantly increased in the disease cell lines. These data hint at the cellular pathologies possibly underlying CSB. By induction of autophagy, the migration phenotype could be partially rescued, suggesting a crucial role of disturbed autophagy in the defective neural migration of the disease lines.
Altered autophagy may also lead to inefficient mitophagy. Accordingly, the disease cell lines were shown to have a lower basal mitochondrial activity and a higher susceptibility to mitochondrial stress induced by rotenone. Since mitochondria play an important role in neurotransmitter cycling, we suggest that defective mitochondria may lead to altered electrical activity in the disease cell lines. Failure to clear the defective mitochondria by mitophagy, and thus missing initiation cues for new mitochondrial production, could potentiate this problem. With our data, we aim at establishing a disease adverse outcome pathway (AOP), thereby adding to the in-depth understanding of this multi-faceted disorder and subsequently contributing to alternative drug development.

Keywords: autophagy, disease modeling, in vitro, pluripotent stem cells

Procedia PDF Downloads 103
363 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property losses caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind to the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the ‘T₂-shortening’ effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles, and can be detected using NMR relaxometry or MRI scanners. In this work, two sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first immobilized separately with anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich the P. aeruginosa cells. When incubated with the target, a ‘sandwich’ (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition, as well as the conjugate aggregation of MN₁₀ on the surface of P. aeruginosa. 
Due to the different magnetic performance of MN₁₀ and MN₄₀₀ in a magnetic field, caused by their different saturation magnetizations, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation, so that only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low-field magnet, serve as the signal readout for the T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor were demonstrated by detecting real food samples and validated using plate counting methods. With only two steps and less than 2 hours needed for the detection procedure, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1×10² cfu/mL to 3.1×10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct and accurate determination of food-borne pathogens at the cell level.
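The quantitative readout described above can be sketched as a log-linear calibration between the measured T₂ of the remaining MN₁₀ and the cell concentration over the reported linear range (3.1×10² to 3.1×10⁷ cfu/mL). The blank T₂ and slope values below are illustrative assumptions, not data from the study:

```python
import math

# Assumed calibration constants (hypothetical, for illustration only):
T2_BLANK_MS = 350.0        # T2 with no bacteria (all MN10 remain)
SLOPE_MS_PER_LOG = -25.0   # T2 change per decade of concentration
LOWER_LIMIT_CFU = 3.1e2    # lower bound of the reported linear range

def predicted_t2(conc_cfu_per_ml):
    """Predict T2 (ms) from bacterial concentration using the assumed
    log-linear calibration; valid only inside the linear range."""
    log_c = math.log10(conc_cfu_per_ml)
    return T2_BLANK_MS + SLOPE_MS_PER_LOG * (log_c - math.log10(LOWER_LIMIT_CFU))

def estimate_concentration(t2_ms):
    """Invert the calibration: recover cfu/mL from a measured T2."""
    log_c = math.log10(LOWER_LIMIT_CFU) + (t2_ms - T2_BLANK_MS) / SLOPE_MS_PER_LOG
    return 10.0 ** log_c
```

In practice the calibration constants would be fitted to standards spanning the linear range before unknown samples are measured.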

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 130
362 Implementation of Synthesis and Quality Control Procedures of ¹⁸F-Fluoromisonidazole Radiopharmaceutical

Authors: Natalia C. E. S. Nascimento, Mercia L. Oliveira, Fernando R. A. Lima, Leonardo T. C. do Nascimento, Marina B. Silveira, Brigida G. A. Schirmer, Andrea V. Ferreira, Carlos Malamut, Juliana B. da Silva

Abstract:

Tissue hypoxia is a common characteristic of solid tumors, leading to decreased sensitivity to radiotherapy and chemotherapy. In the clinical context, tumor hypoxia assessment employing the positron emission tomography (PET) tracer ¹⁸F-fluoromisonidazole ([¹⁸F]FMISO) helps physicians in planning and adjusting therapy. The aim of this work was to implement the synthesis of [¹⁸F]FMISO in a TRACERlab® MXFDG module and to establish the quality control procedure. [¹⁸F]FMISO was synthesized at Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN/Brazil) using an automated synthesizer (TRACERlab® MXFDG, GE) adapted for the production of [¹⁸F]FMISO. The FMISO chemical standard was purchased from ABX. ¹⁸O-enriched water was acquired from Center of Molecular Research. Reagent kits containing eluent solution, acetonitrile, ethanol, 2.0 M HCl solution, buffer solution, water for injections and [¹⁸F]FMISO precursor (dissolved in 2 ml acetonitrile) were purchased from ABX. The [¹⁸F]FMISO samples were purified by the solid-phase extraction method. The quality requirements for [¹⁸F]FMISO are established in the European Pharmacopeia. According to that reference, quality control of [¹⁸F]FMISO should include appearance, pH, radionuclidic identity and purity, radiochemical identity and purity, chemical purity, residual solvents, bacterial endotoxins, and sterility. The duration of the synthesis process was 53 min, with a radiochemical yield of (37.00 ± 0.01)%, and the specific activity was more than 70 GBq/µmol. The syntheses were reproducible and showed satisfactory results. Regarding the quality control analysis, the samples were clear and colorless at pH 6.0. The emission spectrum, measured using a high-purity germanium (HPGe) detector, presented a single peak at 511 keV, and the half-life, determined by the decay method in an activimeter, was (111.0 ± 0.5) min, indicating the absence of radioactive contaminants other than the desired radionuclide (¹⁸F). 
The samples showed a tetrabutylammonium (TBA) concentration < 50 μg/mL, assessed by visual comparison with a TBA standard applied on the same thin-layer chromatographic plate. Radiochemical purity was determined by high-performance liquid chromatography (HPLC), and the results were 100%. Regarding the residual solvents tested, ethanol and acetonitrile presented concentrations lower than 10% and 0.04%, respectively. Healthy female mice were injected via the lateral tail vein with [¹⁸F]FMISO; microPET imaging studies (15 min) were performed 2 h post injection (p.i.), and the biodistribution was analyzed at five time points (30, 60, 90, 120 and 180 min) after injection. Subsequently, organs/tissues were assayed for radioactivity with a gamma counter. All quality control parameters met the quality criteria, confirming that [¹⁸F]FMISO is suitable for use in non-clinical and clinical trials, following the legal requirements for the production of new radiopharmaceuticals in Brazil.
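The decay method used above to confirm radionuclidic identity fits a single-exponential decay law to successive activity readings; the half-life follows directly from any two readings. A minimal sketch (the activity values are made up for illustration; the study's measured value was (111.0 ± 0.5) min, consistent with ¹⁸F):

```python
import math

def half_life_minutes(a0, a1, dt_min):
    """Estimate the half-life from two activity readings taken dt_min
    apart, assuming single-exponential decay A(t) = A0 * exp(-lambda*t):
    t_half = ln(2) / lambda, with lambda = ln(A0/A1) / dt."""
    decay_const = math.log(a0 / a1) / dt_min
    return math.log(2.0) / decay_const

# Illustrative readings: an activity that halves over 110 minutes,
# as expected for 18F (literature half-life ~110 min).
estimated = half_life_minutes(100.0, 50.0, 110.0)
```

A measured half-life far from the expected value would flag a radionuclidic impurity, which is exactly what this test screens for.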

Keywords: automatic radiosynthesis, hypoxic tumors, pharmacopeia, positron emitters, quality requirements

Procedia PDF Downloads 174
361 Legal Considerations in Fashion Modeling: Protecting Models' Rights and Ensuring Ethical Practices

Authors: Fatemeh Noori

Abstract:

The fashion industry is a dynamic and ever-evolving realm that continuously shapes societal perceptions of beauty and style. Within this industry, fashion modeling plays a crucial role, acting as the visual representation of brands and designers. However, behind the glamorous façade lies a complex web of legal considerations that govern the rights, responsibilities, and ethical practices within the field. This paper aims to explore the legal landscape surrounding fashion modeling, shedding light on key issues such as contract law, intellectual property, labor rights, and the increasing importance of ethical considerations in the industry. Fashion modeling involves the collaboration of various stakeholders, including models, designers, agencies, and photographers. To ensure a fair and transparent working environment, it is imperative to establish a comprehensive legal framework that addresses the rights and obligations of each party involved. One of the primary legal considerations in fashion modeling is the contractual relationship between models and agencies. Contracts define the terms of engagement, including payment, working conditions, and the scope of services. This section will delve into the essential elements of modeling contracts, the negotiation process, and the importance of clarity to avoid disputes. Models are not just individuals showcasing clothing; they are integral to the creation and dissemination of artistic and commercial content. Intellectual property rights, including image rights and the use of a model's likeness, are critical aspects of the legal landscape. This section will explore the protection of models' image rights, the use of their likeness in advertising, and the potential for unauthorized use. Models, like any other professionals, are entitled to fair and ethical treatment. This section will address issues such as working conditions, hours, and the responsibility of agencies and designers to prioritize the well-being of models. 
Additionally, it will explore the global movement toward inclusivity, diversity, and the promotion of positive body image within the industry. The fashion industry has faced scrutiny for perpetuating harmful standards of beauty and fostering a culture of exploitation. This section will discuss the ethical responsibilities of all stakeholders, including the promotion of diversity, the prevention of exploitation, and the role of models as influencers for positive change. In conclusion, the legal considerations in fashion modeling are multifaceted, requiring a comprehensive approach to protect the rights of models and ensure ethical practices within the industry. By understanding and addressing these legal aspects, the fashion industry can create a more transparent, fair, and inclusive environment for all stakeholders involved in the art of modeling.

Keywords: fashion modeling contracts, image rights in modeling, labor rights for models, ethical practices in fashion, diversity and inclusivity in modeling

Procedia PDF Downloads 46
360 Preliminary Evaluation of Echinacea Species by UV-VIS Spectroscopy Fingerprinting of Phenolic Compounds

Authors: Elena Ionescu, Elena Iacob, Marie-Louise Ionescu, Carmen Elena Tebrencu, Oana Teodora Ciuperca

Abstract:

Echinacea species (Asteraceae) have received global attention because they are widely used for the treatment of cold, flu and upper respiratory tract infections. Echinacea species contain a great variety of chemical components that contribute to their activity. The most important components responsible for the biological activity are those with high molecular weight, such as polysaccharides, polyacetylenes, highly unsaturated alkamides and caffeic acid derivatives. The principal factors that may influence the chemical composition of Echinacea include the species and the part of the plant used (aerial parts or roots). In recent years the market for Echinacea has grown rapidly, and so have cases of adulteration/replacement, especially for Echinacea root. The identification of the presence or absence of such biomarkers provides information for the safe use of Echinacea species in the food supplements industry. The aim of the study was the preliminary evaluation and fingerprinting by UV-VISIBLE spectroscopy of biomarkers, in terms of content in phenolic derivatives, of some Echinacea species (E. purpurea, E. angustifolia and E. pallida) for identification and authentication of the species. The steps of the study were: (1) preparation of samples (extracts) from Echinacea species (non-hydrolyzed and hydrolyzed ethanol extracts); (2) preparation of samples of reference substances (polyphenol acids: caftaric acid, caffeic acid, chlorogenic acid, ferulic acid; flavonoids: rutoside, hyperoside, isoquercitrin and their aglycones: quercitrin, quercetol, luteolin, kaempferol and apigenin); (3) identification of specific absorption at wavelengths between 200-700 nm; (4) identification of the phenolic compounds from Echinacea species based on spectral characteristics and specific absorption, each class of compounds corresponding to a maximum absorption in the UV spectrum. The phytochemical compounds were identified at specific wavelengths between 200-700 nm. The absorption intensities were measured. 
The results proved that the ethanolic extract showed absorption peaks attributed to: phenolic compounds (free phenolic acids and phenolic acid derivatives), registered between 220-280 nm; compounds with unsymmetrical chemical structures (caffeic acid, chlorogenic acid, ferulic acid), with a maximum absorption peak and an absorption "shoulder" that may be due to substitution with hydroxyl or methoxy groups; and flavonoid compounds (in free form or as glycosides) between 330-360 nm, due to the double bond in position 2,3 and the carbonyl group in position 4 of flavonols. UV spectra showed two major absorption peaks (quercetin glycoside, rutin, etc.). The results obtained by UV-VIS spectroscopy revealed the presence of phenolic derivatives such as cicoric acid (240 nm), caftaric acid (329 nm), caffeic acid (240 nm), rutoside (205 nm), quercetin (255 nm) and luteolin (235 nm) in all three species of Echinacea. Echinacoside is absent. This profile, together with the absence of the phenolic compound echinacoside, leads to the conclusion that the species harvested as Echinacea angustifolia and Echinacea pallida are in fact also Echinacea purpurea. Preliminary fingerprinting of Echinacea species through correspondence with the phenolic derivatives profile can thus be achieved by UV-VIS spectroscopic investigation, which is an adequate technique for the preliminary identification and authentication of Echinacea in medicinal herbs.
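The fingerprinting step above amounts to matching observed absorption maxima against reference wavelengths for each marker compound. A minimal sketch using the wavelengths reported in the abstract (the matching tolerance is our assumption, not a value from the study):

```python
# Reference absorption maxima (nm) for the phenolic markers, as reported above.
REFERENCE_PEAKS_NM = {
    "cicoric acid": 240,
    "caftaric acid": 329,
    "caffeic acid": 240,
    "rutoside": 205,
    "quercetin": 255,
    "luteolin": 235,
}

def match_compounds(observed_peaks_nm, tol_nm=3.0):
    """Return the marker compounds whose reference absorption maximum lies
    within tol_nm of any observed peak in the measured spectrum."""
    hits = set()
    for name, ref in REFERENCE_PEAKS_NM.items():
        if any(abs(peak - ref) <= tol_nm for peak in observed_peaks_nm):
            hits.add(name)
    return hits
```

Note that peak position alone cannot separate compounds with coincident maxima (cicoric and caffeic acid both at 240 nm), which is why the full spectral shape is needed for a definitive assignment.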

Keywords: Echinacea species, fingerprinting, phenolic compounds, UV-VIS spectroscopy

Procedia PDF Downloads 232
359 Electret: A Solution of Partial Discharge in High Voltage Applications

Authors: Farhina Haque, Chanyeop Park

Abstract:

The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies have enabled the dynamic control of power in medium- to high-voltage systems. Although WBG semiconductors outperform conventional silicon-based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium- and high-voltage applications. PD, which occurs actively in voids, triple points, and air gaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the application. Hence, it is critical to mitigate PD. Sharp edges, air gaps, triple points, and bubbles are common defects that exist in any medium- to high-voltage device. The defects are created during the manufacturing processes of the devices and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling the defects. A contemporary approach to mitigating PD by neutralizing electric fields in high power density applications is introduced in this study. To neutralize the locally enhanced electric fields that occur around triple points, air gaps, sharp edges, and bubbles, electrets are developed and incorporated into high-voltage applications. Electrets are dielectric materials that emit electric fields from electrical charges embedded on their surface and in the bulk. In this study, electrets were fabricated by electrically charging polyvinylidene difluoride (PVDF) films using the widely used triode corona discharge method. 
To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments were conducted on both charged and uncharged PVDF films under square voltage stimuli representing PWM waveforms. In addition to single-layer electrets, multiple layers of electrets were also tested to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that an ultimate solution to this decades-long dielectric challenge would be possible with further developments in the fabrication process of electrets.
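Why gaseous defects discharge first, and how an electret's counter-field helps, can be illustrated with a simplified one-dimensional series-dielectric model. This sketch (with made-up dimensions and an idealized uniform electret field) is our illustration, not the authors' analysis:

```python
def gap_field(v_applied, d_gap, d_diel, eps_r):
    """Electric field (V/m) inside an air gap in series with a dielectric
    slab under applied voltage v_applied, from continuity of electric
    displacement across the interface: E_gap = V / (d_gap + d_diel/eps_r)."""
    return v_applied / (d_gap + d_diel / eps_r)

def net_gap_field(v_applied, d_gap, d_diel, eps_r, e_electret):
    """Net field in the gap when an electret contributes an opposing
    field e_electret (V/m); PD is suppressed while the net field stays
    below the breakdown strength of the gas."""
    return gap_field(v_applied, d_gap, d_diel, eps_r) - e_electret
```

Because eps_r > 1 for any solid insulation, the gap field always exceeds the average field V/(d_gap + d_diel), which is the field-enhancement mechanism the electret counter-field is meant to neutralize.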

Keywords: electrets, high power density, partial discharge, triode corona discharge

Procedia PDF Downloads 184
358 Upon Poly(2-Hydroxyethyl Methacrylate-Co-3, 9-Divinyl-2, 4, 8, 10-Tetraoxaspiro (5.5) Undecane) as Polymer Matrix Ensuring Intramolecular Strategies for Further Coupling Applications

Authors: Aurica P. Chiriac, Vera Balan, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Elena Stoleru, Loredana E. Nita, Iordana Neamtu, Alina Diaconu, Liliana Mititelu-Tartau

Abstract:

The interest in studying ‘smart’ materials is entirely justified, and in this context investigations were carried out on poly(2-hydroxyethyl methacrylate-co-3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane), a macromolecular compound with sensitivity to pH and temperature, gel formation capacity, binding properties, amphiphilicity, and good oxidative and thermal stability. Physico-chemical characteristics in terms of molecular weight, temperature-sensitive abilities and thermal stability, as well as rheological, dielectric and spectroscopic properties, were evaluated in correlation with further coupling capabilities. Differential scanning calorimetry indicated a Tg at 36.6 °C and a melting point at Tm = 72.8 °C for the studied copolymer, and up to 200 °C two exothermic processes (at 99.7 °C and 148.8 °C) were registered, with weight losses of about 4% and 19.27%, respectively, which indicate thermal decomposition processes (and not thermal transition phenomena) owing to scission of the functional groups and breakage of the macromolecular chains. At the same time, the rheological studies (rotational tests) confirmed the non-Newtonian shear-thinning fluid behavior of the copolymer solution. The dielectric properties of the copolymer were evaluated in order to investigate the relaxation processes; two relaxation processes below the Tg were registered and attributed to localized motions of polar groups from the macromolecular side chains, or parts of them, without disturbing the main chains. According to the literature, and confirmed by our investigations, the β-relaxation is assigned to the rotation of the ester side group and the γ-relaxation corresponds to the rotation of the hydroxymethyl side groups. 
Fluorescence spectroscopy confirmed the copolymer structure, with the spiroacetal moiety adopting a more stable, lower-energy axial conformation capable of specific interactions with molecules from the environment, phenomena underlined by the different shapes of the emission spectra of the copolymer. The copolymer was also used as a template for the incorporation of indomethacin as a model drug, and the biocompatible character of the complex was confirmed. The release behavior of the bioactive compound was dependent on the copolymer matrix composition, with increasing amounts of the 3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane comonomer attenuating the drug release. At the same time, the in vivo studies did not show significant differences in leucocyte formula elements, GOT, GPT and LDH levels, or immune parameters (OC, PC, and BC) between the control mice group and the groups treated with the copolymer samples, with or without drug, attesting to the biocompatibility of the polymer samples. The investigation of the physico-chemical characteristics of poly(2-hydroxyethyl methacrylate-co-3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane) in terms of temperature-sensitive abilities and rheological and dielectric properties brings useful information for further specific uses of this polymeric compound.
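The non-Newtonian shear-thinning behavior observed in the rotational tests is commonly summarized with a power-law (Ostwald-de Waele) model, eta = K · (shear rate)^(n-1), where n < 1 indicates shear-thinning. A minimal sketch with synthetic values, illustrating the model rather than fitting the study's data:

```python
import math

def power_law_index(gdot1, eta1, gdot2, eta2):
    """Flow behavior index n of a power-law fluid eta = K * gdot**(n - 1),
    recovered from two (shear rate, apparent viscosity) measurements:
    n - 1 = log(eta2/eta1) / log(gdot2/gdot1)."""
    return 1.0 + math.log(eta2 / eta1) / math.log(gdot2 / gdot1)

def is_shear_thinning(gdot1, eta1, gdot2, eta2):
    """A power-law fluid is shear-thinning when n < 1 (viscosity drops
    as the shear rate rises)."""
    return power_law_index(gdot1, eta1, gdot2, eta2) < 1.0
```

In practice n and K are fitted by linear regression of log(eta) against log(shear rate) over the whole flow curve, not just two points.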

Keywords: bioapplications, dielectric and spectroscopic properties, dual sensitivity at pH and temperature, smart materials

Procedia PDF Downloads 266
357 Knowledge Management in the Tourism Industry in Project Management Paradigm

Authors: Olga A. Burukina

Abstract:

Tourism is a complex socio-economic phenomenon, partly regulated by national tourism industries. The sustainable development of tourism in a region, country or tourist destination depends on a number of factors (political, economic, social, cultural, legal and technological), the understanding and correct interpretation of which is invariably anthropocentric. It follows that, for the successful functioning of a tour operating company, it is necessary to ensure its sustainable development. Sustainable tourism is defined as tourism that fully considers its current and future economic, social and environmental impacts, taking into account the needs of the industry, the environment and the host communities. For a business enterprise, sustainable development means adopting business strategies and activities that meet the needs of the enterprise and its stakeholders today while protecting, sustaining and enhancing the human and natural resources that will be needed in the future. In addition to a systemic approach to the analysis of tourist destinations, each tourism project can and should be considered as a system characterized by a very high degree of variability, since each particular case of its implementation differs from the previous and subsequent ones, sometimes in a cardinal way. At the same time, it is important to understand that this variability is predominantly of anthropogenic nature (except for force majeure situations, which are considered separately). Knowledge management is the process of creating, sharing, using and managing the knowledge and information of an organization. It refers to a multidisciplinary approach to achieving organisational objectives by making the best use of knowledge. Knowledge management is seen as a key systems component that allows an organization to obtain, store, transfer, and maintain information and knowledge over the long term. 
The study aims, firstly, to identify (1) the dynamic changes in the Italian travel industry in the last 5 years before the COVID-19 pandemic, which can be considered within the scope of force majeure circumstances, (2) the impact of the pandemic on the industry, and (3) the efforts required to restore it; and secondly, to determine how project management tools can help improve knowledge management in tour operating companies to maintain their sustainability, diminish potential risks and restore their pre-pandemic performance level as soon as possible. The pilot research is based upon a systems approach and employed a pilot survey, semi-structured interviews, prior research analysis (i.e., a literature review), comparative analysis, cross-case analysis, and modelling. The results obtained are encouraging: PM tools can improve knowledge management in tour operating companies and secure the more sustainable development of the Italian tourism industry based on proper knowledge management and risk management.

Keywords: knowledge management, project management, sustainable development, tourism industry

Procedia PDF Downloads 140
356 Developing an Online Application for Mental Skills Training and Development

Authors: Arjun Goutham, Chaitanya Sridhar, Sunita Maheshwari, Robin Uthappa, Prasanna Gopinath

Abstract:

In line with the growth of the sporting industry, the number of people playing and competing in sports is growing rapidly across the globe. However, the number of sports psychology experts is not growing at a similar rate, especially in the Asian and, even more so, the Indian context. Hence, access to actionable mental training solutions specific to individual athletes is limited. Also, the time constraints athletes face due to their intense training schedules make one-on-one sessions difficult. One means of bridging that gap is technology. Technology makes individualization possible. It allows easy access to specific, qualitative content and information and provides a medium to place individualized assessments, analysis and solutions directly into an athlete's hands. This makes mental training awareness, education, and real-time actionable solutions possible for athletes in spite of the limited number of sports psychology experts in their region. Furthermore, many athletes are hesitant to seek support due to the stigma of appearing weak; such individuals would prefer a more discreet way. Athletes with strong mental performance tend to produce better results. The mobile application helps equip athletes to assess and develop their mental strategies directed towards improving performance on an ongoing basis. When athletes understand their strengths and limitations in their mental application, they can focus specifically on applying the strategies that work and improving in zones of limitation. With reports, coaches get to understand the unique inner workings of an athlete and can utilize the data and analysis to coach them with better precision and use coaching styles and communication that suit them better. 
Systematically capturing data and supporting athletes (with individual-specific solutions) or teams with assessment, planning, instructional content, actionable tools and strategies, reviews of mental performance, and the achievement of objectives and goals facilitates consistent mental skills development at all sporting stages of an athlete's career. The mobile application will help athletes recognize and align with their stable attributes, such as their personalities, learning and execution modalities, and the challenges and requirements of their sport, and help develop dynamic attributes, like states, beliefs, motivation levels and focus, with practice and training. It will provide measurable analysis on a regular basis and help them stay aligned with their objectives and goals. The solutions are based on researched areas of influence on sporting performance, individually or in teams.
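The split between stable attributes (captured once) and dynamic attributes (re-assessed regularly) described above maps naturally onto a simple data model. A minimal sketch; the field names are hypothetical illustrations, not the application's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AthleteProfile:
    """Stable attributes: captured at onboarding and rarely updated."""
    name: str
    sport: str
    personality: str
    learning_modality: str

@dataclass
class MentalAssessment:
    """Dynamic attributes: re-assessed regularly to track development."""
    motivation_level: int  # e.g. self-rated 1-10
    focus_score: int       # e.g. self-rated 1-10
    notes: str = ""

@dataclass
class AthleteRecord:
    """One athlete's profile plus their chronological assessment history."""
    profile: AthleteProfile
    assessments: list = field(default_factory=list)

    def latest(self):
        """Most recent assessment, or None if none recorded yet."""
        return self.assessments[-1] if self.assessments else None
```

Storing assessments chronologically is what makes the "measurable analysis on a regular basis" possible: each new assessment can be compared against the athlete's own baseline rather than a population norm.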

Keywords: athletes, mental training, mobile application, performance, sports

Procedia PDF Downloads 246
355 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. The three key design elements used in MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Third, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined. 
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements placed on sensors and actuators by the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic cases, where robotic arms and floating robots are used, respectively. Finally, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
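The receding-horizon idea at the core of MPC can be shown in a minimal form: solve a finite-horizon problem, apply only the first control move, then re-solve from the new state. The sketch below handles the unconstrained scalar linear case via a backward Riccati recursion (omitting the input/output constraints that motivate the QP machinery discussed above); the system and weights are arbitrary illustrative values:

```python
def mpc_first_move(x, a, b, q, r, horizon):
    """First control move of an unconstrained MPC for x+ = a*x + b*u,
    minimizing sum(q*x^2 + r*u^2) over the horizon, computed by a
    backward Riccati recursion (terminal weight taken equal to q)."""
    p = q       # cost-to-go weight at the end of the horizon
    gain = 0.0
    for _ in range(horizon):
        gain = a * b * p / (r + b * b * p)     # optimal feedback gain
        p = q + a * (a - b * gain) * p         # propagate cost-to-go backward
    return -gain * x

def simulate(x0, steps, a=1.0, b=1.0, q=1.0, r=1.0, horizon=10):
    """Receding-horizon loop: re-solve at every step, apply the first move."""
    x = x0
    for _ in range(steps):
        u = mpc_first_move(x, a, b, q, r, horizon)
        x = a * x + b * u
    return x
```

A flight-representative formulation would replace the scalar recursion with a constrained QP over the vector translational/rotational dynamics, which is where the computation-time questions examined in the paper arise.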

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 92
354 Damage Tolerance of Composites Containing Hybrid, Carbon-Innegra, Fibre Reinforcements

Authors: Armin Solemanifar, Arthur Wilkinson, Kinjalkumar Patel

Abstract:

Carbon fibre (CF)-polymer laminate composites have very low densities (approximately 40% lower than aluminium), high strength and high stiffness, but in terms of toughness they often require modification. For example, adding rubber or thermoplastic toughening agents is a common way of improving the interlaminar fracture toughness of initially brittle thermoset composite matrices. The main aim of this project was to toughen CF-epoxy resin laminate composites using hybrid CF fabrics incorporating Innegra™, a commercial highly-oriented polypropylene (PP) fibre in which more than 90% of the crystal orientation is parallel to the fibre axis. In this study, the damage tolerance of hybrid (carbon-Innegra, CI) composites was investigated. Laminate composites were produced by resin infusion using: pure CF fabric; fabrics with different ratios of commingled CI; and two different types of pure Innegra fabric (Innegra 1 and Innegra 2). Dynamic mechanical thermal analysis (DMTA) was used to measure the glass transition temperature (Tg) of the composite matrix and values of flexural storage modulus versus temperature. Mechanical testing included drop-weight impact, compression-after-impact (CAI), and interlaminar (short-beam) shear strength (ILSS). Ultrasonic C-scan imaging was used to determine the impact damage area, and scanning electron microscopy (SEM) to observe the fracture mechanisms that occur during failure of the composites. For all composites, 8 layers of fabric were used with a quasi-isotropic sequence of [-45°, 0°, +45°, 90°]s. DMTA showed the Tg of all composites to be approximately the same (123 ± 3 °C) and the flexural storage modulus (before the onset of Tg) to be highest for the pure CF composite and lowest for the Innegra 1 and 2 composites. 
The short-beam shear strength of the commingled composites was higher than that of the other composites, while for the Innegra 1 and 2 composites only inelastic deformation failure was observed during the short-beam test. During impact, the Innegra 1 composite withstood up to 40 J without any perforation, while for the CF composite perforation occurred at 10 J. The rate of reduction in compression strength with increasing impact energy was lowest for the Innegra 1 and 2 composites, while CF showed the highest rate. On the other hand, the compressive strength of the CF composite was the highest of all the composites at all impact energy levels. The predominant failure modes for the Innegra composites observed in cross-sections of fractured specimens were fibre pull-out, micro-buckling, and fibre plastic deformation, while fibre breakage and matrix delamination were the major failures observed in the commingled composites due to the more brittle behaviour of CF. Thus, Innegra fibres toughened the CF composites, but only at the expense of reduced compressive strength.
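The [-45°, 0°, +45°, 90°]s notation used above is symmetric laminate shorthand: the bracketed half-stack is mirrored about the mid-plane to give the full 8-ply sequence. A small helper illustrating the expansion (our sketch, not the authors' tooling):

```python
def symmetric_layup(half_stack):
    """Expand symmetric laminate notation [...]s into the full ply list:
    the half-stack followed by its mirror image about the mid-plane."""
    return list(half_stack) + list(reversed(half_stack))

# The quasi-isotropic sequence used for all composites in the study:
plies = symmetric_layup([-45, 0, 45, 90])  # 8 plies in total
```

A quasi-isotropic stack like this spreads the ply angles evenly through 180°, so the in-plane laminate stiffness is (approximately) direction-independent.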

Keywords: hybrid composite, thermoplastic fibre, compression strength, damage tolerance

Procedia PDF Downloads 275