Search results for: laplace–fourier transformation
245 Using Digital Innovations to Increase Awareness and Intent to Use Depo-Medroxy Progesterone Acetate-Subcutaneous Contraception among Women of Reproductive Age in Nigeria, Uganda, and Malawi
Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu
Abstract:
Introduction: Digital innovations have been useful in supporting a client’s contraceptive user journey from awareness to method initiation. The concept of contraceptive self-care is being promoted globally as a means for achieving universal access to quality contraceptive care; however, information about this approach is limited. An important determinant of the scale of awareness is the message construct, the choice of information channel, and an understanding of the socio-epidemiological dynamics within the target audience. Significant gains have been made recently in expanding the awareness base of DMPA-SC, a relatively new entrant into the family planning method mix. The cornerstone of this success is a multichannel promotion campaign themed Discover your Power (DYP). The DYP campaign combines content marketing across select social media platforms, chatbots, Cyber-IPC, Interactive Voice Response (IVR), and radio campaigns. Methodology: During implementation, the project monitored predefined metrics of awareness and intent, such as the number of persons reached with the messages, the number of impressions, and meaningful engagement (link-clicks). Metrics/indicators are extracted through native insight/analytics tools across the various platforms. The project also enlists community mobilizers (CMs) who go door-to-door and engage women of reproductive age (WRA) to advertise DISC’s online presence and support them to engage with the IVR, the digital companion (chatbot), the Facebook page, and the DiscoverYourPower website. Results: The results showed that the digital platforms recorded 242 million impressions and reached 82 million users with key DMPA-SC self-injection messaging in the three countries. As many as 3.4 million persons engaged with (liked, clicked, shared, or reposted) digital posts, an indication of intention. Conclusion: Digital solutions and innovations are gradually becoming the archetype for the advancement of the self-care agenda. Digital innovations can also be used to increase awareness and normalize contraceptive self-care behavior amongst women of reproductive age if they are made an integral part of reproductive health programming.
Keywords: digital transformation, health systems, DMPA-SC, family planning, self-care
Procedia PDF Downloads 81
244 Transformative Measures in Chemical and Petrochemical Industry Through Agile Principles and Industry 4.0 Technologies
Authors: Bahman Ghorashi
Abstract:
The immense awareness of the global climate change has compelled traditional fossil fuel companies to develop strategies to reduce their carbon footprint and simultaneously consider the production of various sources of clean energy in order to mitigate the environmental impact of their operations. Similarly, supply chain issues, the scarcity of certain raw materials, energy costs as well as market needs, and changing consumer expectations have forced the traditional chemical industry to reexamine their time-honored modes of operation. This study examines how such transformative change might occur through the applications of agile principles as well as industry 4.0 technologies. Clearly, such a transformation is complex, costly, and requires a total commitment on the part of the top leadership and the entire management structure. Factors that need to be considered include organizational speed of change, a restructuring that would lend itself toward collaboration and the selling of solutions to customers’ problems, rather than just products, integrating ‘along’ as well as ‘across’ value chains, mastering change and uncertainty as well as a recognition of the importance of concept-to-cash time, i.e., the velocity of introducing new products to market, and the leveraging of people and information. At the same time, parallel to implementing such major shifts in the ethos, and the fabric of the organization, the change leaders should remain mindful of the companies’ DNA while incorporating the necessary DNA defying shifts. Furthermore, such strategic maneuvers should inevitably incorporate the managing of the upstream and downstream operations, harnessing future opportunities, preparing and training the workforce, implementing faster decision making and quick adaptation to change, managing accelerated response times, as well as forming autonomous and cross-functional teams. Moreover, the leaders should establish the balance between high-value solutions versus high-margin products, fully implement digitization of operations and, when appropriate, incorporate the latest relevant technologies, such as: AI, IIoT, ML, and immersive technologies. This study presents a summary of the agile principles and the relevant technologies and draws lessons from some of the best practices that are already implemented within the chemical industry in order to establish a roadmap to agility. Finally, the critical role of educational institutions in preparing the future workforce for Industry 4.0 is addressed.Keywords: agile principles, immersive technologies, industry 4.0, workforce preparation
Procedia PDF Downloads 106
243 Revealing Thermal Degradation Characteristics of Distinctive Oligo- and Polysaccharides of Prebiotic Relevance
Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár
Abstract:
As natural prebiotic (non-digestible) carbohydrates stimulate the growth of the colon microflora and contribute to maintaining the health of the host, analytical studies aiming to reveal the chemical behavior of these beneficial food components have come to the forefront of interest. Food processing (especially baking) may lead to a significant conversion of the parent compounds, hence it is of utmost importance to characterize the transformation patterns and the plausible decomposition products formed by thermal degradation. The relevance of this work is confirmed by the widespread use of these carbohydrates (fructo-oligosaccharides, cyclodextrins, raffinose and resistant starch) in the food industry. More and more functional foodstuffs are being developed based on prebiotics as bioactive components. 12 different types of oligosaccharides have been investigated in order to reveal their thermal degradation characteristics. Different carbohydrate derivatives (D-fructose and D-glucose oligomers and polymers) have been exposed to elevated temperatures (150 °C, 170 °C, 190 °C, 210 °C, and 220 °C) for 10 min. An advanced HPLC method was developed and used to identify the decomposition products of carbohydrates formed as a consequence of thermal treatment. Gradient elution was applied with binary solvent elution (acetonitrile, water) through an amine-based carbohydrate column. Evaporative light scattering (ELS) proved to be suitable for the reliable detection of the UV/VIS-inactive carbohydrate degradation products. These experimental conditions and applied advanced techniques made it possible to survey all the formed intermediates. A change in oligomer distribution was established for all studied prebiotics throughout the thermal treatments. The obtained results indicate an increased extent of chain degradation of the carbohydrate moiety at elevated temperatures. The prevalence of oligomers with shorter chain length and even the formation of monomer sugars (D-glucose and D-fructose) might be observed at higher temperatures. Unique oligomer distributions, which have not been described previously, are revealed in the case of each studied, specific carbohydrate, which might result in various prebiotic activities. Resistant starches exhibited high stability when thermally treated. The degradation process has been modeled by a plausible reaction mechanism, in which proton-catalyzed degradation and chain cleavage take place.
Keywords: prebiotics, thermal degradation, fructo-oligosaccharide, HPLC, ELS detection
Procedia PDF Downloads 305
242 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application
Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz
Abstract:
Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum-likelihood-based approach that includes a secondary targeting step that optimizes the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable, E, on an outcome, Y, is defined in terms of comparisons between two potential outcomes as E[Y(E=1) − Y(E=0)]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable and to provide a substantial application. We propose a coding of the initial data in order to binarize the variable of interest. For each category, we obtain a transformation of the non-binary variable of interest into a binary variable, taking the value 1 to indicate the presence of the category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to have a pair of potential outcomes and to oppose one category (or group of categories) to another category (or group of categories). Let E be a non-binary variable of interest. We propose a complete disjunctive coding of the variable E. We transform the initial variable to obtain a set of binary vectors (dummy variables), E = (E_e : e ∈ {1, ..., |E|}), where each vector (variable), E_e, takes the value 0 when its category is not present, and the value 1 when its category is present, which allows us to compute a pairwise TMLE comparing the difference in the outcome between one category and all remaining categories. In order to illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data. Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme-right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider some causal questions (additional causal assumptions), leading to a different coding of E as a set of binary vectors, E = (E_e : e ∈ {2, ..., |E|}), where each vector (variable), E_e, takes the value 0 when the first category (the reference category) is present, and the value 1 when its own category is present, which allows us to apply a pairwise TMLE comparing the difference in the outcome between the first level (fixed) and each remaining level. We confirmed that an increase in the level of education decreases the voting rate for the extreme-right party.
Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation
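The complete disjunctive (one-vs-rest) coding described in this abstract can be sketched in a few lines. The snippet below is a minimal illustration assuming a pandas DataFrame with a categorical education column; all column and variable names are hypothetical, not the study's own code.

```python
import pandas as pd

def disjunctive_coding(df: pd.DataFrame, col: str) -> pd.DataFrame:
    """One-vs-rest coding: for each category e of df[col], add a binary
    column E_e equal to 1 when that category is present and 0 otherwise."""
    out = df.copy()
    for level in sorted(df[col].dropna().unique()):
        out[f"{col}_{level}"] = (df[col] == level).astype(int)
    return out

# Illustrative five-level education variable, as in the CEVIPOF example.
survey = pd.DataFrame({"education": ["none", "primary", "secondary", "bachelor", "master"]})
coded = disjunctive_coding(survey, "education")
# Each education_<level> column can then serve as the binary exposure E_e in a
# pairwise TMLE contrasting that level against all remaining levels.
print(coded.head())
```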
Procedia PDF Downloads 103
241 Systemic Functional Linguistics in the Rhetorical Strategies of Persuasion: A Longitudinal Study of Transitivity and Ergativity in the Rhetoric of Saras’ Sustainability Reports
Authors: Antonio Piga
Abstract:
This study explores the correlation between Systemic Functional Linguistics (SFL) and Critical Discourse Analysis (CDA) as tools for analysing the evolution of rhetoric in the communicative strategies adopted in a company’s Reports on social and environmental responsibility. In more specific terms, transitivity and ergativity- concepts from Systemic Functional Linguistics (SFL) - through the lenses of CDA, are employed as a theoretical means for the analysis of a longitudinal study in the communicative strategies employed by Saras SpA pre- and during the Covid-19 pandemic crisis. Saras is an Italian joint-stock company operating in oil refining and power generation. The qualitative and quantitative linguistic analysis carried out through the use of Sketch Engine software aims to identify and explain how rhetoric - and ideology - is constructed and presented through language use in Saras SpA Sustainability Reports. Specific focus is given to communication strategies to local and global communities and stakeholders in the years immediately before and during the Covid-19 pandemic. The rationale behind the study lies in the fact that 2020 and 2021 have been among the most difficult years since the end of World War II. Lives were abruptly turned upside down by the pandemic, which had grave negative effects on people’s health and on the economy. The result has been a threefold crisis involving health, the economy and social tension, with the refining sector being one of the hardest hit, since the oil refining industry was one of the most affected industries due to the general reduction in mobility and oil consumption brought about by the virus-fighting measures. Emphasis is placed on the construction of rhetorical strategies pre- and during the pandemic crisis using the representational process of transitivity and ergativity (SFL), thus revealing the close relationship between the use language in terms of Social Actors and semantic roles of syntactic transformation on the one hand, and ideological assumptions on the other. The results show that linguistic decisions regarding transitivity and ergativity choices play a crucial role in how effective writing achieves its rhetorical objectives in terms of spreading and maintaining dominant and implicit ideologies and underlying persuasive actions, and that some ideological motivation is perpetuated – if not actually overtly or subtly strengthened - in social-environmental Reports issued in the midst of the Covid-19 pandemic crisis.Keywords: systemic functional linguistics, sustainability, critical discourse analysis, transitivity, ergativity
Procedia PDF Downloads 107
240 Effect of Enzymatic Hydrolysis and Ultrasounds Pretreatments on Biogas Production from Corn Cob
Authors: N. Pérez-Rodríguez, D. García-Bernet, A. Torrado-Agrasar, J. M. Cruz, A. B. Moldes, J. M. Domínguez
Abstract:
The world economy is based on non-renewable fossil fuels such as petroleum and natural gas, which entails their rapid depletion and environmental problems. In EU countries, the objective is that at least 20% of total energy supplies in 2020 should be derived from renewable resources. Biogas, a product of the anaerobic degradation of organic substrates, represents an attractive green alternative for meeting partial energy needs. Nowadays, the trend toward a circular economy model involves the efficient use of residues through their transformation from waste into a new resource. In this sense, the characteristics of agricultural residues (which are plentiful, renewable, and eco-friendly) favour their valorisation as substrates for biogas production. Corn cob is a by-product obtained from maize processing, representing 18% of the total maize mass. The importance of corn cob lies in the high production of this cereal (more than 1 × 10⁹ tons in 2014). Due to its lignocellulosic nature, corn cob contains three main polymers: cellulose, hemicellulose and lignin. The crystalline, highly ordered structures of cellulose and lignin hinder microbial attack and subsequent biogas production. For optimal lignocellulose utilization and to enhance gas production in anaerobic digestion, materials are usually subjected to different pretreatment technologies. In the present work, enzymatic hydrolysis, ultrasound and the combination of both technologies were assayed as pretreatments of corn cob for biogas production. The enzymatic hydrolysis pretreatment was started by adding 0.044 U of Ultraflo® L feruloyl esterase per gram of dry corncob. Hydrolyses were carried out in 50 mM sodium-phosphate buffer, pH 6.0, with a solid:liquid proportion of 1:10 (w/v), at 150 rpm, 40 ºC and in darkness for 3 hours. The ultrasound pretreatment was performed by subjecting corn cob, in 50 mM sodium-phosphate buffer, pH 6.0, with a solid:liquid proportion of 1:10 (w/v), to a power of 750 W for 1 minute. In order to observe the effect of the combination of both pretreatments, some samples were initially sonicated and then enzymatically hydrolysed. In terms of methane production, anaerobic digestion of the corn cob pretreated by enzymatic hydrolysis was positive, achieving 290 L CH₄ kg MV⁻¹ (compared with 267 L CH₄ kg MV⁻¹ obtained with untreated corn cob). Although the use of ultrasound as the only pretreatment proved detrimental (gas production decreased to 244 L CH₄ kg MV⁻¹ after 44 days of anaerobic digestion), its combination with enzymatic hydrolysis was beneficial, reaching the highest value (300.9 L CH₄ kg MV⁻¹). Consequently, the combination of both pretreatments improved biogas production from corn cob.
Keywords: biogas, corn cob, enzymatic hydrolysis, ultrasound
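As a small worked check of the yield figures reported above, the relative changes versus untreated corn cob can be computed directly; the sketch below uses only the values quoted in the abstract.

```python
# Methane yields reported in the abstract, in L CH4 per kg MV
untreated = 267.0
enzymatic = 290.0
ultrasound = 244.0
combined = 300.9

def change(pretreated: float, reference: float = untreated) -> float:
    """Relative change versus the untreated control, in percent."""
    return 100.0 * (pretreated - reference) / reference

print(f"enzymatic hydrolysis:  {change(enzymatic):+.1f}%")   # about +8.6%
print(f"ultrasound alone:      {change(ultrasound):+.1f}%")  # about -8.6%
print(f"combined pretreatment: {change(combined):+.1f}%")    # about +12.7%
```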
Procedia PDF Downloads 267
239 From Vegetarian to Cannibal: A Literary Analysis of a Journey of Innocence in ‘Life of Pi’
Authors: Visvaganthie Moodley
Abstract:
Language use and aesthetic appreciation are integral to meaning-making in prose, as they are in poetry. However, in comparison to poetic analysis, a literary analysis of prose that focuses on linguistics and stylistics is somewhat scarce as it generally requires the study of lengthy texts. Nevertheless, the effect of linguistic and stylistic features in prose as conscious design by authors for creating specific effects and conveying preconceived messages is drawing increasing attention of linguists and literary experts. A close examination of language use in prose can, among a host of literary purposes, convey emotive and cognitive values and contribute to making interpretations about how fictional characters are represented to the imaginative reader. This paper provides a literary analysis of Yann Martel’s narrative of a 14-year-old Indian boy, Pi, who had survived the wreck of a Japanese cargo ship, by focusing on his 227-day journey of tribulations, along with a Bengal tiger, on a lifeboat. The study favours a pluralistic approach blending literary criticism, linguistic analysis and stylistic description. It adopts Leech and Short’s (2007) broad framework of linguistic and stylistic categories (lexical categories, grammatical categories, figures of speech etc. [sic] and context and cohesion) as well as a range of other relevant linguistic phenomena to show how the narrator, Pi, and the author influence the reader’s interpretations of Pi’s character. Such interpretations are made using the lens of Freud’s psychoanalytical theory (which focuses on the interplay of the instinctual id, the ego and the moralistic superego) and Blake’s philosophy of innocence and experience (the two contrary states of the human soul). The paper traces Pi’s transformation from animal-loving, God-fearing vegetarian to brutal animal slayer and cannibal in his journey of survival. By a close examination of the linguistic and stylistic features of the narrative, it argues that, despite evidence of butchery and cannibalism, Pi’s gruesome behaviour is motivated by extreme physiological and psychological duress and not intentional malice. Finally, the paper concludes that the voice of the narrator, Pi, and that of the author, Martel, act as powerful persuasive agents in influencing the reader to respond with a sincere flow of sympathy for Pi and judge him as having retained his innocence in his instinctual need for survival.Keywords: foregrounding, innocence and experience, lexis, literary analysis, psychoanalytical lens, style
Procedia PDF Downloads 169
238 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density
Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita
Abstract:
Herein, the synthesis and characterization of chitosan-gelatin highly porous scaffold reinforced with graphene oxide, and hydroxyapatite (HAp), crosslinked with genipin was targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers with wide applicability due to intrinsic biocompatibility, biodegradability, low antigenicity properties, affordability, and ease of processing. HAp, per its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability of sustaining cellular proliferation by promoting bone-like native micro-media for cell adjustment. Genipin is regarded as a top class cross-linker, while graphene oxide (GO) is viewed as one of the most performant and versatile fillers. The composites with natural bone HAp/biopolymer ratio were obtained by cascading sonochemical treatments, followed by uncomplicated casting methods and by freeze-drying. Their structure was characterized by Fourier Transform Infrared Spectroscopy and X-ray Diffraction, while overall morphology was investigated by Scanning Electron Microscopy (SEM) and micro-Computer Tomography (µ-CT). Ensuing that, in vitro enzyme degradation was performed to detect the most promising compositions for the development of in vivo assays. Suitable GO dispersion was ascertained within the biopolymer mix as nanolayers specific signals lack in both FTIR and XRD spectra, and the specific spectral features of the polymers persisted with GO load enhancement. Overall, correlations between the GO induced material structuration, crystallinity variations, and chemical interaction of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution within follows an auspicious density gradient tuned for hybrid osseous/cartilage matter architectures, which were mirrored in the mice model tests. Hence, the synthesis route of a natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705 and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from European Regional Development Fund 'Investments for your future' under the project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to European Regional Development Fund through Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite
Procedia PDF Downloads 104
237 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy
Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani
Abstract:
Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) are that large areas of a few square centimeters can be measured in minutes and that the light-intensive QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the laser direct infrared imaging (LDIR) 8700 is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and to provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable as, by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct “on-filter” analysis. This study explores the potential to combine the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations in these techniques that can increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation and excellent S/N, the LDIR could also be implemented for various other samples, such as food adulteration, coatings, laminates, fabrics, textiles and tissues. This presentation will highlight a few of them and focus on the benefits of the LDIR versus classical techniques.
Keywords: QCL, automation, microplastics, tissues, infrared, speed
Procedia PDF Downloads 66
236 Research on Innovation Service Based on Science and Technology Resources in Beijing-Tianjin-Hebei
Authors: Runlian Miao, Wei Xie, Hong Zhang
Abstract:
In China, Beijing-Tianjin-Hebei is regarded as a strategically important region because it enjoys the highest level of development in terms of economy, opening up, innovative capacity and population. The integrated development of the Beijing-Tianjin-Hebei region has been increasingly emphasized by the government in recent years. In 2014, it was elevated to one of the national development strategies by the Chinese central government. In 2015, the Coordinated Development Planning Compendium for the Beijing-Tianjin-Hebei Region was approved. Such decisions signify that the Beijing-Tianjin-Hebei region is expected to lead innovation-driven economic development in China. As an essential factor in achieving national innovation-driven development and a significant part of the regional industry chain, the optimization of science and technology resource allocation will exert great influence on regional economic transformation, upgrading and innovation-driven development. However, unbalanced distribution, poor sharing of resources and the existence of isolated information islands have resulted in uneven internal innovation capability, vitality and efficiency, which has impeded the innovation and growth of the whole region. Against this background, integrating and vitalizing regional science and technology resources and then establishing a high-end, fast-responding and precise innovation service system based on regional resources would be of great significance for the integrated development of the Beijing-Tianjin-Hebei region and even for addressing the problem of unbalanced and insufficient development in China. This research uses the methods of literature review and field investigation and applies related theories prevailing at home and abroad, centering on the service path of science and technology resources for innovation. Based on the status quo and problems of regional development in Beijing-Tianjin-Hebei, the author proposes, theoretically, to combine regional economics and new economic geography to explore solutions to the problem of low resource allocation efficiency. Further, the author proposes applying digital maps to resource management and building a platform for information co-construction and sharing. Finally, the author presents the idea of establishing a specific service mode of ‘science and technology plus digital map plus intelligence research plus platform service’ and offers suggestions on a co-building and sharing mechanism of 3 (Beijing, Tianjin and Hebei) plus 11 (important cities in Hebei Province).
Keywords: Beijing-Tianjin-Hebei, science and technology resources, innovation service, digital platform
Procedia PDF Downloads 161
235 Preparation and Characterization of Anti-Acne Dermal Products Based on Erythromycin β-Cyclodextrin Lactide Complex
Authors: Lacramioara Ochiuz, Manuela Hortolomei, Aurelia Vasile, Iulian Stoleriu, Marcel Popa, Cristian Peptu
Abstract:
Local antibiotherapy is one of the most effective acne therapies. Erythromycin (ER) is a macrolide antibiotic that has been topically administered for over 30 years in the form of gel, ointment or hydroalcoholic solution for acne therapy. The use of ER as a base for topical dosage forms raises some technological challenges due to the physicochemical properties of this substance. The main disadvantage of ER is its poor water solubility (2 mg/mL), which limits both formulation using hydrophilic bases and skin permeability. Cyclodextrins (CDs) are biocompatible cyclic oligomers of glucose, with a hydrophobic core and a hydrophilic exterior. CDs are used to improve the bioavailability of drugs by increasing their solubility and/or their rate of dissolution after inclusion of poorly water-soluble substances (such as ER) in the hydrophobic cavity of the CDs. Adding CDs leads to increased solubility and improved stability of the drug substance, increased permeability of substances with low water solubility, decreased toxicity and even a reduction of the active dose as a result of increased bioavailability. CDs increase skin tolerability by reducing the irritant effect of certain substances. We have included ER in lactide-modified β-cyclodextrin in order to improve the therapeutic effect of topically administered ER. The aims of the present study were to synthesise and describe a new prolonged-release complex of ER with lactide-modified β-cyclodextrin (CD-LA_E), to investigate the CD-LA_E complex by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR), and to analyse the effect of the semisolid base on the in vitro and ex vivo release characteristics of ER in the CD-LA_E complex by assessing the permeability coefficient and the release kinetics through fitting to mathematical models. SEM showed that, by complexation, ER changes its crystal structure and enters the amorphous phase. FTIR analysis showed that certain specific bands of some groups in the ER structure shift during the encapsulation process. The structure of the CD-LA_E complex has a molar ratio of 2.12 to 1 between lactide-modified β-cyclodextrin and ER. The three semisolid bases (2% Carbopol, 13% Lutrol 127 and an organogel based on Lutrol and isopropyl myristate) show a good capacity for incorporating the CD-LA_E complex, having an active ingredient content ranging from 98.3% to 101.5% as compared to the declared value of 2% ER. The results of the in vitro dissolution test showed that ER solubility was significantly increased by CD encapsulation. The amount of ER released from the CD-LA_E gels was in the range of 76.23% to 89.01%, whereas gels based on plain ER released a maximum of 26.01% ER. The ex vivo dissolution test confirms the increased ER solubility achieved by complexation, and supports the assumption that the use of this process might increase ER permeability. The highest permeability coefficient was obtained for ER released from the gel based on 2% Carbopol: 33.33 μg/cm²/h in vitro and 26.82 μg/cm²/h ex vivo, respectively. The release kinetics of the complexed ER follows Fickian diffusion, according to the results obtained by fitting the data to the Korsmeyer-Peppas model.
Keywords: erythromycin, acne, lactide, cyclodextrin
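The Korsmeyer-Peppas fit mentioned at the end of the abstract is typically done on the log-linearised form ln(Mt/M∞) = ln k + n·ln t. The sketch below illustrates such a fit with purely illustrative time points and release fractions, not the study’s data.

```python
import numpy as np

# Korsmeyer-Peppas model: Mt/M_inf = k * t**n, linearised as
# ln(Mt/M_inf) = ln(k) + n * ln(t), usually fitted on the early-release portion.
t = np.array([0.5, 1.0, 2.0, 4.0, 6.0])              # hours (illustrative)
release = np.array([0.12, 0.18, 0.27, 0.41, 0.52])   # fraction released (illustrative)

n, ln_k = np.polyfit(np.log(t), np.log(release), 1)  # slope = n, intercept = ln(k)
print(f"n = {n:.2f}, k = {np.exp(ln_k):.3f}")
# A release exponent n close to 0.5 indicates Fickian diffusion, which is the
# mechanism reported above for the complexed erythromycin.
```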
Procedia PDF Downloads 266
234 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding as Industry 4.0 is originally a German strategy with supporting strong policy instruments being utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing Institution of the research papers with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in the industry. The data analytics expertise is not useful unless the manufacturing process information is utilized. A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 of the papers contributed a framework, another 12 of the papers were based on a case study, and 11 of the papers focused on theory. However, there were only three papers that contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.Keywords: analytics, digitization, industry 4.0, manufacturing
Procedia PDF Downloads 111
233 Characterization of a Lipolytic Enzyme of Pseudomonas nitroreducens Isolated from Mealworm's Gut
Authors: Jung-En Kuan, Whei-Fen Wu
Abstract:
In this study, a symbiotic bacterium from the yellow mealworm's (Tenebrio molitor) midgut was isolated based on its ability to grow on minimal-tributyrin medium. After PCR amplification of its 16S rDNA, the resultant nucleotide sequences were analyzed by means of phylogenetic trees. Accordingly, it was designated Pseudomonas nitroreducens D-01. Next, by searching for lipolytic enzymes in its protein data bank, one of the potential lipolytic α/β hydrolases was identified, again using PCR amplification and nucleotide sequencing methods. To construct an expression system for this lipolytic gene in plasmids, target-gene primers carrying C-terminal His-tag sequences were designed. The recombinant lipolytic hydrolase D gene with His-tag nucleotides was successfully cloned into the vector pET21a, in which the lipolytic D gene is under the control of the T7 promoter. After transformation of the resultant plasmids into Escherichia coli BL21 (DE3), an IPTG inducer was used for the induction of the recombinant proteins. The protein products were then purified on a metal-ion affinity column, and the purified proteins were found capable of forming a clear zone on a tributyrin agar plate. In brief, its enzyme activities were determined by the degradation of p-nitrophenyl esters, and the yellow end-product, p-nitrophenol, was measured at OD 405 nm. Specifically, this lipolytic enzyme efficiently targets p-nitrophenyl butyrate. It shows the highest activity at 40 °C and pH 8 in potassium phosphate buffer. In thermal stability assays, the activities of this enzyme drop dramatically when the temperature rises above 50 °C. In metal ion assays, MgCl₂ and NH₄Cl induce the enzyme activities, while MnSO₄, NiSO₄, CaCl₂, ZnSO₄, CoCl₂, CuSO₄, FeSO₄, and FeCl₃ reduce them. NaCl has no effect on its enzyme activities. Most organic solvents, such as hexane, methanol, ethanol, acetone, isopropanol, chloroform, and ethyl acetate, decrease the activities of this enzyme; however, its enzyme activities increase in the presence of DMSO. All tested surfactants, including Triton X-100, Tween 80, Tween 20, and Brij 35, decrease its lipolytic activities. Using the Lineweaver-Burk double-reciprocal method, the enzyme kinetic parameters were determined as Km = 0.488 mM, Vmax = 0.0644 mM/min, and kcat = 3.01 × 10³ s⁻¹, giving a total catalytic efficiency kcat/Km of 6.17 × 10³ mM⁻¹ s⁻¹. Afterwards, based on the phylogenetic analyses, this lipolytic protein was classified as a type IV lipase by its homologous conserved region in this lipase family.
Keywords: enzyme, esterase, lipolytic hydrolase, type IV
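A short numerical check of the kinetic constants quoted above; only the values reported in the abstract are used, and the comment restates the Lineweaver-Burk linearisation from which they were obtained.

```python
# Reported Michaelis-Menten parameters for the lipolytic enzyme
Km = 0.488      # mM
Vmax = 0.0644   # mM/min
kcat = 3.01e3   # 1/s

efficiency = kcat / Km  # (1/s) / mM = mM^-1 s^-1
print(f"kcat/Km = {efficiency:.2e} mM^-1 s^-1")  # ~6.17e3, matching the abstract

# Lineweaver-Burk (double-reciprocal) form used to obtain Km and Vmax:
# 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax, so plotting 1/v against 1/[S]
# gives Km/Vmax as the slope and 1/Vmax as the intercept.
```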
Procedia PDF Downloads 133
232 (Mis) Communication across the Borders: Politics, Media, and Public Opinion in Turkey
Authors: Banu Baybars Hawks
Abstract:
To date, academic attention in the social sciences remains inadequate with regard to research and analysis of public opinion in Turkey. Most of the existing research has assessed public opinion during political election periods. Therefore, it is of great interest to find out what the public thinks about current issues in Turkey, and how to interpret the results so as to reveal whether they have any reflections on the social, political, and cultural structure of the country. Without timely feedback from public surveys, various programs for improving different services and institutions functioning in the country might not achieve their expected goal, nor can decisions about which programs to implement be made rationally. Additionally, the information gathered may yield important insights not only into the public's opinion regarding the current agenda in Turkey, but also into the correlates shaping public policies. Agenda-setting studies, including agenda-building, agenda melding, reversed agenda-setting and information diffusion studies, will be used to explain the roles of factors and actors in the formation of public opinion in Turkey. Knowing the importance of the public agenda in the agenda-setting and agenda-building process, this paper aims to reveal the social and political tendencies of the Turkish public. For that purpose, a survey will be carried out in December of 2014 to determine the social and political trends in Turkey for that same year. The subjects for the study, which utilizes a questionnaire in one-on-one interviews, will include 1,000 individuals aged 18 years and older from 26 cities representing the general population. A stratified random sampling frame will be used. The topics covered by the survey include: The most important current problem in Turkey; the Economy; Terror; Approaches to the Kurdish Issue; Evaluations of the Government and Opposition Parties; Evaluations of Institutional Efficiency; Foreign Policy; the Judicial System/Constitution; Democracy and the Media; and Social Relations/Life in Turkey. Since the beginning of the 21st century, Turkey has been undergoing a rapid transformation. The reflections of the changes can be seen in all areas, from economics to politics. It is my hope that the findings of this study may shed light on the important aspects of institutions, the variables setting the agenda, and the formation process of public opinion in Turkey.
Keywords: public opinion, media, agenda setting, information diffusion, government, freedom, Turkey
Procedia PDF Downloads 467
231 Gene Expression and Staining Agents: Exploring the Factors That Influence the Electrophoretic Properties of Fluorescent Proteins
Authors: Elif Tugce Aksun Tumerkan, Chris Lowe, Hannah Krupa
Abstract:
Fluorescent proteins are self-sufficient in forming chromophores with a visible wavelength from a three-amino-acid sequence within their own polypeptide structure. The chromophore is a molecule that absorbs a photon of light and exhibits an energy transition equal to the energy of the absorbed photon. Fluorescent proteins (FPs) consist of a chain of 238 amino acid residues composed of 11 beta strands shaped into a cylinder surrounding an alpha helix structure. With a better understanding of the chromophore system and the increasing advances in protein engineering in recent years, the properties of FPs offer the potential for new applications. They have been used as sensors and probes in molecular biology and cell-based research, giving a chance to observe FP-tagged cell localization, structural variation and movement. For clarifying the functional uses of fluorescent proteins, the electrophoretic properties of these proteins are among the most important parameters. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) analysis is commonly used for determining electrophoretic properties. While many techniques are used for determining functionality in protein-based research, SDS-PAGE analysis can only provide a molecular-level assessment of the proteolytic fragments. Before SDS-PAGE analysis, fluorescent proteins need to be successfully purified. Because direct purification of the target FPs from the animal is difficult, gene expression is commonly used, which must be done by transformation with a plasmid. Furthermore, the gel used in electrophoresis and the properties of the staining agents play a key role. In this review, the different factors that have an impact on the electrophoretic properties of fluorescent proteins are explored. Fluorescent protein separation and purification are essential steps before electrophoresis and should be carried out very carefully. For protein purification, the gene expression process and the steps that follow it have a significant function. For successful gene expression, the properties of the bacteria selected for expression and of the plasmid used are essential. Each bacterium has its own characteristics to which gene expression is very sensitive, and the procedure used is also an important factor for fluorescent protein expression. Other important factors are the gel formula and the staining agents used. The gel formula has an effect on the mobilization of specific proteins, and staining with the correct agents is a key step for the visualization of electrophoretic protein bands. The visibility of proteins can change depending on the staining reagents. Overall, this review emphasizes that gene expression and purification have a stronger effect than the electrophoresis protocol and staining agents.
Keywords: cell biology, gene expression, staining agents, SDS-PAGE
Procedia PDF Downloads 194
230 “Japan’s New Security Outlook: Implications for the US-Japan Alliance”
Authors: Agustin Maciel-Padilla
Abstract:
This paper explores the most significant change to Japan’s security strategy since the end of World War II, in particular Prime Minister Fumio Kishida’s government publication, in late 2022, of 3 policy documents (the National Security Strategy [NSS], the National Defense Strategy and the Defense Buildup Program) that basically propose to expand the country’s military capabilities and to increase military spending over a 5-year period. These policies represent a remarkable transformation of Japan’s defense-oriented policy followed since 1946. These proposals have been under analysis and debate since they were announced, as it was also Japan’s historic ambition to strengthening its deterrence capabilities in the context of a more complex regional security environment. Even though this new defense posture has attracted significant international attention, it is far from representing a done deal because of the fact that there is still a long way to go to implement this vision because of a wide variety of political and economic issues. Japan is currently experiencing the most dangerous security environment since the end of World War II, and this situation led Japan to intensify its dialogue with the United States to reflect a re-evaluation of deterrence in the face of a rapidly worsening security environment, a changing balance of power in East Asia, and the arrival of a new era of “great power competition”. Japan’s new documents, for instance, identify China and North Korea’s as posing, respectively, a strategic challenge and an imminent threat. Japan has also noted that Russia’s invasion of Ukraine has contributed to erode the foundation of the international order. It is considered that Russia’s aggression was possible because Ukraine’s defense capability was not enough for effective deterrence. Moreover, Japan’s call for “counterstrike capabilities” results from a recognition that China and North Korea’s ballistic and cruise missiles could overwhelm Japan’s air and missile defense systems, and therefore there is an urgent need to strengthen deterrence and resilience. In this context, this paper will focus on the impact of these changes on the US-Japan alliance. Adapting this alliance to Tokyo’s new ambitions and capabilities could be critical in terms of updating their traditional protection/access to bases arrangement, interoperability and joint command and control issues, as well as regarding the security–economy nexus. While China is Japan’s largest trading partner, and trade between the two has been growing, US-Japan economic relationship has been slower, notwithstanding the fact that US-Japan security cooperation has strengthened significantly in recent years.Keywords: us-japan alliance, japan security, great power competition, interoperability
Procedia PDF Downloads 65
229 The Phenomenology in the Music of Debussy through Inspiration of Western and Oriental Culture
Authors: Yu-Shun Elisa Pong
Abstract:
Music aesthetics related to phenomenology is rarely discussed and still in the ascendant while multi-dimensional discourses of philosophy were emerged to be an important trend in the 20th century. In the present study, a basic theory of phenomenology from Edmund Husserl (1859-1938) is revealed and discussed followed by the introduction of intentionality concepts, eidetic reduction, horizon, world, and inter-subjectivity issues. Further, phenomenology of music and general art was brought to attention by the introduction of Roman Ingarden’s The Work of Music and the Problems of its Identity (1933) and Mikel Dufrenne’s The Phenomenology of Aesthetic Experience (1953). Finally, Debussy’s music will be analyzed and discussed from the perspective of phenomenology. Phenomenology is not so much a methodology or analytics rather than a common belief. That is, as much as possible to describe in detail the different human experience, relative to the object of purpose. Such idea has been practiced in various guises for centuries, only till the early 20th century Phenomenology was better refined through the works of Husserl, Heidegger, Sartre, Merleau-Ponty and others. Debussy was born in an age when the Western society began to accept the multi-cultural baptism. With his unusual sensitivity to the oriental culture, Debussy has presented considerable inspiration, absorption, and echo in his music works. In fact, his relationship with nature is far from echoing the idea of Chinese ancient literati and nature. Although he is not the first composer to associate music with human and nature, the unique quality and impact of his works enable him to become a significant figure in music aesthetics. Debussy’s music tried to develop a quality analogous of nature, and more importantly, based on vivid life experience and artistic transformation to achieve the realm of pure art. Such idea that life experience comes before artwork, either clear or vague, simple or complex, was later presented abstractly in his late works is still an interesting subject worth further discussion. Debussy’s music has existed for more than or close to a century. It has received musicology researcher’s attention as much as other important works in the history of Western music. Among the pluralistic discussion about Debussy’s art and ideas, phenomenological aesthetics has enlightened new ideas and view angles to relook his great works and even gave some previous arguments legitimacy. Overall, this article provides a new insight of Debussy’s music from phenomenological exploration and it is believed phenomenology would be an important pathway in the research of the music aesthetics.Keywords: Debussy's music, music esthetics, oriental culture, phenomenology
Procedia PDF Downloads 274
228 Effects of Evening vs. Morning Training on Motor Skill Consolidation in Morning-Oriented Elderly
Authors: Maria Korman, Carmit Gal, Ella Gabitov, Avi Karni
Abstract:
The main question addressed in this study was whether the time of day at which training is afforded is a significant factor for motor skill ('how-to', procedural knowledge) acquisition and consolidation into long-term memory in the healthy elderly population. Twenty-nine older adults (60-75 years) practiced an explicitly instructed 5-element key-press sequence by repeatedly generating the sequence ‘as fast and accurately as possible’. The contribution of three parameters to acquisition, 24h post-training consolidation, and 1-week retention gains in motor sequence speed was assessed: (a) time of training (morning vs. evening group), (b) sleep quality (actigraphy) and (c) chronotype. All study participants were moderate morning types, according to the Morningness-Eveningness Questionnaire score. All participants had sleep patterns typical of their age, with an average sleep efficiency of ~82% and approximately 6 hours of sleep. The speed of motor sequence performance in both groups improved to a similar extent during the training session. Nevertheless, the evening group expressed small but significant overnight consolidation-phase gains, while the morning group showed only maintenance of the performance level attained at the end of training. By the 1-week retention test, both groups showed similar performance levels, with no significant gains or losses with respect to the 24h test. Changes in the tapping patterns at 24h and 1 week post-training were assessed based on normalized Pearson correlation coefficients using the Fisher z-transformation, in reference to the tapping pattern attained at the end of the training. Significant differences between the groups were found: the evening group showed larger changes in tapping patterns across the consolidation and retention windows. Our results show that morning-oriented older adults effectively acquired, consolidated, and maintained a new sequence of finger movements following both morning and evening practice sessions. However, time of training affected the time-course of skill evolution in terms of performance speed, as well as the re-organization of tapping patterns during the consolidation period. These results are in line with the notion that motor training preceding a sleep interval may be beneficial for long-term memory in the elderly. Evening training should be considered an appropriate time window for motor skill learning in older adults, even in individuals with a morning chronotype.
Keywords: time-of-day, elderly, motor learning, memory consolidation, chronotype
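The Fisher z-transformation used above to compare correlations between tapping patterns is simply z = arctanh(r) = 0.5·ln((1 + r)/(1 − r)). A minimal sketch with illustrative correlation values follows; the specific numbers are not the study's data.

```python
import numpy as np

def fisher_z(r):
    """Fisher z-transformation: z = arctanh(r) = 0.5 * ln((1 + r) / (1 - r)).
    It makes Pearson correlation coefficients approximately normally
    distributed, so they can be averaged and compared across groups."""
    return np.arctanh(np.asarray(r, dtype=float))

# Illustrative correlations of the 24h and 1-week tapping patterns with the
# pattern at the end of training, for two hypothetical participants.
r_24h = [0.82, 0.74]
r_week = [0.69, 0.61]
print(fisher_z(r_24h))   # approx. [1.157 0.950]
print(fisher_z(r_week))  # approx. [0.848 0.709]
```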
Procedia PDF Downloads 134
227 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It supports two main tasks: displaying results by coloring items according to their class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure-preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric. The transformation from a high- to a low-dimensional space is described but not learned. Two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at exactly the same position, making them indistinguishable. This type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with the newly obtained embedding. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
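A back-of-the-envelope illustration of the complexity claim above: embedding k subsets of n/k points each against a shared support costs on the order of n²/k pairwise terms instead of n², and the pairwise memory falls from n² to 2(n/k)². The numbers below are illustrative, not taken from the paper.

```python
# Rough cost comparison using the quadratic scaling of t-SNE as a proxy.
n, k = 100_000, 20

naive_cost = n ** 2                # one joint embedding of all n points
split_cost = k * (n // k) ** 2     # k successive embeddings of n/k points = n**2 / k
naive_memory = n ** 2              # pairwise affinities for the joint embedding
split_memory = 2 * (n // k) ** 2   # current subset plus its support embedding

print(f"compute: {naive_cost:.2e} -> {split_cost:.2e}")      # 1.00e+10 -> 5.00e+08
print(f"memory:  {naive_memory:.2e} -> {split_memory:.2e}")  # 1.00e+10 -> 5.00e+07
```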
Procedia PDF Downloads 143
226 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception
Authors: Păcurar Diana Istina
Abstract:
The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, are the basis of the practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures, some considered art forms from the beginning (i.e., the text), others represented by simple, common objects (e.g., scenographic elements), which, when brought together, can trigger a certain aesthetic perception. The audience is delivered, by the team involved in the performance, a series of auditory and visual signs with which they interact. It is necessary to explain some notions about the physiological support of the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center of extrinsic and entangled stimuli, permanently processes the information received, but even if it is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the “correctness” of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether we refer to visual emotions or auditory emotions, involves the integration of three areas of interest: valence, arousal and context control. In this context, superior human cognitive processes, such as memory, interpretation, learning and the attribution of meanings, help trigger the mechanism of anticipation and, no less important, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental in the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and also to emphasize the importance of the order of using these indices, we structured a mathematical analysis centered on the percentage of signs used in a theater performance.
Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception
Procedia PDF Downloads 89225 A Design Research Methodology for Light and Stretchable Electrical Thermal Warm-Up Sportswear to Enhance the Performance of Athletes against Harsh Environment
Authors: Chenxiao Yang, Li Li
Abstract:
In this decade, the sportswear market has rapidly expanded while numerous sports brands conduct fierce competition to hold their market shares and try to act as leaders in professional competition sports by setting the trends. Thus, various advanced sports equipment is being deeply explored to improve athletes' performance in fierce competitions. Although there is plenty of protective equipment, such as cuffs, running leggings, etc., on the market, there is still a blank in the field of sportswear for the pre-race warm-up period, an important time gap, especially for competitions hosted in cold environments, because there are always time gaps between warm-up and race due to event logistics or unexpected weather factors. Athletes will be exposed to chilly conditions for an unpredictably long period of time. As a consequence, the effects of warm-up will be negated, and competition performance will be degraded. However, reviewing the current market, there is no effective sports equipment to help athletes against this harsh environment, and the rare existing products are so bulky or heavy that they restrict movement. An ideal thermal-protective sportswear should be light, flexible, comfortable, and aesthetic at the same time. Therefore, this design research adopted a textile circular knitting methodology to integrate soft silver-coated conductive yarns (abbreviated SCCYs), elastic nylon yarn, and polyester yarn to develop the proposed electrical thermal sportswear with the strengths aforementioned. Meanwhile, the relationship between heating performance, stretch load, and energy consumption was investigated. Further, a simulation model was established to ensure sufficient warmth and flexibility at a lower energy cost, with optimized production parameters determined. The proposed circular knitting technology and simulation model can be directly applied to guide prototype development to cater to different target consumers' needs and ensure prototypes' safety. On the other hand, high R&D investment and time consumption can be saved. Further, two prototypes, a kneecap and an elbow guard, were developed to facilitate the transformation of research technology into an industrial application and to give a hint of the future blueprint.Keywords: cold environment, silver-coated conductive yarn, electrical thermal textile, stretchable
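As a rough illustration of the kind of relationship investigated above between heating performance, stretch load, and energy consumption, the Python sketch below models a conductive-yarn panel whose resistance grows linearly with strain and whose steady-state temperature rise follows a simple convective balance. The linear resistance-strain law, the function panel_temperature_rise, and every parameter value are hypothetical placeholders, not measured SCCY properties or the study's simulation model.

# Hypothetical heating model for a stretched conductive-yarn panel.
# All parameter values are illustrative, not measured SCCY properties.

def panel_temperature_rise(voltage_v, r0_ohm, strain, k_strain=0.8,
                           h_w_per_m2k=10.0, area_m2=0.01):
    """Return (heating power in W, steady-state temperature rise in K)."""
    resistance = r0_ohm * (1.0 + k_strain * strain)   # assumed linear R(strain)
    power = voltage_v ** 2 / resistance                # Joule heating, P = V^2 / R
    delta_t = power / (h_w_per_m2k * area_m2)          # simple convective balance
    return power, delta_t

for strain in (0.0, 0.1, 0.2, 0.3):                    # 0-30 % stretch
    p, dt = panel_temperature_rise(voltage_v=5.0, r0_ohm=20.0, strain=strain)
    print(f"strain={strain:.0%}  power={p:.2f} W  deltaT={dt:.1f} K")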
Procedia PDF Downloads 269224 Inner and Outer School Contextual Factors Associated with Poor Performance of Grade 12 Students: A Case Study of an Underperforming High School in Mpumalanga, South Africa
Authors: Victoria L. Nkosi, Parvaneh Farhangpour
Abstract:
Often a Grade 12 certificate is perceived as a passport to tertiary education and the minimum requirement to enter the world of work. In spite of its importance, many students do not reach this milestone in South Africa. It is important to find out why so many students still fail in spite of the transformation in the education system in the post-apartheid era. Given the complexity of education and its context, this study adopted a case study design to examine one historically underperforming high school in Bushbuckridge, Mpumalanga Province, South Africa in 2013. The aim was to gain an understanding of the inner and outer school contextual factors associated with the high failure rate among Grade 12 students. Government documents and reports were consulted to identify factors in the district and the village surrounding the school, and a student survey was conducted to identify school, home, and student factors. A randomly sampled half of the Grade 12 student population (53 students) participated in the survey, and the quantitative data were analyzed using descriptive statistical methods. The findings showed that a host of factors is at play. The school is located in a village within a municipality which has been one of the poorest three municipalities in South Africa and has the lowest Grade 12 pass rate in the Mpumalanga province. Moreover, over half of the families of the students are headed by single parents, 43% are unemployed, and the majority have a low level of education. In addition, most families (83%) do not have basic study materials such as a dictionary, books, tables, and chairs. A significant number of students (70%) are over-aged (+19 years old); close to half of them (49%) are grade repeaters. The school itself lacks essential resources, namely computers, science laboratories, a library, and enough furniture and textbooks. Moreover, teaching and learning are negatively affected by the teachers' occasional absenteeism, inadequate lesson preparation, and poor communication skills. Overall, the continuous low performance of students in this school mirrors the vicious circle of multiple negative conditions present within and outside of the school. The complexity of factors associated with the underperformance of Grade 12 students in this school calls for a multi-dimensional intervention from government and stakeholders. One important intervention should be the placement of over-aged students and grade repeaters in suitable educational institutions for the benefit of other students.Keywords: inner context, outer context, over-aged students, vicious cycle
Procedia PDF Downloads 201223 Examination of the South African Fire Legislative Framework
Authors: Mokgadi Julia Ngoepe-Ntsoane
Abstract:
The article aims to make a case for a legislative framework for the fire sector in South Africa. A robust legislative framework is essential for empowering those with an obligatory mandate within the sector. This article contributes to the body of knowledge in the field of policy reviews, particularly with regard to the legal framework. It has been observed over time that scholarly contributions in this field are limited. Document analysis was the methodology selected for the investigation of the various legal frameworks existing in the country. It has been established that national legislation on the fire industry does not exist in South Africa. From the documents analysed, it was revealed that the sector is dominated by cartels who are exploiting new entrants to the market, particularly SMEs. It is evident that these cartels are monopolising the system, as they have long been operating in it and have turned it into self-owned entities. Commitment to addressing the challenges faced by fire services and creating a framework for the evolving role that fire brigade services are expected to execute in building safer and sustainable communities is vital. Legislation for the fire sector ought to be concluded with immediate effect. The outdated national fire legislation has enabled the monopolisation and manipulation of the system by dominant organisations, which causes painful discrimination against and exploitation of smaller service providers trying to enter the market for trading in that occupation. The barrier to entry bears long-term negative effects on national priority areas such as employment creation, poverty, and others. These monopolisation and marginalisation practices by cartels in the sector call for urgent attention by government because, if left unattended, they will leave many people, particularly women and youth, disadvantaged and frustrated. The downcast syndrome exercised within the fire sector has wreaked havoc and is devastating. This is caused by cartels that have been within the sector for some time, who know the strengths and weaknesses of processes, shortcuts, advantages, and consequences of various actions. These people take advantage of new entrants to the sector, who in turn find it difficult to manoeuvre, find the market dissonant, and end up giving up their good ideas and intentions. There are many pieces of legislation which are industry-specific, such as those for housing, forestry, agriculture, health, security, and the environment, which are used to regulate systems within the institutions involved. Other regulations exist as by-laws for guiding management within the municipalities.Keywords: sustainable job creation, growth and development, transformation, risk management
Procedia PDF Downloads 173222 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study
Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji
Abstract:
Phase change materials (PCM) present attractive features that make them a passive solution for thermal comfort in buildings during summertime. They show a large storage capacity per volume unit in comparison with other structural materials like bricks or concrete. If their use is matched with the peak load periods, they can contribute to the reduction of the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks. Commercial PCMs, such as paraffins, offer a low thermal conductivity, affecting the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general, a design of the unit that optimizes the thermal performance is sought. The material selection is the starting point of the designing stage, and it does not leave plenty of room for optimization. The PCM melting point depends highly on the atmospheric characteristics of the building location; the selection must lie between the maximum and the minimum temperature reached during the day. The geometry of the PCM container and the geometrical distribution of these containers are design parameters as well. They significantly affect the heat transfer, and therefore the phenomena must be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool down the place during daytime, while the melting of the PCM occurs. At night, the PCM must be regenerated to be ready for the next use. When the system is not in service, a minimal amount of thermal exchange is desired. The aforementioned functions result in the presence of sensible and latent heat storage and release; hence, different types of mechanisms drive the heat transfer phenomena. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger. An in-line arrangement was selected as the geometrical distribution of the containers. With the aim of visual identification, the containers' material and a section of the test bench were transparent. Some instruments were placed on the bench for measuring temperature and velocity. The PCM properties were also available through differential scanning calorimetry (DSC) tests. The evolution of the temperature during both cycles, melting and solidification, was obtained. The results showed some phenomena at a local level (tubes) and at an overall level (exchanger). Conduction and convection appeared as the main heat transfer mechanisms. From these results, two approaches to analyze the heat transfer were followed. The first approach described the phenomena in a single tube as a series of thermal resistances, where purely conduction-controlled heat transfer was assumed in the PCM. For the second approach, the temperature measurements were used to find some significant dimensionless numbers and parameters, such as the Stefan, Fourier, and Rayleigh numbers, and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles. The presence of natural convection during melting might be inferred from the influence of the Rayleigh number on the correlations obtained.Keywords: phase change materials, air-PCM exchangers, convection, conduction
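To show what the second analysis approach involves in practice, the Python sketch below computes the Stefan, Fourier, and Rayleigh numbers from PCM properties and a wall-to-melting-point temperature difference. The property values are typical paraffin-like figures used as placeholders, not the DSC results or measurements of this study.

# Dimensionless numbers used in the second analysis approach.
# Property values are illustrative paraffin-like placeholders.
g = 9.81                        # gravitational acceleration, m/s^2

def stefan(cp, delta_t, latent_heat):
    return cp * delta_t / latent_heat                  # Ste = cp*dT / L

def fourier(alpha, time_s, length):
    return alpha * time_s / length ** 2                # Fo = alpha*t / Lc^2

def rayleigh(beta, delta_t, length, nu, alpha):
    return g * beta * delta_t * length ** 3 / (nu * alpha)  # Ra = g*beta*dT*Lc^3/(nu*alpha)

cp = 2100.0                     # specific heat, J/(kg K)
latent = 180e3                  # latent heat of fusion, J/kg
k = 0.2                         # thermal conductivity, W/(m K)
rho = 800.0                     # density, kg/m^3
alpha = k / (rho * cp)          # thermal diffusivity, m^2/s
nu = 5e-6                       # kinematic viscosity (liquid phase), m^2/s
beta = 1e-3                     # thermal expansion coefficient, 1/K
d_tube = 0.03                   # tube diameter as characteristic length, m
dT = 8.0                        # wall-to-melting-point difference, K

print("Ste =", stefan(cp, dT, latent))
print("Fo  =", fourier(alpha, 3600.0, d_tube))
print("Ra  =", rayleigh(beta, dT, d_tube, nu, alpha))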
Procedia PDF Downloads 177221 Chongqing, a Megalopolis Disconnected with Its Rivers: An Assessment of Urban-Waterside Disconnect in a Chinese Megacity and Proposed Improvement Strategies, Chongqing City as a Case Study
Authors: Jaime E. Salazar Lagos
Abstract:
Chongqing is located in southwest China and is becoming one of the most significant cities in the world. Its urban territories and metropolitan-related areas have one of the largest urban populations in China and are partitioned and shaped by two of the biggest and longest rivers on Earth, the Yangtze and Jialing Rivers, making Chongqing a megalopolis intersected by rivers. Historically, Chongqing City enjoyed fundamental connections with its rivers; however, the current urban development of Chongqing City has lost effective integration of the riverbanks within the urban space and structural dynamics of the city. Therefore, there exists a critical lack of physical and urban space conjoined with the rivers, which diminishes the economic, tourist, and environmental development of Chongqing. Using multi-scale satellite-map site verification, the study confirmed the hypothesized urban-waterside disconnect. Collected data demonstrated that the Chongqing urban zone, an area of 5292 square kilometers with a waterfront of 203.4 kilometers, has only 23.49 kilometers of extension (just 11.5%) with a high-quality physical and spatial urban-waterside connection. Compared with other metropolises around the world, this figure represents a significant lack of spatial development along the rivers, an issue that has not been successfully addressed in the last 10 years of urban development. On a macro scale, the study categorized the different kinds of relationships between the city and its riverbanks. This data was then utilized in the creation of an urban-waterfront relationship map that can be a tool for future city planning decisions and real estate development. On a micro scale, we discovered three primary elements that are causing the urban-waterside disconnect: extensive highways along the densest areas and the city center, large private real estate developments that do not provide adequate riverside access, and large industrial complexes that almost completely lack riverside utilization. Finally, as part of the suggested strategies, the study concludes that the most efficient and practical way to improve this situation is to follow the historic master planning of Chongqing and create connective nodes in critical urban locations along the river, a strategy that has been used for centuries to handle the same urban-waterside relationship. Reviewing and implementing this strategy will allow the city to better connect with the rivers, reducing the various impacts of disconnect and urban transformation.Keywords: Chongqing City, megalopolis, nodes, riverbanks disconnection, urban
Procedia PDF Downloads 225220 Geographical Information System and Multi-Criteria Based Approach to Locate Suitable Sites for Industries to Minimize Agriculture Land Use Changes in Bangladesh
Authors: Nazia Muhsin, Tofael Ahamed, Ryozo Noguchi, Tomohiro Takigawa
Abstract:
One of the most challenging issues in achieving sustainable development on food security is land use change. The crisis of land for agricultural production mainly arises from the unplanned transformation of agricultural lands to infrastructure development, i.e., urbanization and industrialization. Land use without sustainability assessment could have an impact on food security and environmental protection. Bangladesh, a densely populated country with limited arable land, is now facing challenges to meet sustainable food security. Agricultural lands are being used for economic growth by establishing industries. The industries are spreading from urban areas to suburban areas and using the agricultural lands. To minimize agricultural land losses to unplanned industrialization, compact economic zones should be identified through a scientific approach. Therefore, the purpose of the study was to find suitable sites for industrial growth by land suitability analysis (LSA) using a geographical information system (GIS) and multi-criteria analysis (MCA). The goal of the study was to emphasize both agricultural lands and industries for sustainable development in land use. The study also attempted to analyze the agricultural land use changes in a suburban area using statistical data on agricultural lands and primary data on the existing industries of the study area. The criteria selected for the LSA were proximity to major roads, proximity to local roads, and distance to rivers, waterbodies, settlements, flood-flow zones, and agricultural lands. The spatial datasets for the criteria were collected from the respective departments of Bangladesh. In addition, the elevation spatial dataset was taken from the SRTM (Shuttle Radar Topography Mission) data source. The criteria were further analyzed with factors and constraints in ArcGIS®. Experts' opinions were applied to weight the criteria according to the analytical hierarchy process (AHP), a multi-criteria technique. The decision rule was set by using the 'weighted overlay' tool to aggregate the factors and constraints with the weights of the criteria. The LSA found that only 5% of the land was most suitable for industrial sites, and only a few compact lands for industrial zones. The developed LSA is expected to help policymakers of land use and urban developers to ensure the sustainability of land use and agricultural production.Keywords: AHP (analytical hierarchy process), GIS (geographic information system), LSA (land suitability analysis), MCA (multi-criteria analysis)
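A minimal sketch of the two computational steps described above, under assumed criteria and pairwise judgments, is given below in Python: deriving AHP weights from an expert comparison matrix (with a consistency check) and aggregating reclassified criterion rasters with a weighted overlay. The 3x3 judgment matrix, the random rasters, and the constraint layer are hypothetical, not the study's data or the ArcGIS® tool itself.

# AHP weighting plus a weighted-overlay aggregation on hypothetical rasters.
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights and consistency ratio of a pairwise matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    i = int(np.argmax(vals.real))
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals.real[i] - n) / (n - 1)                   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.0)  # Saaty's random index
    return w, ci / ri

# hypothetical judgments: road proximity vs. river distance vs. land type
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1 / 3.0, 1.0, 2.0],
                     [1 / 5.0, 1 / 2.0, 1.0]])
weights, cr = ahp_weights(pairwise)
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))

# weighted overlay: criterion rasters already reclassified to a 1-9 suitability scale
rng = np.random.default_rng(0)
rasters = rng.integers(1, 10, size=(3, 100, 100)).astype(float)
constraint = rng.integers(0, 2, size=(100, 100))        # 0 = restricted (e.g. waterbody)
suitability = np.tensordot(weights, rasters, axes=1) * constraint
print("share of cells in the most suitable class:", float((suitability >= 8).mean()))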
Procedia PDF Downloads 263219 Game On: Unlocking the Educational Potential of Games and Entertainment in Online Learning
Authors: Colleen Cleveland, W. Adam Baldowski
Abstract:
In the dynamic realm of online education, the integration of games and entertainment has emerged as a powerful strategy to captivate learners, drive active participation, and cultivate meaningful learning experiences. This abstract presents an overview of the upcoming conference, "Game On," dedicated to exploring the transformative impact of gamification, interactive simulations, and multimedia content in the digital learning landscape. Introduction: The conference aims to blur the traditional boundaries between education and entertainment, inspiring learners of diverse ages and backgrounds to actively engage in their online learning journeys. By leveraging the captivating elements of games and entertainment, educators can enhance motivation, retention, and deep understanding among virtual classroom participants. Conference Highlights: Commencing with an exploration of theoretical foundations drawing from educational psychology, instructional design, and the latest pedagogical research, participants will gain valuable insights into the ways gamified elements elevate the quality of online education. Attendees can expect interactive sessions, workshops, and case studies showcasing best practices and innovative strategies, including game-based assessments and virtual reality simulations. Inclusivity and Diversity: The conference places a strong emphasis on inclusivity, accessibility, and diversity in the integration of games and entertainment for educational purposes. Discussions will revolve around accommodating diverse learning styles, overcoming potential challenges, and ensuring equitable access to engaging educational content for all learners. Educational Transformation: Educators, instructional designers, and e-learning professionals attending "Game On" will acquire practical techniques to elevate the quality of their online courses. The conference promises a stimulating and informative exploration of blending education with entertainment, unlocking the untapped potential of games and entertainment in online education. Conclusion: "Game On" invites participants to embark on a journey that transforms online education by harnessing the power of entertainment. This event promises to be a cornerstone in the evolution of virtual learning, offering valuable insights for those seeking to create a more engaging and effective online educational experience. Join us as we explore new horizons, pushing the boundaries of online education through the fusion of games and entertainment.Keywords: online education, games, entertainment, psychology, therapy, pop culture
Procedia PDF Downloads 50218 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context
Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue
Abstract:
The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people's behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today's marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data to use to differentiate themselves from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company's competitive advantage through smart city solutions. The results of the research contribution provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges the factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts, which define factors for consideration of competitive advantages in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.Keywords: data analytics, smart cities, competitive advantage, internet of things
Procedia PDF Downloads 133217 Spatial and Temporal Analysis of Forest Cover Change with Special Reference to Anthropogenic Activities in Kullu Valley, North-Western Indian Himalayan Region
Authors: Krisala Joshi, Sayanta Ghosh, Renu Lata, Jagdish C. Kuniyal
Abstract:
Throughout the world, monitoring and estimating the changing pattern of forests across diverse landscapes through remote sensing is instrumental in understanding the interactions of human activities and the ecological environment with the changing climate. Forest change detection using satellite imagery has emerged as an important means to gather information on a regional scale. Kullu valley in Himachal Pradesh, India, is situated in a transitional zone between the lesser and the greater Himalayas. Thus, it presents a typical rugged mountainous terrain with moderate to high altitude, which varies from 1200 meters to over 6000 meters. Due to changes in agricultural cropping patterns, urbanization, industrialization, hydropower generation, climate change, tourism, and anthropogenic forest fire, it has undergone a tremendous transformation in forest cover in the past three decades. The loss and degradation of forest cover result in soil erosion, loss of biodiversity including damage to wildlife habitats, degradation of watershed areas, and deterioration of the overall quality of nature and life. A supervised classification of LANDSAT satellite data was performed to assess the changes in forest cover in Kullu valley over the years 2000 to 2020. The Normalized Burn Ratio (NBR) was calculated to discriminate between burned and unburned areas of the forest. Our study reveals that in Kullu valley, forest fire incidents, specifically those due to anthropogenic activities, have been on the rise each subsequent year. The main objective of the present study is, therefore, to estimate the change in the forest cover of Kullu valley, to address the various social aspects responsible for the anthropogenic forest fires, and to assess their impact on the significant changes in regional climatic factors, specifically temperature, humidity, and precipitation, over three decades with the help of satellite imagery and ground data. The main outcome of the paper, we believe, will be helpful for the administration in making a quantitative assessment of the forest cover area changes due to anthropogenic activities and in devising long-term measures for creating awareness among the local people of the area.Keywords: Anthropogenic Activities, Forest Change Detection, Normalized Burn Ratio (NBR), Supervised Classification
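The burned-area discrimination step can be illustrated with a short Python sketch that computes the Normalized Burn Ratio, NBR = (NIR - SWIR2) / (NIR + SWIR2), for a pre-fire and a post-fire scene and thresholds their difference (dNBR). The reflectance arrays and the 0.1 threshold are illustrative stand-ins for the actual LANDSAT bands and classification used in the study.

# NBR and dNBR on illustrative reflectance arrays (Landsat 8: NIR = band 5, SWIR2 = band 7).
import numpy as np

def nbr(nir, swir2):
    """NBR = (NIR - SWIR2) / (NIR + SWIR2), computed per pixel."""
    nir = nir.astype(float)
    swir2 = swir2.astype(float)
    return (nir - swir2) / np.clip(nir + swir2, 1e-6, None)

rng = np.random.default_rng(1)
nir_pre = rng.uniform(0.2, 0.5, (200, 200))
swir2_pre = rng.uniform(0.05, 0.2, (200, 200))
nir_post = rng.uniform(0.1, 0.4, (200, 200))
swir2_post = rng.uniform(0.1, 0.3, (200, 200))

dnbr = nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)
burned = dnbr > 0.1             # assumed low-severity threshold for illustration
print("burned fraction of the scene:", float(burned.mean()))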
Procedia PDF Downloads 172216 Practice of Social Innovation in School Education: A Study of Third Sector Organisations in India
Authors: Prakash Chittoor
Abstract:
In the recent past, it has been realised, especially in the third sector, that employing social innovation is crucial for achieving viable and long-lasting social transformation. In this context, education is one among many sectors that has opened itself up to such a move, where employing social innovation emerges as key to reaching out to the excluded sections who often fail to get support from either policy or market interventions. In fact, that education is a crucial factor for social development is well understood at both the academic and policy levels. In order to move forward to achieve better results, interventions from multiple sectors may be required, as education's reach cultivates the capabilities and skills of the deprived in order to ensure both market and social participation in the long run. Despite the state's intervention, millions of children are still out of school due to lack of political will, lapses in policy implementation, and the neoliberal intervention of marketization. As a result, the universalisation of elementary education has become an elusive goal for poor and marginalised sections, while the state is under constant pressure from the corporate sector to withdraw from the education sector, which compromises the provision of quality education. At this juncture, the role that third sector organisations play is quite remarkable. They have evolved as key players in the education sector, reaching out to the poor and marginalised in far-flung areas. These organisations work in resource-constrained environments, yet, in order to achieve larger social impact, they adopt various social innovations from time to time to reach out to the unreached. Their attempts are not limited to just approaching unreached children but also extend to retaining them in the schooling system for a long time in order to reap the results for their families and communities. There is a need to highlight the various innovative ways adopted and practiced by third sector organisations in India to achieve the elusive goal of universal access to quality primary education. With this background, the paper primarily attempts to present an in-depth understanding of innovative practices employed by third sector organisations like Isha Vidya through its government school adoption programme in India, where it engages with the government and builds capabilities among government teachers to promote quality state-run schooling with better infrastructure. Further, this paper assesses whether such innovative attempts have succeeded in achieving universal quality education in the areas where it operates and draws implications for state policy.Keywords: school education, third sector organisations, social innovation, market domination
Procedia PDF Downloads 262