Search results for: interest free banking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7302

102 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is to develop an SME-appropriate methodology for efficient, temporarily deployable data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE)-based transmitters, so-called beacons, and smart mobile devices (SMD), e.g. smartphones, as receivers, between which distance data can be measured and from which motion profiles can be derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for interpreting the relative movements of transmitters and receivers on the basis of distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition as well as methods for visualizing relative distance data. Because the database is already categorized by process type, classification methods from the field of supervised learning (e.g. Support Vector Machines) are used. Achieving the necessary data quality requires the selection of suitable methods and filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors that account for possible signal interference sources (columns, pallets), as well as the configuration of the technology used. The parameter settings underlying the respective algorithms have a further significant influence on the result quality of the classification methods, correction models and methods for visualizing the position profiles. Studies have already shown that the accuracy of classification algorithms can be improved by up to 30% through targeted parameter variation; similar potential can be observed when varying the parameters of the methods and filters for signal smoothing. There is therefore increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the signal-smoothing methods, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: one performs the automated assignment of defined process classes to distance data using selected classification algorithms, and the other provides visualization and reporting through a graphical user interface (GUI).
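As an illustration of the processing chain described above (RSSI-based distance estimation, signal smoothing, and supervised classification of process classes), the following Python sketch shows one possible minimal implementation. The log-distance path-loss parameters, the window length, the synthetic data, and the process-class labels are assumptions for illustration only, not values from the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: RSSI in dBm -> distance in metres (assumed parameters)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def moving_average(values, window=5):
    """Smooth RSSI signal variations with a simple moving-average filter."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")

# Synthetic example data: 200 fixed-length windows of 20 RSSI samples each,
# with hypothetical process-class labels (e.g. 0 = transport, 1 = storage).
rng = np.random.default_rng(0)
raw_rssi = rng.normal(-70.0, 4.0, size=(200, 20))
labels = rng.integers(0, 2, size=200)

smoothed = np.apply_along_axis(moving_average, 1, raw_rssi)
features = rssi_to_distance(smoothed)

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # parameters would be tuned, as the abstract stresses
clf.fit(X_train, y_train)
print("hold-out classification accuracy:", clf.score(X_test, y_test))
```

In a real deployment, the features, filter choice and SVM parameters would be varied systematically, since the abstract reports that parameter variation alone can change classification accuracy by up to 30%.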

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 116
101 Introducing Transport Engineering through Blended Learning Initiatives

Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi

Abstract:

Undergraduate students entering university over the last 2 to 3 years tend to have been born in the mid-1990s. This generation of students has been exposed to the internet, and to a desire for and dependency on technology, since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated, complex visual imagery, indicating that visual forms of learning may be more effective than the traditional lecture or discussion formats. Furthermore, post-millennials' perspectives on careers are not focused solely on stability and income but are strongly driven by interest, entrepreneurship and innovation. Accordingly, it is important for educators to acknowledge the generational shift and tailor the delivery of learning material to meet the expectations of the students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering and highway design is fundamental to the progression of the profession from both a practice and a research perspective. Recent developments in technology have transformed the discipline as practitioners and researchers move away from the traditional “pen and paper” approach to methods involving the use of computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn material delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible self-paced learning resources, have become prevalent as a way to provide scalable education that satisfies the expectations of students. This research study involved the development of a series of ‘blended learning’ initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources were aimed at improving the learning environment for a large class of more than 450 students and at ensuring that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks as well as their reception of the material, revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator’s perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available whilst delivering a considerable volume of complex content essential for all undergraduate Civil and Environmental Engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of university courses with large class sizes.

Keywords: blended learning, highway design, teaching, transport planning

Procedia PDF Downloads 139
100 Establishment of Farmed Fish Welfare Biomarkers Using an Omics Approach

Authors: Pedro M. Rodrigues, Claudia Raposo, Denise Schrama, Marco Cerqueira

Abstract:

Farmed fish welfare is a very recent concept, widely discussed among the scientific community. Consumers’ interest in farmed animal welfare standards has increased significantly in recent years, posing a huge challenge to producers, who must maintain an equilibrium between good welfare principles and productivity while simultaneously achieving public acceptance. The major bottleneck of standard aquaculture is that it considerably impairs fish welfare throughout the production cycle and, with it, the quality of fish protein. Welfare assessment in farmed fish is undertaken through the evaluation of fish stress responses. The primary and secondary stress responses include the release of cortisol, and of glucose and lactate, into the bloodstream, respectively; these are currently the most commonly used indicators of stress exposure. However, the reliability of these indicators is highly dubious, due to the high variability of fish responses to an acute stress and the adaptation of the animal to a repetitive chronic stress. Our objective is to use comparative proteomics to identify and validate a fingerprint of proteins that can present a more reliable alternative to the already established welfare indicators. In this way, culture conditions will improve, and there will be a better understanding of the mechanisms and metabolic pathways involved in the welfare of the farmed organism. Due to its high economic importance in Portuguese aquaculture, gilthead seabream was chosen as the species for this study. Protein extracts from gilthead seabream muscle, liver and plasma, from fish reared for a three-month period under optimized culture conditions (control) and induced stress conditions (handling, high densities, and hypoxia), are collected and used to identify a putative fingerprint of fish welfare protein markers using a proteomics approach. Three tanks per condition and three biological replicates per tank are used for each analysis. Briefly, proteins from the target tissue/fluid are extracted using standard established protocols. Protein extracts are then separated using 2D-DIGE (difference gel electrophoresis). Proteins differentially expressed between control and induced stress conditions will be identified by mass spectrometry (LC-MS/MS) using the NCBInr database (taxonomic level: Actinopterygii) and the Mascot search engine. The statistical analysis is performed in the R software environment, using a one-tailed Mann-Whitney U-test (p < 0.05) to assess which proteins are differentially expressed in a statistically significant way. Validation of these proteins will be done by comparing the pattern of genes expressed by RT-qPCR (quantitative reverse transcription polymerase chain reaction) with the proteomic profile. Cortisol, glucose, and lactate are also measured in order to confirm or refute the reliability of these indicators. The liver proteins identified under handling and high-density induced stress conditions are involved in several metabolic pathways, such as primary metabolism (i.e. glycolysis, gluconeogenesis), ammonia metabolism, cytoskeletal proteins, signalling proteins, and lipid transport. Validation of these proteins, as well as identical analyses in muscle and plasma, is underway. Proteomics is a promising high-throughput technique that can be successfully applied to identify putative welfare protein biomarkers in farmed fish.
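To make the statistical step concrete: the authors describe a one-tailed Mann-Whitney U-test (p < 0.05) on differentially expressed proteins, run in the R environment. The sketch below illustrates the same test in Python/SciPy instead, with invented spot-intensity values used purely as placeholders.

```python
from scipy.stats import mannwhitneyu

# Hypothetical normalised 2D-DIGE spot volumes for one protein:
# three tanks x three biological replicates per condition (n = 9 each); values are invented.
control = [1.02, 0.95, 1.10, 0.98, 1.05, 0.99, 1.01, 0.97, 1.04]
handling_stress = [1.31, 1.22, 1.40, 1.18, 1.27, 1.35, 1.25, 1.30, 1.21]

# One-tailed test: is the spot more abundant under the induced stress condition?
u_stat, p_value = mannwhitneyu(handling_stress, control, alternative="greater")
print(f"U = {u_stat}, one-tailed p = {p_value:.4f}")
if p_value < 0.05:
    print("spot would be treated as differentially expressed")
```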

Keywords: aquaculture, fish welfare, proteomics, welfare biomarkers

Procedia PDF Downloads 140
99 Production, Characterisation, and in vitro Degradation and Biocompatibility of a Solvent-Free Polylactic-Acid/Hydroxyapatite Composite for 3D-Printed Maxillofacial Bone-Regeneration Implants

Authors: Carlos Amnael Orozco-Diaz, Robert David Moorehead, Gwendolen Reilly, Fiona Gilchrist, Cheryl Ann Miller

Abstract:

The current gold standard for maxillofacial reconstruction surgery (MRS) utilizes auto-grafted cancellous bone as a filler. This study was aimed at developing a polylactic-acid/hydroxyapatite (PLA-HA) composite suitable for fused-deposition 3D printing. Functionalization of the polymer through the addition of HA was directed at promoting bone-regeneration properties so that the material can rival the performance of cancellous bone grafts in terms of bone-lesion repair. This kind of composite enables the production of MRS implants based on 3D reconstructions from imaging studies – namely computed tomography – for anatomically correct fitting. The present study encompassed in vitro degradation and in vitro biocompatibility profiling for 3D-printed PLA and PLA-HA composites. PLA filament (Verbatim Co.) and Captal S micro-scale hydroxyapatite powder (Plasma Biotal Ltd) were used to produce PLA-HA composites at 5, 10, and 20% HA concentration by weight. These were extruded into 3D-printing filament and processed in a BFB-3000 3D printer (3D Systems Co.) into tensile specimens, which were mechanically tested as per ASTM D638-03. Furthermore, tensile specimens were subjected to accelerated degradation in phosphate-buffered saline solution at 70 °C for 23 days, as per ISO 10993-13:2010. This included monitoring of mass loss (through dry-weighing), crystallinity (through thermogravimetric analysis/differential thermal analysis), molecular weight (through gel-permeation chromatography), and tensile strength. In vitro biocompatibility analysis included cell viability and extracellular matrix deposition, which were assessed both on flat surfaces and on 3D constructs, both produced through 3D printing. Discs of 1 cm in diameter and cubic 3D meshes of 1 cm3 were 3D printed in PLA and PLA-HA composites (n = 6). The samples were seeded with 5000 MG-63 osteosarcoma-like cells, with cell viability followed over 21 days via resazurin reduction assays. As evidence of osteogenicity, collagen and calcium deposition were indirectly estimated through Sirius Red staining and Alizarin Red staining, respectively. Results have shown that 3D-printed PLA loses structural integrity as early as the first day of accelerated degradation, which was significantly faster than the literature suggests. This was reflected in the loss of tensile strength down to untestable brittleness. During degradation, mass loss, molecular weight, and crystallinity behaved similarly to results found in comparable studies of PLA. All composite versions and pure PLA were found to perform equivalently to tissue-culture plastic (TCP) in supporting the seeded cell population. Significant differences (p = 0.05) were found in collagen deposition for higher HA concentrations, with composite samples performing better than pure PLA and TCP. Additionally, per-cell calcium deposition was significantly lower on the 3D meshes when compared to discs of the same material (p = 0.05). These results support the idea that 3D-printable PLA-HA composites are a viable resorbable material for artificial bone-regeneration grafts. The degradation data suggest that 3D printing of these materials – as opposed to other manufacturing methods – might result in faster resorption than currently used PLA implants.

Keywords: bone regeneration implants, 3D-printing, in vitro testing, biocompatibility, polymer degradation, polymer-ceramic composites

Procedia PDF Downloads 147
98 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys

Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz

Abstract:

There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NPs) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, sensors in biomedical technology, etc. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, batteries), the aerospace and stealth industry (radar-absorbing materials, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensors) and the computer hardware industry (data storage). The physical and chemical properties of nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool to predict structural, electronic, magnetic and optical behavior at the atomistic level and consequently to reduce the time needed for designing and developing new materials with novel or enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require huge computational effort to solve the Schrodinger equation for only a few tens of atoms. On the other hand, the molecular dynamics method with appropriate empirical or semi-empirical interatomic potentials can give accurate results for the static and dynamic properties of larger systems in a short span of time. In this study, the structural evolution and the magnetic and electronic properties of Fe-Ni based nanoalloys have been studied by using the molecular dynamics (MD) method in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP). The effects of particle size (in the 2-10 nm range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk alloys and nanoalloys have been investigated by combining the MD simulation method with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both the pairwise interatomic interaction potentials and the electron densities. The structural evolution of Fe-Ni bulk alloys and nanoparticles (NPs) has been studied through the calculation of radial distribution functions (RDF), interatomic distances, coordination numbers and core-to-surface concentration profiles, as well as Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed using a plane-wave basis set with generalized gradient approximation (GGA) exchange and correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of Fe-Ni based alloys in bulk and nanostructured phases. The results of the theoretical modeling and simulations of the structural evolution and the magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature.
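As a minimal illustration of the RDF analysis mentioned above (an assumption for clarity, not the authors' LAMMPS/VASP workflow), the following Python sketch computes a radial distribution function g(r) for a finite cluster of atoms from raw coordinates; the positions, cutoff radius and bin count are placeholders.

```python
import numpy as np

def radial_distribution(positions, r_max=10.0, n_bins=100):
    """g(r) for a finite (non-periodic) cluster of atoms, e.g. a nanoparticle snapshot."""
    n_atoms = len(positions)
    # all pairwise distances, keeping each pair once (upper triangle)
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    pair_d = dists[np.triu_indices(n_atoms, k=1)]

    counts, edges = np.histogram(pair_d, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])
    shell_vol = 4.0 * np.pi * r**2 * (edges[1] - edges[0])
    rho = n_atoms / ((4.0 / 3.0) * np.pi * r_max**3)  # crude number density of the cluster
    g = counts / (0.5 * n_atoms * rho * shell_vol)    # normalise by the ideal-gas expectation
    return r, g

# Toy snapshot: random positions standing in for MD coordinates of a small particle
# (units and values are placeholders, not simulation output).
rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 30.0, size=(500, 3))
r, g = radial_distribution(positions, r_max=15.0)
print("first g(r) peak near r =", r[np.argmax(g)])
```

In practice, the coordinates would come from LAMMPS trajectory frames, and the peaks of g(r) would be read off to obtain interatomic distances and coordination numbers as described in the abstract.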

Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys

Procedia PDF Downloads 231
97 CLOUD Japan: Prospective Multi-Hospital Study to Determine the Population-Based Incidence of Hospitalized Clostridium difficile Infections

Authors: Kazuhiro Tateda, Elisa Gonzalez, Shuhei Ito, Kirstin Heinrich, Kevin Sweetland, Pingping Zhang, Catia Ferreira, Michael Pride, Jennifer Moisi, Sharon Gray, Bennett Lee, Fred Angulo

Abstract:

Clostridium difficile (C. difficile) is the most common cause of antibiotic-associated diarrhea and infectious diarrhea in healthcare settings. Japan has an aging population; the elderly are at increased risk of hospitalization, antibiotic use, and C. difficile infection (CDI). Little is known about the population-based incidence and disease burden of CDI in Japan, although limited hospital-based studies have reported a lower incidence than in the United States. To understand the CDI disease burden in Japan, CLOUD (Clostridium difficile Infection Burden of Disease in Adults in Japan) was developed. CLOUD will derive population-based incidence estimates of the number of CDI cases per 100,000 population per year in Ota-ku (population 723,341), one of the districts of Tokyo, Japan. CLOUD will include approximately 14 of the 28 Ota-ku hospitals, including Toho University Hospital, which is a 1,000-bed tertiary care teaching hospital. During the 12-month patient enrollment period, which is scheduled to begin in November 2018, Ota-ku residents > 50 years of age who are hospitalized at a participating hospital with diarrhea (> 3 unformed stools (Bristol Stool Chart 5-7) in 24 hours) will be actively ascertained, consented, and enrolled by study surveillance staff. A stool specimen will be collected from enrolled patients and tested at a local reference laboratory (LSI Medience, Tokyo) using QUIK CHEK COMPLETE® (Abbott Laboratories), which simultaneously tests specimens for the presence of glutamate dehydrogenase (GDH) and C. difficile toxins A and B. A frozen stool specimen will also be sent to the Pfizer Laboratory (Pearl River, United States) for analysis using a two-step diagnostic testing algorithm that is based on detection of C. difficile strains/spores harboring the toxin B gene by PCR, followed by detection of free toxins (A and B) using a proprietary cell cytotoxicity neutralization assay (CCNA) developed by Pfizer. Positive specimens will be anaerobically cultured, and C. difficile isolates will be characterized by ribotyping and whole-genome sequencing. CDI patients enrolled in CLOUD will be contacted weekly for 90 days following diarrhea onset to describe clinical outcomes, including recurrence, reinfection, and mortality, as well as patient-reported economic, clinical and humanistic outcomes (e.g., health-related quality of life, worsening of comorbidities, and patient and caregiver work absenteeism). Studies will also be undertaken to fully characterize the catchment area to enable population-based estimates. The 12-month active ascertainment of CDI cases among hospitalized Ota-ku residents with diarrhea in CLOUD, together with the characterization of the Ota-ku catchment area, including estimation of the proportion of all hospitalizations of Ota-ku residents that occur in the CLOUD-participating hospitals, will yield population-based CDI incidence estimates, which can be stratified by age group, risk group, and source (hospital-acquired or community-acquired). These incidence estimates will be extrapolated, following age standardization using national census data, to yield CDI disease burden estimates for Japan. CLOUD also serves as a model for studies in other countries that can use the CLOUD protocol to estimate CDI disease burden.
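To make the incidence arithmetic explicit: scaling the cases ascertained in participating hospitals by the estimated share of resident hospitalizations those hospitals cover gives the population-based rate per 100,000 per year. The Python sketch below is a back-of-the-envelope illustration only; the case count and coverage fraction are hypothetical placeholders, not study results.

```python
def incidence_per_100k(cases, hospital_coverage, population, years=1.0):
    """Scale cases seen in participating hospitals up to the whole catchment population.

    hospital_coverage: estimated fraction of all hospitalizations of district residents
    that occur in the participating hospitals (to be measured in the catchment study).
    """
    estimated_total_cases = cases / hospital_coverage
    return 100_000 * estimated_total_cases / (population * years)

# Hypothetical illustration: 90 confirmed CDI cases ascertained over 12 months
# in hospitals covering half of the district's hospitalizations.
print(incidence_per_100k(cases=90, hospital_coverage=0.5, population=723_341))
```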

Keywords: Clostridium difficile, disease burden, epidemiology, study protocol

Procedia PDF Downloads 247
96 Comparative Characteristics of Bacteriocins from Endemic Lactic Acid Bacteria

Authors: K. Karapetyan, F. Tkhruni, A. Aghajanyan, T. S. Balabekyan, L. Arstamyan

Abstract:

Introduction: Globalization of the food supply has created conditions favorable for the emergence and spread of food-borne and especially dangerous pathogens (EDP) in developing countries. The fresh-cut fruit and vegetable industry is searching for alternatives that replace chemical treatments with biopreservative approaches which ensure the safety of processed food products. Antimicrobial compounds of lactic acid bacteria (LAB) possess bactericidal or bacteriostatic activity against intestinal pathogens, spoilage organisms and food-borne pathogens such as Listeria monocytogenes, Staphylococcus aureus and Salmonella. Endemic strains of LAB were isolated, and strains showing a broad spectrum of antimicrobial activity against food-spoiling microorganisms were selected. Genotyping by 16S rRNA sequencing, GS-PCR and RAPD-PCR showed that they were represented by the species Lactobacillus rhamnosus 109, L. plantarum 65, L. plantarum 66 and Enterococcus faecium 64. The LAB strains are deposited in the "Microbial Depository Center" (MDC) of SPC "Armbiotechnology". Methods: LAB strains were isolated from different dairy products from rural households in the highland regions of Armenia. Serially diluted samples were spread on MRS (Merck, Germany) and hydrolyzed milk agar (1.2% w/v). Single colonies of each LAB isolate were individually inoculated into liquid MRS medium and incubated at 37 °C for 24 hours. The culture broth with biomass was centrifuged at 10,000 g for 20 min to obtain cell-free culture (CFC) broth. The antimicrobial substances in the CFC broth were purified by a combination of adsorption-desorption and ion-exchange chromatography methods. Separation of bacteriocins was performed using an HPLC method on an "Avex ODS" C18 column. Mass analysis of the peptides was recorded on an API 4000 instrument in electron ionization mode. The spot-on-lawn method, with the test culture plated in solid medium, was applied. The antimicrobial activity is expressed in arbitrary units (AU/ml). Results: Purification of the CFC broth of the LAB yielded partially purified antimicrobial preparations containing bacteriocins with a broad spectrum of antimicrobial activity. Investigation of their main biochemical properties showed that the inhibitory activity of the preparations is partially reduced after treatment with proteinase K, trypsin and pepsin, suggesting a proteinaceous nature of the bacteriocin-like substances contained in the CFC broth. The preparations preserved their activity after heat treatment (50-121 °C, 20 min) and were stable in the pH range 3-8. The results of SDS-PAGE show that the L. plantarum 66 and Ent. faecium 64 strains each have one bacteriocin (BCN) with maximal antimicrobial activity and an approximate molecular weight of 2.0-3.0 kDa. Two BCNs were obtained from L. rhamnosus 109; mass spectral analysis indicates that these bacteriocins contain peptide bonds and that the molecular weights of BCN 1 and BCN 2 are approximately 1.5 kDa and 700 Da, respectively. Discussion: Our experimental data thus show that the isolated endemic LAB strains are able to produce bacteriocins with high and varied inhibitory activity against a broad spectrum of microorganisms of different taxonomic groups, such as Salmonella sp., Escherichia coli, Bacillus sp., L. monocytogenes, Proteus mirabilis, Staph. aureus and Ps. aeruginosa. The results obtained demonstrate the potential of the endemic strains for use in the preservation of foodstuffs.
Acknowledgments: This work was carried out with the financial support of the Project Global Initiatives for Proliferation Prevention (GIPP) T2-298, ISTC A-1866.

Keywords: antimicrobial activity, bacteriocins, endemic strains, food safety

Procedia PDF Downloads 546
95 Classical Improvisation Facilitating Enhanced Performer-Audience Engagement and a Mutually Developing Impulse Exchange with Concert Audiences

Authors: Pauliina Haustein

Abstract:

Improvisation was part of Western classical concert culture and performers’ skill sets until early 20th century. Historical accounts, as well as recent studies, indicate that improvisatory elements in the programme may contribute specifically towards the audiences’ experience of enhanced emotional engagement during the concert. This paper presents findings from the author’s artistic practice research, which explored re-introducing improvisation to Western classical performance practice as a musician (cellist and ensemble partner/leader). In an investigation of four concert cycles, the performer-researcher sought to gain solo and chamber music improvisation techniques (both related to and independent of repertoire), conduct ensemble improvisation rehearsals, design concerts with an improvisatory approach, and reflect on interactions with audiences after each concert. Data was collected through use of reflective diary, video recordings, measurement of sound parameters, questionnaires, a focus group, and interviews. The performer’s empirical experiences and findings from audience research components were juxtaposed and interrogated to better understand the (1) rehearsal and planning processes that enable improvisatory elements to return to Western classical concert experience and (2) the emotional experience and type of engagement that occur throughout the concert experience for both performer and audience members. This informed the development of a concert model, in which a programme of solo and chamber music repertoire and improvisations were combined according to historically evidenced performance practice (including free formal solo and ensemble improvisations based on audience suggestions). Inspired by historical concert culture, where elements of risk-taking, spontaneity, and audience involvement (such as proposing themes for fantasies) were customary, this concert model invited musicians to contribute to the process personally and creatively at all stages, from programme planning, and throughout the live concert. The type of democratic, personal, creative, and empathetic collaboration that emerged, as a result, appears unique in Western classical contexts, rather finding resonance in jazz ensemble, drama, or interdisciplinary settings. The research identified features of ensemble improvisation, such as empathy, emergence, mutual engagement, and collaborative creativity, that became mirrored in audience’s responses, generating higher levels of emotional engagement, empathy, inclusivity, and a participatory, co-creative experience. It appears that during improvisatory moments in the concert programme, audience members started feeling more like active participants in a creative, collaborative exchange and became stakeholders in a deeper phenomenon of meaning-making and narrativization. Examining interactions between all involved during the concert revealed that performer-audience impulse exchange occurred on multiple levels of awareness and seemed to build upon each other, resulting in particularly strong experiences of both performer and audience’s engagement. This impact appeared especially meaningful for audience members who were seldom concertgoers and reported little familiarity with classical music.
The study found that re-introducing improvisatory elements to Western classical concert programmes has strong potential in increasing audience’s emotional engagement with the musical performance, enabling audience members to connect more personally with the individual performers, and in reaching new-to-classical-music audiences.

Keywords: artistic research, audience engagement, audience experience, classical improvisation, ensemble improvisation, emotional engagement, improvisation, improvisatory approach, musical performance, practice research

Procedia PDF Downloads 119
94 'Sextually' Active: Teens, 'Sexting' and Gendered Double Standards in the Digital Age

Authors: Annalise Weckesser, Alex Wade, Clara Joergensen, Jerome Turner

Abstract:

Introduction: Digital mobile technologies afford Generation M a number of opportunities in terms of communication, creativity and connectivity in their social interactions. Yet these young people’s use of such technologies is often the source of moral panic, with accordant social anxiety especially prevalent in media representations of teen ‘sexting,’ or the sending of sexually explicit images via smartphones. Thus far, most responses to youth sexting have been ineffective or unjust, with adult authorities sometimes blaming victims of non-consensual sexting, using child pornography laws to paradoxically criminalise those they are designed to protect, and/or advising teenagers to simply abstain from the practice. Prevention strategies are further skewed, with sex education initiatives often targeted at girls, implying that they shoulder the responsibility of minimising the risks associated with sexting (e.g. revenge porn and sexual predation). Purpose of Study: Despite increasing public interest and concern about ‘teen sexting,’ there remains a dearth of research with young people regarding their experiences of navigating sex and relationships in the current digital media landscape. Furthermore, young people's views on sexting are rarely solicited in the policy and educational strategies aimed at them. To address this research-policy-education gap, an interdisciplinary team of four researchers (from anthropology, media, sociology and education) has undertaken a peer-to-peer research project to co-create a sexual health intervention. Methods: In the winter of 2015-2016, the research team conducted serial group interviews with four cohorts of students (aged 13 to 15) from a secondary school in the West Midlands, UK. To facilitate open dialogue, girls and boys were interviewed separately, and each group consisted of no more than four pupils. The team employed a range of participatory techniques to elicit young people’s views on sexting, its consequences, and its interventions. A final focus group session was conducted with all 14 male and female participants to explore developing a peer-to-peer ‘safe sexting’ education intervention. Findings: This presentation will highlight the ongoing, ‘old school’ sexual double standards at work within this new digital frontier. In the sharing of ‘nudes’ (teens’ preferred term to ‘sexting’) via social media apps (e.g. Snapchat and WhatsApp), girls felt sharing images was inherently risky and feared being blamed and ‘slut-shamed.’ In contrast, boys were seen to gain in social status if they accumulated nudes of female peers. Further, if boys had nudes of themselves shared without consent, they felt they were expected to simply ‘tough it out.’ The presentation will also explore what forms of support teens desire to help them in their day-to-day navigation of these digitally mediated, heteronormative performances of teen femininity and masculinity expected of them. Conclusion: This is the first research project within the UK conducted with, rather than about, teens and the phenomenon of sexting. It marks a timely and important contribution to the nascent but growing body of knowledge on gender, sexual politics and the digital mobility of sexual images created by and circulated amongst young people.

Keywords: teens, sexting, gender, sexual politics

Procedia PDF Downloads 217
93 Screening and Improved Production of an Extracellular β-Fructofuranosidase from Bacillus Sp

Authors: Lynette Lincoln, Sunil S. More

Abstract:

With the rising demand for sugar today, world sugar production is expected to reach 203 million tonnes by 2021. Hydrolysis of sucrose (table sugar) into an equimolar mixture of glucose and fructose is catalyzed by β-D-fructofuranoside fructohydrolase (EC 3.2.1.26), commonly called invertase. Invertase is widely and extensively used for fluid-filled centres in chocolates, for the preparation of artificial honey, as a sweetener, and especially to ensure that foodstuffs remain fresh, moist and soft for longer. From an industrial perspective, properties such as increased solubility, increased osmotic pressure and the prevention of sugar crystallization in food products are highly desired. Screening for invertase does not involve a plate assay or qualitative test to determine enzyme production. In this study, we use a three-step screening strategy to identify a novel bacterial isolate from soil that is positive for invertase production. The first step was serial dilution of soil collected from sugarcane fields (black soil, Maddur region of Mandya district, Karnataka, India), which was grown on Czapek-Dox medium (pH 5.0) containing sucrose as the sole C-source. Only colonies capable of utilizing and breaking down sucrose exhibited growth; the bacterial isolates released invertase in order to take up sucrose, splitting the disaccharide into simple sugars. Secondly, invertase activity was determined from the cell-free extract by measuring the glucose released into the medium at 540 nm. Morphological observation of the most potent bacterium was combined with several identification tests using Bergey's Manual, which established the genus of the isolate as Bacillus. Furthermore, this potent bacterial colony was subjected to 16S rDNA PCR amplification, and a single discrete PCR amplicon band of 1500 bp was observed. The 16S rDNA sequence was searched with the BLAST alignment tool against the NCBI GenBank database to obtain the maximum identity score. Molecular sequencing and identification were performed by Xcelris Labs Ltd. (Ahmedabad, India). The colony was identified as Bacillus sp. BAB-3434, the first novel strain reported for extracellular invertase production. Molasses, a by-product of the sugarcane industry, is a dark viscous liquid obtained upon crystallization of sugar. Enhanced invertase production and optimization studies were carried out using a one-factor-at-a-time approach. Crucial parameters such as time course (24 h), pH (6.0), temperature (45 °C), inoculum size (2% v/v), N-source (yeast extract, 0.2% w/v) and C-source (molasses, 4% v/v) were found to be optimal, demonstrating an increased yield. The findings of this study reveal a simple screening method for an extracellular invertase from a rapidly growing Bacillus sp. and the selection of the factors that best elevate enzyme activity; in particular, the utilization of molasses, which served as an ideal substrate and C-source, results in cost-effective production under submerged conditions. The invert mixture could be a replacement for table sugar, which is an economic advantage and reduces the tedious work of sugar growers. Ongoing studies involve purification of the extracellular invertase and determination of its transfructosylating activity, since at high sucrose concentrations invertase produces fructooligosaccharides (FOS), which possess prebiotic properties.

Keywords: Bacillus sp., invertase, molasses, screening, submerged fermentation

Procedia PDF Downloads 222
92 The Politics of Identity and Retributive Genocidal Massacre against Chena Amhara under International Humanitarian Law

Authors: Gashaw Sisay Zenebe

Abstract:

The Northern Ethiopian conflict that broke out on 4 November 2020 between the central government and the TPLF caused destruction beyond imagination in all aspects; millions of people have been killed, including civilians, mainly women and children. Civilians have been indiscriminately attacked simply because of their ethnic or religious identity. The warring parties committed serious crimes of international concern in violation of International Humanitarian Law (IHL). The House of Peoples' Representatives (HPR) declared that the terrorist Tigrean Defense Force (TDF), encompassing all segments of its people, waged war against North Gondar through human flooding. On 30 August 2021, after midnight, the TDF launched a surprise attack against the people of Chena, who were deeply asleep after drinking during the annual festivity. Unlike in the lowlands, however, the ENDF joined with the local people to fight the TDF in these highland areas. This research examines identity politics and the consequent genocidal massacre of Chena, including the human and physical destruction that occurred as a result of the armed conflict. As such, the study could benefit international entities by helping them develop a better understanding of what happened in Chena and by triggering interest in ensuring the accountability and enforcement of IHL in the future. Preserving fresh evidence will also serve as a starting point on the road to achieving justice, either nationally or internationally. To evaluate the Chena case against IHL rules, a combination of qualitative and doctrinal research methodologies has been employed. The study follows a unique sampling case study that used primary data tools such as observation, interviews, key-informant interviews, FGD, and battlefield notes. To supplement these, secondary sources, including books, journal articles, domestic laws, international conventions, reports, and media broadcasts, were used to give meaning to what happened on the ground in light of international law. The study proved that the war was being waged to separate Tigray from Ethiopia. While undertaking military operations to achieve this goal, mass killings, genocidal acts, and war crimes were committed in Chena and nearby sites in the Dabat district of North Gondar. Thus, hundreds of people lost their lives to the brutality of mass killings, hundreds of people were subjected to forcible disappearance, and tens of thousands of people were forced into displacement. Furthermore, harsh beatings, forced labor, slavery, torture, rape, and gang rape have been reported, and people were generally subjected to cruel, inhuman, and degrading treatment and punishment. What is also unique is that animals were indiscriminately and completely killed, making the environment unsafe for human survival because of pollution, bad smells and the consequent diseases such as cholera, flu, and diarrhea. In addition to the TDF, the ENDF's shelling caused destruction to farmers' houses and claimed lives. According to humanitarian principles, acts that can establish MACs and war crimes were perpetrated. Overall, the war in this area has shown an absolute disregard for the norms of international law.

Keywords: genocide, war crimes, Tigray Defense Force, Chena, IHL

Procedia PDF Downloads 49
91 A Comprehensive Approach to Create ‘Livable Streets’ in the Mixed Land Use of Urban Neighborhoods: A Case Study of Bangalore Street

Authors: K. C. Tanuja, Mamatha P. Raj

Abstract:

"People have always lived on streets. They have been the places where children first learned about the world, where neighbours met, the social centres of towns and cities, the rallying points for revolts, the scenes of repression. The street has always been the scene of this conflict, between living and access, between resident and traveller, between street life and the threat of death.” Livable Streets by Donald Appleyard. Urbanisation is happening rapidly all over the world. As population increasing in the urban settlements, its required to provide quality of life to all the inhabitants who live in. Urban design is a place making strategic planning. Urban design principles promote visualising any place environmentally, socially and economically viable. Urban design strategies include building mass, transit development, economic viability and sustenance and social aspects. Cities are wonderful inventions of diversity- People, things, activities, ideas and ideologies. Cities should be smarter and adjustable to present technology and intelligent system. Streets represent the community in terms of social and physical aspects. Streets are an urban form that responds to many issues and are central to urban life. Streets are for livability, safety, mobility, place of interest, economic opportunity, balancing the ecology and for mass transit. Urban streets are places where people walk, shop, meet and engage in different types of social and recreational activities which make urban community enjoyable. Streets knit the urban fabric of activities. Urban streets become livable with the introduction of social network enhancing the pedestrian character by providing good design features which in turn should achieve the minimal impact of motor vehicle use on pedestrians. Livable streets are the spatial definition to the public right of way on urban streets. Streets in India have traditionally been the public spaces where social life happened or created from ages. Streets constitute the urban public realm where people congregate, celebrate and interact. Streets are public places that can promote social interaction, active living and community identity. Streets as potential contributors to a better living environment, knitting together the urban fabric of people and places that make up a community. Livable streets or complete streets are making our streets as social places, roadways and sidewalks accessible, safe, efficient and useable for all people. The purpose of this paper is to understand the concept of livable street and parameters of livability on urban streets. Streets to be designed as the pedestrians are the main users and create spaces and furniture for social interaction which serves for the needs of the people of all ages and abilities. The problems of streets like congestion due to width of the street, traffic movement and adjacent land use and type of movement need to be redesigned and improve conditions defining the clear movement path for vehicles and pedestrians. Well-designed spatial qualities of street enhances the street environment, livability and then achieves quality of life to the pedestrians. A methodology been derived to arrive at the typologies in street design after analysis of existing situation and comparing with livable standards. It was Donald Appleyard‟s Livable Streets laid out the social effects on streets creating the social network to achieve Livable Streets.

Keywords: livable streets, social interaction, pedestrian use, urban design

Procedia PDF Downloads 136
90 Magnesium Nanoparticles for Photothermal Therapy

Authors: E. Locatelli, I. Monaco, R. C. Martin, Y. Li, R. Pini, M. Chiariello, M. Comes Franchini

Abstract:

Despite the many advantages of the application of nanomaterials in the field of nanomedicine, increasing concerns have been expressed about their potential adverse effects on human health. There is an urgent need for green strategies toward novel materials with enhanced biocompatibility, using safe reagents. Photothermal ablation therapy, which exploits a localized heat increase of a few degrees to kill cancer cells, has recently appeared as a non-invasive and highly efficient therapy against various cancer types; however, new agents able to generate hyperthermia when irradiated are needed, and they must have well-defined biocompatibility in order to avoid damage to healthy tissues and prevent toxicity. Recently, there has been increasing interest in magnesium as a biomaterial: it is the fourth most abundant cation in the human body, and it is essential for human metabolism. However, magnesium nanoparticles (Mg NPs) have had limited diffusion due to the high reduction potential of magnesium cations, which makes NP synthesis challenging. Herein, we report the synthesis of Mg NPs and their surface functionalization to obtain a stable and biocompatible nanomaterial suitable for photothermal ablation therapy against cancer. We synthesized the Mg crystals by reducing MgCl2 with metallic lithium and exploiting naphthalene as an electron carrier: the lithium–naphthalene complex acts as the real reducing agent. First, the nanocrystal particles were coated with the ligand 12-ethoxy ester dodecanehydroxamic acid and then entrapped into water-dispersible polymeric micelles (PMs) made of the FDA-approved PLGA-b-PEG-COOH copolymer using the oil-in-water emulsion technique. Subsequently, we developed a more straightforward methodology by introducing chitosan, a highly biocompatible natural product, at the beginning of the process, simultaneously with the lithium–naphthalene complex, thus having a one-pot procedure for the formation and surface modification of Mg NPs. The obtained Mg NPs were purified and fully characterized, showing diameters in the range of 50-300 nm. Notably, when coated with chitosan, the particles remained stable as a dry powder for more than 10 months. We proved the possibility of generating a temperature rise of a few to several degrees once the Mg NPs were illuminated using an 810 nm diode laser operating in continuous-wave mode: the temperature rise was significant (0-15 °C) and concentration-dependent. We then investigated the potential cytotoxicity of the Mg NPs: we used HN13 epithelial cells, derived from a head and neck squamous cell carcinoma, and the hepa1-6 cell line, derived from a hepatocellular carcinoma, and very low toxicity was observed for both nanosystems. Finally, in vivo photothermal therapy was performed on xenograft hepa1-6 tumor-bearing mice: the animals were treated with chitosan-coated Mg NPs and showed no sign of suffering after the injection. After 12 hours, the tumor was exposed to near-infrared laser light. The results clearly showed extensive damage to the tumor tissue after only 2 minutes of laser irradiation at 3 W cm-1, while no damage was reported when the tumor was treated with the laser and saline alone in the control group. Despite the lower photothermal efficiency of Mg with respect to Au NPs, we consider Mg NPs a promising, safe and green candidate for future clinical translation.

Keywords: chitosan, magnesium nanoparticles, nanomedicine, photothermal therapy

Procedia PDF Downloads 259
89 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favorable conditions for students' comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that comprehension becomes possible when new information becomes part of a student's subjective experience and when linkages between the attributes of notions and the various ways of presenting them can be established. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically the challenge of students' comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of a student's subjective experience (emotional-value, contextual, procedural, communicative) during the educational process; (3) establishing links between different ways of presenting mathematical information; and (4) identifying and leveraging the relationships between real, perceptual and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. The research's primary goal was to identify how the proposed methods and technology influence the understanding of material used in teaching mathematics. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, 'Derivative' and 'Trigonometric Functions', was evaluated. Control group participants were taught using traditional methods. Students in the experimental group were taught using the holistic method: under the teacher's guidance, they carried out problems designed to establish linkages between a notion's characteristics and to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The use of the technology that forms inter-subject notions based on linkages between perceptual, real, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analyzed by presenting students in each of the groups with a final test on each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate the test results. The Pearson criterion was used to reveal the statistical significance of the results (pass/fail on the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the experimental group showed better comprehension of mathematical information than those in the control group. It was also found (using Student's t-test) that students in the experimental group reliably (p = 0.0001) solved more problems than those in the control group. The results obtained allow us to conclude that improved comprehension and assimilation of the study material took place as a result of applying the implemented methods and techniques.
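For readers who want the statistical comparison spelled out, the sketch below reproduces the two reported tests in Python/SciPy, interpreting the 'Pearson criterion' as Pearson's chi-square test on pass/fail counts; the counts and per-student scores are invented placeholders, since the abstract reports only the group sizes and p-values.

```python
from scipy.stats import chi2_contingency, ttest_ind

# Pearson chi-square on pass/fail of the situational-modelling test.
# Rows: experimental (n = 142) and control (n = 114) groups; counts are hypothetical.
table = [[110, 32],
         [58, 56]]
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_chi:.4g}")

# Student's t-test on the number of problems solved per student (hypothetical scores).
experimental_scores = [8, 9, 7, 10, 9, 8, 9, 10, 7, 9]
control_scores = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]
t_stat, p_t = ttest_ind(experimental_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_t:.4g}")
```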

Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions

Procedia PDF Downloads 165
88 Preliminary Results on Marine Debris Classification in The Island of Mykonos (Greece) via Coastal and Underwater Clean up over 2016-20: A Successful Case of Recycling Plastics into Useful Daily Items

Authors: Eleni Akritopoulou, Katerina Topouzoglou

Abstract:

Over the last 20 years, marine debris has been identified as one of the main sources of marine pollution caused by anthropogenic activities. Plastic has reached the farthest marine areas of the planet, affecting all marine trophic levels, from the recently discovered amphipod Eurythenes plasticus inhabiting the Mariana Trench to large cetaceans, marine reptiles and seabirds, causing immunodeficiency disorders, deteriorating health and death over time. For the period 2016-20, in the framework of the national initiative ‘Keep Aegean Blue’, the All for Blue team has been collecting marine debris (coastline and underwater) from eight Greek islands, following a modified in situ MEDSEALITTER monitoring protocol. After collection, marine debris was weighed, sorted and categorised according to material: plastic (PL), glass (G), metal (M), wood (W), rubber (R), cloth (CL), paper (P), mixed (MX). The goals of the project included the documentation of marine debris sources, human trends, waste management, and public marine environmental awareness. Waste management focused on recycling plastics and utilising them in useful daily products. This research focuses on the island of Mykonos due to its continuous touristic activity and the lack of scientific information. In total, a fieldwork area of 1,832,856 m2 was cleaned up, yielding 5,092 kg of marine debris. The preliminary results indicated PL as the main source of marine debris (62.8%), followed by M (15.5%), G (13.2%) and MX (2.8%). The main items found were fishing tools (lines, nets), disposable cutlery, cups and straws, cigarette butts, flip-flops and other items such as plastic boat compartments. In collaboration with a local company for plastic management and the Circular Economy and Eco Innovation Institute (Sweden), all plastic debris was recycled. A granulation process was applied, transforming the plastic into building materials used for refugees' houses, litter bins bought by municipalities and schools, and other items such as shower components. In terms of volunteering and attendance at public awareness seminars, there was a rise in interest of 63% across different age ranges and professions. Although the research is fairly new for Mykonos island and logistics issues potentially affected systematic sampling, plastic debris appeared to be the main littering source, attributed possibly to the intense touristic activity of the island all year round. Marine environmental awareness activities, however, proved to be an effective tool in shaping public perception against marine debris and in altering the daily habits of local society. Since the beginning of this project, three new local environmental teams have been formed against marine pollution, supported by the local authorities and stakeholders. The continuing demand for the production of items made from recycled marine debris appeared to be socio-economically beneficial to the local community, and actions are being taken to expand the project nationally. Finally, as the project is ongoing and new scientific information is being collected, further funding and research are needed.

Keywords: Greece, marine debris, marine environmental awareness, Mykonos island, plastics debris, plastic granulation, recycled plastic, tourism, waste management

Procedia PDF Downloads 98
87 Mapping Contested Sites - Permanence Of The Temporary Mouttalos Case Study

Authors: M. Hadjisoteriou, A. Kyriacou Petrou

Abstract:

This paper will discuss ideas of social sustainability in urban design and human behavior in multicultural contested sites. It will focus on the potential of re-reading the “site” through mapping that acts as a research methodology, and will discuss the chosen site of Mouttalos, Cyprus as a place of multiple identities. Through a methodology of mapping using a bottom-up approach, a process of disassembling derives that acts as a mechanism to re-examine space and place by searching for the invisible and the non-measurable, understanding the site through its detailed inhabitation patterns. The significance of this study lies in the use of mapping as an active form of thinking rather than a passive process of representation, which allows a new site to be discovered, giving multiple opportunities for adaptive urban strategies and socially engaged design approaches. We will discuss the above themes based on the chosen contested site of Mouttalos, a small Turkish Cypriot neighbourhood in the old centre of Paphos (Ktima), in the southwest of Cyprus. During the political unrest between the Greek and Turkish Cypriot communities in 1963, the area became an enclave for the Turkish Cypriots, excluding any contact with the rest of the area. Following the Turkish invasion of 1974, the residents left their homes, plots and workplaces, resettling in the north of Cyprus. Greek Cypriot refugees moved into the area. The presence of the Greek Cypriot refugees is still considered to be a temporary resettlement. The buildings and the residents themselves exist in a state of uncertainty. The site is documented through a series of parallel investigations into the physical conditions and history of the site. The research methodology uses the process of mapping to expose the complex and often invisible layers of information that coexist. By registering the site through the subjective experiences and everyday stories of inhabitants, a series of cartographic recordings reveals the space between happening and narrative, and especially the space between different cultures and religions. The research put specific emphasis on engaging the public, promoting social interaction, and identifying spatial patterns of occupation by previous inhabitants through social media. The findings exposed three main areas of interest. Firstly, we identified inter-dependent relationships between permanence and temporality, characterised by elements such as signage across layers of time, past events and periodic street festivals, unfolding memory and belonging. Secondly, we identified issues of co-ownership and occupation, found through particular narratives of exchange between the two communities and through the appropriation of space. Finally, we identified formal and informal inhabitation of space, revealed through the presence of informal shared backyards, alternative paths, porous street edges, and formal and informal landmarks. The importance of the above findings lay in achieving a shift of focus from the built infrastructure to the soft network of multiple and complex relations of dependence and autonomy. Proposed interventions for this contested site were informed and led by a new multicultural identity in which invisible qualities were revealed through the process of mapping, taking on issues of layers of time, formal and informal inhabitation, and the “permanence of the temporary”.

Keywords: contested sites, mapping, social sustainability, temporary urban strategies

Procedia PDF Downloads 408
86 Unity in Diversity: Exploring the Psychological Processes and Mechanisms of the Sense of Community for the Chinese Nation in Ethnic Inter-embedded Communities

Authors: Jiamin Chen, Liping Yang

Abstract:

In 2007, sociologist Putnam proposed a pessimistic forecast in the United States' "Social Capital Community Benchmark Survey," suggesting that "ethnic diversity would challenge social unity and undermine social cohesion." If this pessimistic assumption were proven true, it would indicate a risk of division in diverse societies. China, with 56 ethnic groups, is a multi-ethnic country. On May 26, 2014, General Secretary Xi Jinping proposed "building ethnically inter-embedded communities to promote deeper development in interactions, exchanges, and integration among ethnic groups." Researchers unanimously agree that ethnic inter-embedded communities can serve as practical arenas and pathways for solidifying the sense of the Chinese national community. However, there is no research providing evidence that ethnic inter-embedded communities can foster the sense of the Chinese national community, and the influencing factors remain unclear. This study adopts a constructivist grounded theory research approach. Convenience sampling and snowball sampling were used in the study. Data were collected in three communities in Kunming City. Twelve individuals were eventually interviewed, and the transcribed interviews totaled 187,000 words. The research obtained ethical approval from the Ethics Committee of Nanjing Normal University (NNU202310030). The research analyzed the data and constructed theories, employing strategies such as coding, constant comparison, and theoretical sampling. The study found that: firstly, ethnic inter-embedded communities exhibit characteristics of diversity, including ethnic diversity, cultural diversity, and linguistic diversity. Diversity has positive functions, including increased opportunities for contact, promoting self-expansion, and increasing happiness; negative functions of diversity include highlighting ethnic differences, causing ethnic conflicts, and reinforcing awareness of ethnic boundaries. Secondly, individuals typically engage in interactions within the community using active embedding and passive embedding strategies. Active embedding strategies include maintaining openness, focusing on similarities, and holding pro-diversity beliefs, which can increase external group identification and intergroup relational identity and promote ethnic integration. Individuals using passive embedding strategies tend to focus on ethnic stereotypes, perceive stigmatization of their own ethnic group, and adopt an authoritarian-oriented approach to interactions, leading to a perception of more identity threats and ultimately rejecting ethnic integration. Thirdly, the commonality of the Chinese nation is reflected in the 56 ethnic groups as an "identity community" and an "interest community," and both the active and passive embedding paths affect individual understanding of the commonality of the Chinese nation. Finally, community work and environment can influence the embedding process. The research constructed a social-psychological process and mechanism model for solidifying the sense of the Chinese national community in ethnic inter-embedded communities. Based on this theoretical model, future research can conduct more micro-level psychological mechanism tests and intervention studies to enhance Chinese national cohesion.

Keywords: diversity, sense of the Chinese national community, ethnic inter-embedded communities, ethnic group

Procedia PDF Downloads 24
85 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects

Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm

Abstract:

Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. When unpowered, the robot nominally lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. Through this, the spatial distribution of friction forces between ground and robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot.
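For the gravity-free case noted above, a minimal numerical sketch (not the authors' implementation) can integrate a segmented sheet centreline, assuming each actuated segment takes on a curvature proportional to its applied voltage; the function name, segment length, curvature-per-volt coefficient, and voltages below are illustrative assumptions.

```python
import numpy as np

def sheet_profile(voltages, seg_len=0.01, kappa_per_volt=0.5):
    """Integrate a 2D centreline for a segmented piezoelectric sheet.

    Each segment is assumed to take on a curvature proportional to its
    actuator voltage (kappa_i = kappa_per_volt * V_i); gravity and ground
    contact are ignored in this simplified sketch.
    """
    theta = 0.0            # local tangent angle (rad)
    x, z = [0.0], [0.0]    # centreline coordinates (m)
    for v in voltages:
        kappa = kappa_per_volt * v        # segment curvature (1/m)
        theta += kappa * seg_len          # tangent angle accumulates curvature
        x.append(x[-1] + seg_len * np.cos(theta))
        z.append(z[-1] + seg_len * np.sin(theta))
    return np.array(x), np.array(z)

# Example: 5 actuators with alternating voltages produce an undulating shape
x, z = sheet_profile(voltages=[40, -40, 40, -40, 40])
lifted = z > 1e-6                         # crude check of which nodes leave the ground
print(z.round(4), lifted)
```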

Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology

Procedia PDF Downloads 159
84 Green Architecture from the Thawing Arctic: Reconstructing Traditions for Future Resilience

Authors: Nancy Mackin

Abstract:

Historically, architects from Aalto to Gaudi to Wright have looked to the architectural knowledge of long-resident peoples for forms and structural principles specifically adapted to the regional climate, geology, materials availability, and culture. In this research, structures traditionally built by Inuit peoples in a remote region of the Canadian high Arctic provide a folio of architectural ideas that are increasingly relevant during these times of escalating carbon emissions and climate change. ‘Green architecture from the Thawing Arctic’ researches, draws, models, and reconstructs traditional buildings of Inuit (Eskimo) peoples in three remote, often inaccessible Arctic communities. Structures verified in pre-contact oral history and early written history are first recorded in architectural drawings, then modeled and, with the participation of Inuit young people, local scientists, and Elders, reconstructed as emergency shelters. Three full-sized building types are constructed: a driftwood and turf-clad A-frame (spring/summer); a stone/bone/turf house with inwardly spiraling walls and a fan-shaped floor plan (autumn); and a parabolic/catenary arch-shaped dome of willow, turf, and skins (autumn/winter). Each reconstruction is filmed and featured in a short video. Communities found that the reconstructed buildings and the method of involving young people and Elders in the reconstructions have on-going usefulness, as follows: 1) The reconstructions provide emergency shelters, particularly needed as climate change worsens storms, floods, and freeze-thaw cycles, and scientists and food harvesters who must work out on the land become stranded more frequently; 2) People from the communities re-learned from their Elders how to use materials from close at hand to construct impromptu shelters; 3) Forms from tradition, such as windbreaks at entrances and using levels to trap warmth within winter buildings, can be adapted and used in modern community buildings and housing; and 4) The project initiates much-needed educational and employment opportunities in the applied sciences (engineering and architecture), construction, and climate change monitoring, all offered in a culturally responsive way. Elders, architects, scientists, and young people added innovations to the traditions as they worked, thereby suggesting new sustainable, culturally meaningful building forms and material combinations that can be used for modern buildings. Adding to the growing interest in biomimicry, participants looked at properties of Arctic and subarctic materials such as moss (insulation), shrub bark (waterproofing), and willow withes (parabolic and catenary arched forms). ‘Green Architecture from the Thawing Arctic’ demonstrates the effective, useful architectural oeuvre of a resilient northern people. The research parallels efforts elsewhere in the world to revitalize long-resident peoples’ architectural knowledge, in the interests of designing sustainable buildings that reflect culture, heritage, and identity.

Keywords: architectural culture and identity, climate change, forms from nature, Inuit architecture, locally sourced biodegradable materials, traditional architectural knowledge, traditional Inuit knowledge

Procedia PDF Downloads 508
83 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion

Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty

Abstract:

Reinforcing dental composites with micro-sized fibres can significantly improve the physio-mechanical properties of dental composites. Short fibres can be oriented randomly within dental composites, thus providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre-reinforced composites, which enhance anisotropic properties. Thus, short-fibre-reinforced dental composites are becoming popular among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photo-polymerisation. The shrinkage in the structure may lead to marginal gap formation, causing secondary caries, thus ultimately inducing failure of the restoration. The traditional methods to evaluate polymerisation shrinkage using strain gauges, density-based measurements, dilatometers, or the bonded-disk technique focus on the average value of volumetric shrinkage. Moreover, the results obtained from traditional methods are sensitive to the specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can be easily embedded into small samples of dental composites. Furthermore, an FBG array embedded in the system can map the real-time shrinkage strain at different regions of the composite. The evaluation of real-time shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have been shown to reliably measure polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites. However, very few studies exist to establish the validity of FBG-based sensors for evaluating volumetric shrinkage of composites reinforced with randomly oriented fibres. The present study aims to fill this research gap and is focussed on establishing the use of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% of silane-treated BaAlSiO₂ particulate fillers, or by adding 5% of micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% of silane-treated BaAlSiO₂ particulate fillers into the resin. For measurement of polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained are compared with the traditional method for evaluating volumetric shrinkage using density-based measurements. The degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres will correlate directly with the measured degree of conversion values, implying that greater conversion of C=C double bonds to C-C single bonds also leads to higher shrinkage strain within the composite. Moreover, it could be established that the photonics approach can help assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means of measuring real-time polymerisation shrinkage strain for randomly fibre-reinforced dental composites as well.
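As background to the FBG measurements described above, the strain at a grating is conventionally recovered from the shift of its Bragg wavelength, ε = Δλ_B / (λ_B (1 − p_e)), with p_e ≈ 0.22 for silica fibre. The short sketch below illustrates this standard relation only; the wavelength values and the neglect of temperature cross-sensitivity are assumptions, not the study's calibration.

```python
def fbg_strain(lambda_b0_nm, lambda_b_nm, p_e=0.22):
    """Convert a Bragg-wavelength shift to axial strain.

    Uses the standard relation delta_lambda / lambda_0 = (1 - p_e) * strain,
    where p_e is the effective photo-elastic coefficient (~0.22 for silica).
    Temperature cross-sensitivity is ignored in this sketch.
    """
    return (lambda_b_nm - lambda_b0_nm) / (lambda_b0_nm * (1.0 - p_e))

# Illustrative values only: a 1550 nm grating shifting by -0.12 nm during cure
strain = fbg_strain(1550.000, 1549.880)
print(f"shrinkage strain = {strain * 1e6:.0f} microstrain")   # negative = compressive
```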

Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors

Procedia PDF Downloads 140
82 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study

Authors: Kaaryn Cater

Abstract:

People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Highly sensitive people have a more reactive nervous system and are more impacted by positive and negative environmental conditions (Differential Susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (Vantage Sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is Sensory Processing Sensitivity. The hallmarks of Sensory Processing Sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to observe in new and novel situations, and a propensity to become overwhelmed when over-stimulated. Several educational advantages associated with high sensitivity include creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to some educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed methods study was undertaken. In the first, quantitative, study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). Inclusion criteria were current or previous postsecondary education experience. The survey was presented on social media, and snowball recruitment was employed (n=365). The Excel spreadsheets were uploaded to the Statistical Package for the Social Sciences (SPSS) version 26, and descriptive statistics found normal distribution. T-tests and analysis of variance (ANOVA) calculations found no difference in the responses of demographic groups, and Principal Components Analysis and post-hoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. This study included a response field to register interest in further research. Respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse the data, and a descriptive approach was employed to present data reflective of participant experience. The results of this study found that highly sensitive students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education-sector awareness of environmental sensitivity, and the need to include sensitivity in sector and institutional diversity and inclusion initiatives.
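The quantitative pipeline described above (one-way ANOVA with post-hoc Tukey comparisons, run in SPSS) could be reproduced in Python roughly as follows; the data frame, column names, scores, and group labels are hypothetical stand-ins, not the study's data.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: one PSISS factor score per respondent plus a sensitivity
# group derived from HSPS-12 percentiles (low/medium/high).
df = pd.DataFrame({
    "psiss_factor": [3.1, 4.2, 3.8, 4.6, 2.9, 4.4, 3.5, 4.8, 3.0, 4.1, 3.7, 4.5],
    "sensitivity":  ["low", "high", "medium", "high", "low", "high",
                     "medium", "high", "low", "medium", "medium", "high"],
})

groups = [g["psiss_factor"].values for _, g in df.groupby("sensitivity")]
f_stat, p_val = stats.f_oneway(*groups)          # one-way ANOVA across groups
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Post-hoc Tukey HSD identifies which group pairs differ
print(pairwise_tukeyhsd(df["psiss_factor"], df["sensitivity"]))
```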

Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity

Procedia PDF Downloads 52
81 Elevated Systemic Oxidative-Nitrosative Stress and Cerebrovascular Function in Professional Rugby Union Players: The Link to Impaired Cognition

Authors: Tom S. Owens, Tom A. Calverley, Benjamin S. Stacey, Christopher J. Marley, George Rose, Lewis Fall, Gareth L. Jones, Priscilla Williams, John P. R. Williams, Martin Steggall, Damian M. Bailey

Abstract:

Introduction and aims: Sports-related concussion (SRC) represents a significant and growing public health concern in rugby union, yet remains one of the least understood injuries facing the health community today. Alongside increasing SRC incidence rates, there is concern that prior recurrent concussion may contribute to long-term neurologic sequelae in later life. This may be due to an accelerated decline in cerebral perfusion, a major risk factor for neurocognitive decline and neurodegeneration, though the underlying mechanisms remain to be established. The present study hypothesised that recurrent concussion in current professional rugby union players would result in elevated systemic oxidative-nitrosative stress, reflected by a free radical-mediated reduction in nitric oxide (NO) bioavailability, and impaired cerebrovascular and cognitive function. Methodology: A longitudinal study design was adopted across the 2017-2018 rugby union season. Ethical approval was obtained from the University of South Wales Ethics Committee. Data collection is ongoing, and therefore the current report documents results from the pre-season and the first half of the in-season data collection. Participants were initially divided into two subgroups: 23 professional rugby union players (aged 26 ± 5 years) and 22 non-concussed controls (27 ± 8 years). Pre-season measurements were performed for cerebrovascular function (Doppler ultrasound of middle cerebral artery velocity (MCAv) in response to hypocapnia/normocapnia/hypercapnia), cephalic venous concentrations of the ascorbate radical (A•-, electron paramagnetic resonance spectroscopy), NO (ozone-based chemiluminescence) and cognition (neuropsychometric tests). Notational analysis was performed to assess contact in the rugby group throughout each competitive game. Results: 1001 tackles and 62 injuries, including three concussions, were observed across the first half of the season. However, no associations were apparent between the number of tackles and any injury type (P > 0.05). The rugby group expressed greater oxidative stress, as indicated by increased A•- (P < 0.05 vs. control), and a subsequent decrease in NO bioavailability (P < 0.05 vs. control). The rugby group performed worse in the Rey Auditory Verbal Learning Test B (RAVLT-B; learning and memory) and the Grooved Pegboard test using both the dominant and non-dominant hands (visuomotor coordination, P < 0.05 vs. control). There were no between-group differences in cerebral perfusion at baseline (MCAv: 54 ± 13 vs. 59 ± 12, P > 0.05). Likewise, no between-group differences in CVRCO2Hypo (2.58 ± 1.01 vs. 2.58 ± 0.75, P > 0.05) or CVRCO2Hyper (2.69 ± 1.07 vs. 3.35 ± 1.28, P > 0.05) were observed. Conclusion: The present study identified that rugby union players are characterized by impaired cognitive function subsequent to elevated systemic oxidative-nitrosative stress. However, this appears to be independent of any functional impairment in cerebrovascular function. Given the potential long-term trajectory towards accelerated cognitive decline in populations exposed to SRC, prophylaxis to increase NO bioavailability warrants consideration.
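Cerebrovascular CO₂ reactivity (CVRCO2) is commonly expressed as the change in MCAv per unit change in end-tidal CO₂; the sketch below shows one such formulation for orientation only, with assumed variable names, units, and example values rather than the study's exact computation.

```python
def co2_reactivity(mcav_baseline, mcav_challenge, petco2_baseline, petco2_challenge):
    """Cerebrovascular reactivity: % change in MCAv per mmHg change in PetCO2.

    This is one common definition; the study's exact formulation (absolute vs.
    percentage change, hypo- vs. hypercapnic ranges) is assumed here.
    """
    d_mcav_pct = 100.0 * (mcav_challenge - mcav_baseline) / mcav_baseline
    d_petco2 = petco2_challenge - petco2_baseline
    return d_mcav_pct / d_petco2

# Illustrative hypercapnic challenge: MCAv rises from 55 to 68 cm/s as PetCO2
# rises from 40 to 46 mmHg (assumed values)
print(f"CVRCO2Hyper = {co2_reactivity(55, 68, 40, 46):.2f} %/mmHg")
```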

Keywords: cognition, concussion, mild traumatic brain injury, rugby

Procedia PDF Downloads 162
80 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping the magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends highly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640×640×640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
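A compact sketch of the kind of 3D convolutional U-Net described above, trained to regress a voxel-wise iron-concentration map from synthetic MRI inputs, is shown below; the channel counts, network depth, loss function, and random stand-in data are assumptions, not the architecture or training setup used in the study.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    """Minimal 3D U-Net: two resolution levels, one skip connection, and a
    single-channel regression output (voxel-wise iron concentration)."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)   # concatenated skip + upsampled
        self.head = nn.Conv3d(base, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.pool(e1))            # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                     # iron-concentration map

# Train on synthetic (MRI measurement, iron map) pairs with an L2 loss
model, loss_fn = UNet3D(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mri = torch.randn(2, 1, 32, 32, 32)              # stand-in synthetic batch
iron = torch.rand(2, 1, 32, 32, 32)
opt.zero_grad()
loss = loss_fn(model(mri), iron)
loss.backward()
opt.step()
print(float(loss))
```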

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 118
79 La₀.₈₀Ag₀.₁₅MnO₃ Magnetic Nanoparticles for Self-Controlled Magnetic Fluid Hyperthermia

Authors: Marian Mihalik, Kornel Csach, Martin Kovalik, Matúš Mihalik, Martina Kubovčíková, Maria Zentková, Martin Vavra, Vladimír Girman, Jaroslav Briančin, Marija Perovic, Marija Boškovic, Magdalena Fitta, Robert Pelka

Abstract:

Current nanomaterials for use in biomedicine are based mainly on iron oxides and on present knowledge of magnetic nanostructures. Manganites represent another class of materials that can optionally be used. Manganites and their unique electronic properties have been extensively studied in recent decades, not only out of fundamental interest but also for possible applications of colossal magnetoresistance, the magnetocaloric effect, and ferroelectric properties. It was found that the oxygen-reduction reaction on perovskite oxides is intimately connected with the eg orbital occupation of the metal ion. The effect of oxygen deviation from the stoichiometric composition on the crystal structure has been studied very carefully by many authors for LaMnO₃. Depending on oxygen content, the crystal structure changes from orthorhombic to rhombohedral at an oxygen content of 3.1. In the case of hole-doped manganites, the change from the orthorhombic crystal structure, which is typical for La₁₋ₓCaₓMnO₃-based manganites, to the rhombohedral crystal structure (La₁₋ₓMₓMnO₃-based materials, where M = K, Ag, or Sr) results in an enormous increase of the Curie temperature. In our paper, we study the effect of oxygen content on the crystal structure, thermal, and magnetic properties (including the magnetocaloric effect) of the La₁₋ₓAgₓMnO₃ nanoparticle system. The oxygen content of the samples was tuned by heat treatment in different thermal regimes and in various environments (air, oxygen, argon). Water-based nanosuspensions of La₀.₈₀Ag₀.₁₅MnO₃ magnetic particles with a Curie temperature of about 43 °C were prepared by two different approaches. The first used a laboratory circulation mill for milling the powder in the presence of sodium dodecyl sulphate (SDS), followed by centrifugation. The second nanosuspension was prepared using an agate bowl, etching in citric acid and HNO₃, an ultrasound homogeniser, centrifugation, and dextran 40 kDa or 15 kDa as surfactant. The electrostatic stabilisation obtained by the first approach did not offer long-term kinetic and aggregation colloidal stability and was unable to compensate for attractive forces between particles under a magnetic field. By the second approach, we prepared a suspension oversaturated with dextran 40 kDa for steric stabilisation, with evidence of superparamagnetic behaviour. The disadvantages of this approach were the low concentration of nanoparticles and the non-ideal coverage of the nanoparticles, which impacted the stability of the ferrofluid. Strong steric stabilisation was observable under alkaline conditions at pH ≈ 10. Application of dextran 15 kDa led to a relatively stable ferrofluid with pH around physiological conditions, but disaggregation of the powder by HNO₃ was not effective enough, the average size of fragments was too large at about 150 nm, and we did not see any signature of superparamagnetic behaviour. The prepared ferrofluids were characterised by scanning and transmission electron microscopy, thermogravimetry, magnetization, and AC susceptibility measurements. Specific Absorption Rate measurements were undertaken on powders as well as on ferrofluids in order to estimate the potential application of La₀.₈₀Ag₀.₁₅MnO₃ magnetic-particle-based ferrofluids for hyperthermia. Our comprehensive study includes an investigation of the biocompatibility and potential biohazard of this material.
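For context on the Specific Absorption Rate (SAR) measurements mentioned above, SAR is commonly estimated from the initial slope of the heating curve of the ferrofluid in an alternating magnetic field, SAR = c_p (m_fluid / m_MNP) dT/dt. The sketch below illustrates this standard estimate only; the specific heat, masses, and heating data are assumed values, not measurements on the La₀.₈₀Ag₀.₁₅MnO₃ ferrofluid.

```python
import numpy as np

def sar_initial_slope(t_s, temp_c, c_p=4.186, m_fluid_g=1.0, m_mnp_g=0.01):
    """Estimate the Specific Absorption Rate (W per gram of magnetic material)
    from the initial heating slope: SAR = c_p * (m_fluid / m_MNP) * dT/dt.

    c_p is taken as that of water in J/(g K); the masses and heating data
    below are illustrative assumptions only.
    """
    slope = np.polyfit(t_s, temp_c, 1)[0]          # dT/dt in K/s from a linear fit
    return c_p * (m_fluid_g / m_mnp_g) * slope     # W per gram of nanoparticles

t = np.arange(0, 60, 10)                           # first 60 s of heating
temp = 25.0 + 0.02 * t                             # assumed 0.02 K/s initial rise
print(f"SAR = {sar_initial_slope(t, temp):.1f} W/g")
```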

Keywords: manganites, magnetic nanoparticles, oxygen content, magnetic phase transition, magnetocaloric effect, ferrofluid, hyperthermia

Procedia PDF Downloads 77
78 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe

Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira

Abstract:

Supporting cross-border, secure, and reliable exchange of data and documents and promoting data interoperability are critical for Europe to enhance key sectors (like eFinance, eJustice, and eHealth). This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 Access Points and SMPs in Europe to provide secure document exchanges using the eDelivery DSI (Digital Service Infrastructure) amongst both private and public entities. Moreover, the project demonstrates the feasibility and interest of the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we have followed a methodology that first sets a common background for requirements in the partner countries and the European regulations. Then, the partners have implemented access points in each country, including their service metadata publisher (SMP), to give their clients access to the pan-European network. Finally, we have set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we have chosen AS4 instead of other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509, and SAML authentication. The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 Profile. It also includes the development/integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles; thus, further efforts will be needed to promote this technology among companies. The consortium includes the following 9 partners. Of them, 2 are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other 7 (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in the secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operability, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged through Europe. It will be the base to extend the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number: INEA/CEF/ICT/A2016/1278042. Action No: 2016-EU-IA-0063.
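As an illustration of the X.509-based authentication mentioned above, the sketch below verifies an RSA/SHA-256 signature on a document payload against a sender certificate using the Python cryptography library; it is a generic example, not the MADE access-point code, and a real AS4 exchange signs the ebMS envelope via WS-Security rather than raw bytes.

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_document_signature(cert_pem: bytes, document: bytes, signature: bytes) -> bool:
    """Check a detached RSA/SHA-256 signature against the sender's X.509
    certificate (assumes an RSA key; EC keys would need a different call)."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        cert.public_key().verify(signature, document,
                                 padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```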

Keywords: security, e-delivery, e-invoicing, e-document exchange, trust

Procedia PDF Downloads 253
77 A Comparative Evaluation of Cognitive Load Management: Case Study of Postgraduate Business Students

Authors: Kavita Goel, Donald Winchester

Abstract:

In a world of information overload and work complexities, academics often struggle to create an online instructional environment enabling efficient and effective student learning. Research has established that students’ learning styles differ; some learn faster when taught using audio and visual methods. Attributes like prior knowledge and mental effort affect their learning. Cognitive load theory posits that learners have limited processing capacity. Cognitive load depends on the learner’s prior knowledge, the complexity of content and tasks, and the instructional environment. Hence, the proper allocation of cognitive resources is critical for students’ learning. Consequently, a lecturer needs to understand the limits and strengths of the human learning processes and the various learning styles of students, and accommodate these requirements while designing online assessments. As acknowledged in the cognitive load theory literature, visual and auditory explanations of worked examples potentially lead to a reduction of cognitive load (effort) and increased facilitation of learning when compared to conventional sequential text problem solving. This helps learners utilize both subcomponents of their working memory. Instructional design changes were introduced at the case site for the delivery of the postgraduate business subjects. To make effective use of auditory and visual modalities, video-recorded lectures and key-concept webinars were delivered to students. Videos were prepared to free students' limited working memory from irrelevant mental effort, as all elements on a visual screen can be viewed simultaneously and processed quickly, which facilitates greater psychological processing efficiency. Most case study students in the postgraduate programs are adults, working full-time at higher management levels, and studying part-time. Their learning styles and needs are different from those of other tertiary students. The purpose of the audio and visual interventions was to lower the students' cognitive load and provide an online environment supportive of their efficient learning. These changes were expected to impact the students' learning experience, their academic performance, and retention favourably. This paper posits that these changes to instructional design help students integrate new knowledge into their long-term memory. A mixed methods case study methodology was used in this investigation. Primary data were collected from interviews and surveys of students and academics. Secondary data were collected from the organisation’s databases and reports. Some evidence was found that the academic performance of students does improve when new instructional design changes are introduced, although the improvement was not statistically significant. However, the overall grade distribution of students' academic performance changed and skewed higher, which indicates a deeper understanding of the content. It was identified from feedback received from students that recorded webinars served as better learning aids than material with text alone, especially for more complex content. The recorded webinars on the subject content and assessments give students the flexibility to access this material at any time, and as many times as needed, from repositories, which suits their learning styles. Visual and audio information enters students' working memory more effectively. Also, as each assessment included the application of the concepts, conceptual knowledge interacted with pre-existing schemas in long-term memory and lowered students' cognitive load.

Keywords: cognitive load theory, learning style, instructional environment, working memory

Procedia PDF Downloads 129
76 Sensing Study through Resonance Energy and Electron Transfer between a Förster Resonance Energy Transfer Pair of Fluorescent Copolymers and Nitro-Compounds

Authors: Vishal Kumar, Soumitra Satapathi

Abstract:

Förster Resonance Energy Transfer (FRET) is a powerful technique used to probe close-range molecular interactions. Physically, the FRET phenomenon manifests as a dipole-dipole interaction between closely juxtaposed fluorescent molecules (10–100 Å). Our effort is to employ this FRET technique to make a prototype device for highly sensitive detection of environmental pollutants. Among the most common environmental pollutants, nitroaromatic compounds (NACs) are of particular interest because of their durability and toxicity. Hence, sensitive and selective detection of small amounts of nitroaromatic explosives, in particular 2,4,6-trinitrophenol (TNP), 2,4-dinitrotoluene (DNT) and 2,4,6-trinitrotoluene (TNT), has been a critical challenge due to the increasing threat of explosive-based terrorism and the need for environmental monitoring of drinking and waste water. In addition, the excessive utilization of TNP in several other areas, such as burn ointments, pesticides, glass, and the leather industry, has resulted in environmental accumulation, eventually contaminating soil and aquatic systems. To date, a large number of elegant methods, including fluorimetry, gas chromatography, mass spectrometry, ion-mobility spectrometry and Raman spectrometry, have been successfully applied for explosive detection. Among these efforts, fluorescence-quenching methods based on the mechanism of FRET show good assembly flexibility, high selectivity and sensitivity. Here, we report a FRET-based sensor system for the highly selective detection of NACs, such as TNP, DNT and TNT. The sensor system is composed of a copolymer, Poly[(N,N-dimethylacrylamide)-co-(Boc-Trp-EMA)] (RP), bearing a tryptophan derivative in the side chain as the donor, and a dansyl-tagged copolymer, P(MMA-co-Dansyl-Ala-HEMA) (DCP), as the acceptor. Initially, the inherent fluorescence of the RP copolymer is quenched by non-radiative energy transfer to DCP, which only happens once the two molecules are within the Förster critical distance (R₀). The excellent spectral overlap (Jλ = 6.08×10¹⁴ nm⁴M⁻¹cm⁻¹) between the donor's (RP) emission profile and the acceptor's (DCP) absorption profile makes them an efficient FRET pair, as further confirmed by the high rate of energy transfer from RP to DCP (0.87 ns⁻¹) and by lifetime measurements using time-correlated single photon counting (TCSPC), which validate the 64% FRET efficiency. This FRET pair exhibited a specific fluorescence response to NACs such as DNT, TNT and TNP, with LODs of 5.4, 2.3 and 0.4 µM, respectively. The detection of NACs occurs with high sensitivity by photoluminescence quenching of the FRET signal, induced by photo-induced electron transfer (PET) from the electron-rich FRET pair to the electron-deficient NAC molecules. The estimated Stern-Volmer constant (KSV) values for DNT, TNT and TNP are 6.9 × 10³, 7.0 × 10³ and 1.6 × 10⁴ M⁻¹, respectively. The mechanistic details of the molecular interactions, established by time-resolved fluorescence, steady-state fluorescence and absorption spectroscopy, confirmed that the sensing process is of mixed type, i.e. both dynamic and static quenching, as the lifetime of the FRET system (0.73 ns) is reduced to 0.55, 0.57 and 0.61 ns by DNT, TNT and TNP, respectively. In summary, the simplicity and sensitivity of this novel FRET sensor open up the possibility of designing optical sensors for various NACs on one single platform, towards developing a multimodal sensor for environmental monitoring and future field-based studies.
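Two of the quantities reported above, the lifetime-based FRET efficiency (E = 1 − τ_DA/τ_D) and the Stern-Volmer constant (slope of F₀/F versus quencher concentration), can be computed with a few lines of Python; the numbers below are illustrative only and are not the paper's raw data.

```python
import numpy as np

def fret_efficiency(tau_donor_ns, tau_donor_acceptor_ns):
    """FRET efficiency from lifetimes: E = 1 - tau_DA / tau_D."""
    return 1.0 - tau_donor_acceptor_ns / tau_donor_ns

def stern_volmer_constant(quencher_conc_M, f0_over_f):
    """Slope of the Stern-Volmer plot F0/F = 1 + Ksv*[Q], via least squares."""
    slope, _ = np.polyfit(quencher_conc_M, f0_over_f, 1)
    return slope

# Illustrative values: donor lifetime drops from 2.0 ns to 0.72 ns in the pair
print(f"E = {fret_efficiency(2.0, 0.72):.2f}")

conc = np.array([0.0, 5e-6, 10e-6, 20e-6])         # quencher concentration (M)
f0_f = 1 + 1.6e4 * conc                            # synthetic data, Ksv ~ 1.6e4 M^-1
print(f"Ksv = {stern_volmer_constant(conc, f0_f):.2e} M^-1")
```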

Keywords: FRET, nitroaromatic compounds, Stern-Volmer constant, tryptophan- and dansyl-tagged copolymers

Procedia PDF Downloads 121
75 Solymorph: Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance

Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi

Abstract:

Solymorph, a kinetic building facade designed for optimal energy capture and architectural expression, is explored in this paper. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of novel facade systems. Increased energy generation and regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, Solymorph leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, 3D printing, and laser cutting, were utilized to fabricate the physical components. Finally, a modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of Solymorph to an existing library building at Politecnico di Milano is presented. The facade system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable-net structure. Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to the limitations of a 3D printer. Casting Silicone Rubber Sil 15 was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. Solymorph demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, Solymorph paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
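The LLM-assisted step described above, generating Python code for a random surface with user-defined parameters, might look roughly like the sketch below; the function name, parameters, and sinusoidal height field are assumptions standing in for the project's actual Grasshopper/Python script.

```python
import numpy as np

def random_facade_surface(n_u=20, n_v=20, amplitude=0.4, waves=3, seed=None):
    """Generate a doubly-curved point grid for a facade sub-frame study.

    User-defined parameters: grid density (n_u, n_v), height amplitude, and
    the number of randomised sinusoidal components. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    u = np.linspace(0.0, 1.0, n_u)
    v = np.linspace(0.0, 1.0, n_v)
    uu, vv = np.meshgrid(u, v)
    zz = np.zeros_like(uu)
    for _ in range(waves):
        fx, fy = rng.uniform(0.5, 3.0, size=2)     # random spatial frequencies
        px, py = rng.uniform(0.0, 2 * np.pi, size=2)
        zz += (amplitude / waves) * np.sin(2 * np.pi * fx * uu + px) \
                                  * np.sin(2 * np.pi * fy * vv + py)
    return np.stack([uu, vv, zz], axis=-1)         # (n_v, n_u, 3) point grid

points = random_facade_surface(seed=42)
print(points.shape, points[..., 2].min().round(3), points[..., 2].max().round(3))
```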

Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building

Procedia PDF Downloads 42
74 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains

Authors: Jing Jin

Abstract:

The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information, when it contributes to stakeholder goals, can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-method quantitative instrument, a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and the quality dimensions. The moderately high density of the ‘information distortion-by-information quality’ network underscores the interconnected nature of these factors. In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
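The correlation and social network analysis steps described above can be sketched in Python as follows; the survey scores, the choice of which edges to add, and the resulting density are hypothetical illustrations, not the study's data.

```python
import numpy as np
import networkx as nx
from scipy.stats import pearsonr

# Hypothetical Likert-style scores for one supplier tier: perceived distortion
# versus ratings of two information-quality dimensions.
distortion = np.array([2, 3, 4, 4, 5, 3, 2, 5, 4, 3])
accuracy   = np.array([3, 3, 4, 5, 5, 4, 2, 5, 4, 3])
timeliness = np.array([4, 3, 3, 2, 2, 3, 4, 2, 3, 3])

for name, dim in [("accuracy", accuracy), ("timeliness", timeliness)]:
    r, p = pearsonr(distortion, dim)               # Pearson correlation per dimension
    print(f"distortion vs {name}: r = {r:.2f}, p = {p:.3f}")

# 'Distortion-by-quality-dimension' network: add an edge when a correlation is
# judged meaningful, then report the network density as in SNA.
G = nx.Graph()
G.add_edge("information distortion", "accuracy")
G.add_edge("information distortion", "interpretability")
print(f"network density = {nx.density(G):.2f}")
```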

Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry

Procedia PDF Downloads 43
73 Reviving Customs: Examining the Vernacular Habitus in Modern Marathi Film via the Tamasha Genre

Authors: Amar Ramesh Wayal

Abstract:

Marathi cinema, an integral part of India’s diverse film industry, has significantly evolved in its storytelling and aesthetics, with the Tamasha genre being central to this evolution. Tamasha, a traditional form of Marathi theatre, features vibrant dance and music, especially the rhythmic and often suggestive musical genre, lavani. It gained cinematic prominence in the 1960s with Anant Mane’s Sangtye Aika (1959), which brought Tamasha to the silver screen and popularized it, and V. Shantaram’s Pinjra (1972), an iconic Tamasha drama. Despite early success, Tamasha films declined in popularity until Natarang (2010) revitalized interest in this traditional form. This study examines the relevance and evolution of the Tamasha genre in Marathi cinema through contemporary films like Ek Hota Vidushak (1992) by Jabbar Patel, Natarang (2010) by Ravi Jadhav, and Tamasha Live (2022) by Sanjay Jadhav. The selection of these films is based on their significant roles in the evolution of Tamasha in Marathi cinema. Ek Hota Vidushak explores socio-political themes through Tamasha, Natarang depicts the struggles and emotional depth of Tamasha performers, and Tamasha Live integrates traditional Tamasha into modern cinema. By analysing films from different periods, this study highlights the genre’s reinterpretation and adaptation over time. The study employs a qualitative approach, utilizing textual analysis and cultural critique to examine the portrayal and evolution of Tamasha in the selected films. It aims to illuminate the complex relationship between tradition and modernity in Marathi cinema through Foucauldian discourse analysis and Pierre Bourdieu’s concept of “vernacular habitus,” which refers to local, indigenous cultural spaces that shape people’s perceptions and expressions. By analysing these films, the study seeks to understand how traditional cultural forms are integrated into contemporary cinematic narratives. However, this method has limitations, such as subjectivity in interpretation and the need for extensive contextual knowledge. Qualitative research can be subject to researcher bias, affecting analysis and conclusions. To mitigate this, the study maintains rigorous reflexivity and transparency regarding the researcher’s positionality. Furthermore, findings from specific film analyses may not be universally applicable to all Tamasha films or broader Marathi cinema. To enhance the study’s robustness, future research could incorporate comparative or quantitative data to complement qualitative insights. Despite these challenges, qualitative research is crucial for exploring cultural artifacts and their significance within specific contexts. By triangulating qualitative findings with diverse perspectives and acknowledging limitations, this study aims to provide a nuanced understanding of how Tamasha cinema preserves and revitalizes Maharashtra’s folk traditions while adapting them to contemporary contexts. Analysing films by Jabbar Patel, Ravi Jadhav, and Sanjay Jadhav shows how these filmmakers balance traditional aesthetics with modern storytelling, bridging historical continuity with contemporary relevance. This study offers insights into how indigenous traditions like Tamasha continue to shape and define cinematic narratives in Maharashtra.

Keywords: Marathi cinema, Tamasha genre, vernacular habitus, discourse analysis, cultural evolution

Procedia PDF Downloads 16