Search results for: deformable objects
28 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read Kannada Language between 5.6 to 8.6 Years
Authors: Vangmayee. V. Subban, Somashekara H. S, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: Phonological processing is critical in learning to read alphabetical and non-alphabetical languages. However, its role in learning to read Kannada, an alphasyllabary, is equivocal. The literature has focused on the developmental role of phonological awareness in reading. To the best of the authors' knowledge, the role of phonological memory and phonological naming has not been addressed in the alphasyllabary Kannada language. Therefore, there is a need to evaluate the comprehensive role of phonological processing skills in Kannada on word decoding skills during the early years of schooling. Aim and Objectives: The present study aimed to explore the phonological processing abilities and their role in learning to decode pseudowords in children learning to read the Kannada language during the initial years of formal schooling, between 5.6 and 8.6 years. Method: In this cross-sectional study, 60 typically developing Kannada-speaking children, 20 each from Grade I, Grade II, and Grade III in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years, respectively, were selected from Kannada-medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The assessment tool was content validated by subject experts and had good inter- and intra-subject reliability. Phonological awareness was assessed at the syllable level using syllable segmentation, blending, and syllable stripping at initial, medial, and final positions. Phonological memory was assessed using a pseudoword repetition task, and phonological naming was assessed using rapid automatized naming of objects. Both phonological awareness and phonological memory measures were scored for the accuracy of the response, whereas Rapid Automatized Naming (RAN) was scored for total naming speed. Results: The mean scores comparison using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all the measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent post-hoc grade-wise comparison using the Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all the tasks except (p ≥ 0.05) for syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III. The Pearson correlations revealed a highly significant positive correlation (p=0.000) between all the variables except phonological naming, which had significant negative correlations. However, the correlation coefficient was higher for the phonological awareness measures compared to the others. Hence, phonological awareness was chosen as the first independent variable to enter into the hierarchical regression equation, followed by rapid automatized naming and finally pseudoword repetition. The regression analysis revealed syllable awareness as the single most significant predictor of pseudoword reading, explaining a unique variance of 74%, and there was no significant change in R² when RAN and pseudoword repetition were added subsequently to the regression equation. Conclusion: The present study concluded that syllable awareness matures completely by Grade II, whereas phonological memory and phonological naming continue to develop beyond Grade III.
Amongst phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling.
Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding
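The hierarchical regression step described above (entering syllable awareness first, then RAN, then pseudoword repetition, and checking the change in R²) can be sketched as follows. This is a minimal illustration on simulated data; the variable names and values are hypothetical and not the study's dataset.

```python
# A minimal sketch of hierarchical regression on hypothetical data; variable names
# (syllable_awareness, ran_speed, pw_repetition, pw_reading) are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60  # same sample size as the study, but values are simulated
df = pd.DataFrame({
    "syllable_awareness": rng.normal(size=n),
    "ran_speed": rng.normal(size=n),
    "pw_repetition": rng.normal(size=n),
})
df["pw_reading"] = 0.8 * df["syllable_awareness"] + rng.normal(scale=0.5, size=n)

# Enter predictors block by block and track the change in R-squared.
blocks = [["syllable_awareness"], ["ran_speed"], ["pw_repetition"]]
predictors, prev_r2 = [], 0.0
for block in blocks:
    predictors += block
    model = sm.OLS(df["pw_reading"], sm.add_constant(df[predictors])).fit()
    print(f"+{block}: R2={model.rsquared:.3f}, delta R2={model.rsquared - prev_r2:.3f}")
    prev_r2 = model.rsquared
```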
Procedia PDF Downloads 175
27 On the Lithology of Paleocene-Lower Eocene Deposits of the Achara-Trialeti Fold Zone: The Lesser Caucasus
Authors: Nino Kobakhidze, Endi Varsimashvili, Davit Makadze
Abstract:
The Caucasus is a link of the Alpine-Himalayan fold belt and involves the Greater Caucasus and the Lesser Caucasus fold systems and the Intermountain area. The study object is located within the northernmost part of the Lesser Caucasus orogen, in the eastern part of the Achara-Trialeti fold-thrust belt. This area was rather well surveyed in the 1970s in terms of oil-and-gas potential, but to the best of our knowledge, detailed sedimentological studies have not been conducted so far. In order to fill this gap, the authors of the present thesis started research in this direction. One of the objects selected for the research was the deposits of the Kavtura river valley situated on the northern slope of the Trialeti ridge. Paleocene-Lower Eocene deposits known in the scientific literature as ‘Borjomi Flysch’ (turbidites) are exposed in the mentioned area. During the research, the following methodologies were applied: selection of key cross-sections, collection of rock samples, microscopic description of thin sections, mineralogical and petrological analysis of the material, and identification of trace fossils. The study of the Paleocene-Lower Eocene deposits starts with the Kavtura river valley in the east, where they are well characterized by microfauna. The cross-section of the deposits starts with Danian variegated marlstone conformably overlain by an alternation of thick- and thin-bedded sandstones (thickness 40-50 cm). They continue with interbedded thin-bedded sandstones and shales (thickness 4-5 m). On the sole surfaces of the sandstones, the ichnogenera ‘Helmintopsis’ and ‘Scolicia’ are recorded, and within the bed ‘Chondrites’ is found. Towards the riverhead, there is a 1-2 m gap in sedimentation; then the Paleocene-Lower Eocene sediments crop out again. They start with an alternation of grey-green medium-grained sandstones and shales enclosing dark-colored plant detritus. They are overlain by interbedded calcareous sandstones and marls, where the thickness of the sandstones is variable (20-70 cm). The ichnogenus ‘Scolicia’ is found here. Upwards, the above-mentioned deposits pass into Middle Eocene volcanogenic-sedimentary suites. In the Kavtura river valley, the thickness of the Paleocene-Lower Eocene deposits is 300-400 m. In the process of research, the following activities were conducted: facies analysis of the host rocks, correlation of the studied section with other cross-sections, and interpretation of the depositional environment of the area. In the area, the authors have found and described ichnogenera; their preliminary determination has shown that they belong to pre-depositional (‘Helmintopsis’) and post-depositional (‘Chondrites’) forms. As is known, during Cretaceous-Paleogene time, the Achara-Trialeti fold-thrust belt extensional basin was an accumulation area of great thicknesses (from shallow to deep marine sediments). This is confirmed once more by the authors' investigations, the preliminary results of the paleoichnological studies included.
Keywords: flysch deposits, lithology, the Lesser Caucasus, trace fossils
Procedia PDF Downloads 164
26 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations
Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos
Abstract:
Accurate information about the terrain is extremely important in disaster management activities or conflicts. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be the zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an actual Digital Surface Model (DSM). This methodology allows this information, fundamental to post-disaster comprehension of the terrain, to be obtained in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI drone, model Phantom 4, have been used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered in the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes of this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18m x 18m); HLZ 2 – medium size (23m x 23m); HLZ 3 – large size (28m x 28m); AZ (100m x 100m). The decision tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees. Different random sets of samples are used as sampled objects. The classification result from each tree for each object is called a class vote. The resulting classification is decided by a majority of class votes. In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ covers the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2), and small (HLZ 1) helicopters. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations since they can provide data with high accuracy, low cost, low risk, and ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain in order to serve as an important decision support tool.
Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest
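The Random Forest classification described above (200 trees, majority class vote) was run in WEKA 3.8; the sketch below reproduces the same idea with scikit-learn on simulated stand-in features, so the attribute values, class balance, and reported accuracy are illustrative only.

```python
# Minimal Random Forest sketch with scikit-learn rather than the WEKA 3.8 setup;
# the feature matrix is simulated stand-in data (e.g., spectral bands, PCA
# components, declivity), not the authors' orthomosaic attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))        # 6 illustrative per-segment attributes
y = rng.integers(0, 4, size=1000)     # classes: 0=HLZ1, 1=HLZ2, 2=HLZ3, 3=AZ

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 200 trees, matching the number reported above; the final label is the majority class vote.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["HLZ1", "HLZ2", "HLZ3", "AZ"]))
```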
Procedia PDF Downloads 177
25 Challenges and Recommendations for Medical Device Tracking and Traceability in Singapore: A Focus on Nursing Practices
Authors: Zhuang Yiwen
Abstract:
The paper examines the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. One of the major challenges identified is the lack of a standard coding system for medical devices, which makes it difficult to track them effectively. The paper suggests the use of the Unique Device Identifier (UDI) as a single standard for medical devices to improve tracking and reduce errors. The paper also explores the use of barcoding and image recognition to identify and document medical devices in nursing practices. In nursing practices, the use of barcodes for identifying medical devices is common. However, the information contained in these barcodes is often inconsistent, making it challenging to identify which segment contains the model identifier. Moreover, the use of barcodes may be improved with the use of UDI, but many subsidized accessories may still lack barcodes. The paper suggests that readiness for UDI and barcode standardization requires standardized information, fields, and logic in electronic medical record (EMR), operating theatre (OT), and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. Nursing workflow and data flow also need to be taken into account. The paper also explores the use of image recognition, specifically the Tesseract OCR engine, to identify and document implants in public hospitals due to limitations in barcode scanning. The study found that the solution requires an implant information database and checking the output against the database. The solution also requires customization of the algorithm, cropping out objects affecting text recognition, and applying adjustments. The solution requires additional resources and costs for a mobile/hardware device, which may pose space constraints and require maintenance of sterile criteria. Integration with the EMR is also necessary, and the solution requires changes in the user's workflow. The paper suggests the long-term use of Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) as a supporting terminology to improve clinical documentation and data exchange in healthcare. SNOMED CT provides a standardized way of documenting and sharing clinical information with respect to procedure, patient, and device documentation, which can facilitate interoperability and data exchange. In conclusion, the paper highlights the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. The paper suggests the use of UDI and barcode standardization to improve tracking and reduce errors. It also explores the use of image recognition to identify and document medical devices in nursing practices. The paper emphasizes the importance of standardized information, fields, and logic in EMR, OT, and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. These recommendations could help the Singapore healthcare system to improve the tracking and traceability of medical devices and ultimately enhance patient safety.
Keywords: medical device tracking, unique device identifier, barcoding and image recognition, systematized nomenclature of medicine clinical terms
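The Tesseract-based image recognition step (read the label, then check the output against an implant information database) could look roughly like the sketch below. It is illustrative only: the image path, database entries, and matching threshold are assumptions, not part of the hospitals' actual solution.

```python
# Illustrative OCR-and-lookup sketch using the Tesseract engine via pytesseract;
# the image path and the implant database are placeholders, not real records.
import difflib
import pytesseract
from PIL import Image

IMPLANT_DB = ["ACME Hip Stem 12mm", "ACME Acetabular Cup 52mm", "ZET Knee Insert 10mm"]  # assumed entries

def identify_implant(image_path: str, min_ratio: float = 0.6):
    # Cropping/thresholding steps that remove objects affecting text recognition would go here.
    text = pytesseract.image_to_string(Image.open(image_path))
    # Check each OCR line against the database and keep the closest match.
    candidates = []
    for line in filter(None, (l.strip() for l in text.splitlines())):
        match = difflib.get_close_matches(line, IMPLANT_DB, n=1, cutoff=min_ratio)
        if match:
            candidates.append((line, match[0]))
    return candidates

print(identify_implant("implant_label.jpg"))  # placeholder image of an implant package label
```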
Procedia PDF Downloads 77
24 Response of Subfossile Diatoms, Cladocera, and Chironomidae in Sediments of Small Ponds to Changes in Wastewater Discharges from a Zn–Pb Mine
Authors: Ewa Szarek-Gwiazda, Agata Z. Wojtal, Agnieszka Pociecha, Andrzej Kownacki, Dariusz Ciszewski
Abstract:
Mining of metal ores is one of the largest sources of heavy metals, which deteriorate aquatic systems. The response of organisms to environmental changes can be well recorded in the sediments of the affected water bodies and may be reconstructed based on analyses of the organisms' remains. The present study focused on the response of diatom (Bacillariophyta), Cladocera, and Chironomidae communities to the impact of Zn-Pb mine water discharge recorded in sediment cores of small subsidence ponds on the Chechło River floodplain (Silesia–Krakow Region, southern Poland). We hypothesize various responses of the above groups to high metal concentrations (Cd, Pb, Zn, and Cu). The investigated ponds were formed either during the peak of the ore exploitation (DOWN) or after mining cessation (UP). Currently, the concentrations of dissolved metals (in µg g⁻¹) in water reached up to 0.53 for Cd, 7.3 for Pb, and up to 47.1 for Zn. All the sediment cores from the subsidence ponds were heavily polluted with Cd 6.7–612 μg g⁻¹, Pb 0.1–10.2 mg g⁻¹, and Zn 0.5–23.1 mg g⁻¹. Core sediments also varied in respect of pH (5.8-7.1) and concentrations of organic matter (5.7-39.8%). The impact of high metal concentrations was expressed by the occurrence of metal-tolerant taxa like the diatoms Nitzschia amphibia, Sellaphora nigri, and Surirella brebisonii var. kuetzingii; the cladoceran Chydorus sphaericus (dominant in cores from all ponds); and the chironomids Chironomus and Cricotopus, especially in the DOWN ponds. Statistical analysis exhibited a negative impact of metals on some taxa of diatoms and Cladocera but only on Polypedilum sp. from the Chironomidae. The abundance of diatoms like Gomphonema utae, Staurosirella pinnata, and Eunotia bilunaris, and of Cladocera like Alona, Chydorus, Graptoleberis, and Pleuroxus, decreased with increasing Pb concentration. However, the occurrence or dominance of more sensitive species of diatoms and Cladocera indicates their adaptation to higher metal loads, which was facilitated by the neutral pH and slightly alkaline waters. Diatom assemblages were generally resistant to Zn, Pb, Cu, and Cd pollution, as indicated by their large similarity to populations from non-contaminated waters. Comparison with reference objects clearly indicates the dominance of Achnanthidium minutissimum, Staurosira venter, and Fragilaria gracilis in the very diverse assemblages of unpolluted waters. The distribution of the Cladocera and Chironomidae taxa depended on the habitat type. The DOWN ponds, with stagnant water and overgrown with macrophytes, were more suitable for cladocerans (14 taxa, higher diversity) than the UP ponds, with river water flowing through their centre and with a small share of macrophytes (8 taxa). The Chironominae, mainly Chironomus and Microspectra, were abundant in cores from the UP ponds with muddy bottoms. Conversely, the density of Orthocladiinae, especially the genus Cricotopus, was related to the organic matter content and dominated in cores from the DOWN ponds. The presence of the diatoms Nitzschia amphibia, Sellaphora nigri, and Surirella brebisonii var. kuetzingii, the cladocerans Bosmina longirostris, Chydorus sphaericus, Alona affinis, and A. rectangularis, as well as the chironomids Chironomus sp. (UP ponds) and Psecrotanypus varius (DOWN ponds), indicates the influence of water trophy on their distribution.
Keywords: Chironomidae, Cladocera, diatoms, metals, Zn-Pb mine, sediment cores, subsidence ponds
Procedia PDF Downloads 77
23 Effects of Live Webcast-Assisted Teaching on Physical Assessment Technique Learning of Young Nursing Majors
Authors: Huey-Yeu Yan, Ching-Ying Lee, Hung-Ru Lin
Abstract:
Background: Physical assessment is a vital clinical nursing competence. The gap between conventional teaching methods and the way e-generation students prefer to learn could be bridged with the support of Internet technology, i.e., interacting with online media to manage learning tasks. In the wake of the new learning patterns of e-generation students, nursing instructors are challenged to actively adjust and make teaching content and methods more versatile. Objective: The objective of this research is to explore the effects of live webcast-assisted teaching and learning of a specific topic, physical assessment techniques, on a designated group of young nursing majors. It is hoped that this way of nursing instruction may provide more versatile learning resources to facilitate self-directed learning. Design: This research adopts a cross-sectional descriptive survey. The instructor demonstrated physical assessment techniques and operation procedures via a live webcast broadcast online to all students, which increased the out-of-class interaction between teacher and students concerning the teaching materials. Methods: Convenience sampling was used to recruit a total of 52 nursing majors at a certain university. The nursing majors took two-hour classes of Physical Assessment per week for 18 weeks (36 hrs. in total). The instruction covered four units with live webcasting, after which an anonymous online questionnaire survey of learning outcomes was conducted. The research instrument was the online questionnaire, covering three major domains: online media use, learning outcome evaluation, and evaluation results. The data analysis was conducted via IBM SPSS Statistics Version 2.0. Descriptive statistics were undertaken to describe the basic data and learning outcomes. Statistical methods such as descriptive statistics, t-test, ANOVA, and Pearson's correlation were employed in verification. Results: Results indicated the following five major findings. (1) Learning motivation: about four-fifths of the participants agreed the online instruction resources are very helpful in improving learning motivation and raising learning interest. (2) Learning needs: about four-fifths of participants agreed it was helpful to plan self-directed practice after the instruction and to meet their needs for repetitive learning and/or practice in their leisure time. (3) Learning effectiveness: about two-thirds agreed it was helpful to reduce pre-exam anxiety and improve their test scores. (4) Course objectives: about three-fourths agreed that it was helpful to achieve the goal of ‘executing the complete Physical Assessment procedures with proper skills’. (5) Finally, learning reflection: almost all participants agreed this experience of online instructing, learning, and practicing was beneficial to them; they recommend the instructor share it with other nursing majors, and they will recommend it to fellow students too. Conclusions: Live webcasting is a low-cost, convenient, efficient, and interactive resource to facilitate nursing majors' learning motivation, needs for self-directed learning and practice, and learning outcomes. When live webcasting is integrated into nursing teaching, it provides an opportunity for self-directed learning that promotes learning effectiveness and, as such, fulfills the teaching objective.
Keywords: innovative teaching, learning effectiveness, live webcasting, physical assessment technique
Procedia PDF Downloads 132
22 Menstruating Bodies and Social Control – Insights From Dignity Without Danger: Collaboratively Analysing Menstrual Stigma and Taboos in Nepal
Authors: Sara Parker, Kay Standing
Abstract:
This paper will share insights into how menstruators' bodies in Nepal are viewed and controlled due to the deeply held stigmas and taboos that frame menstrual blood as impure and polluting. It draws on a British Academy Global Challenges Research Fund (BA/GCRF) funded project, ‘Dignity Without Danger’, that ran from December 2019 to 2022. In Nepal, beliefs and myths around menstrual-related practices prevail and vary according to time, generation, caste and class. Physical seclusion and/or restrictions include the consumption of certain foods, the ability to touch certain people and objects, and restricted access to water sources. These restrictions not only put women at risk of poor health outcomes, but they also promote discrimination and challenge fundamental human rights. Despite the pandemic, a wealth of field research and creative outputs have been generated to help break the silence that surrounds menstruation and to highlight the complexity of addressing the harms associated with the exclusion from sacred and profane spaces that menstruators face. Working with locally recruited female research assistants and NGOs, and bringing together academics from the UK and Nepal, we explore the intersecting factors that impact menstrual experiences and how they vary throughout Nepal. We concur with Tamang that there is no such thing as a ‘Nepali Woman’, and there is no one narrative that captures the experiences of menstruators in Nepal. These deeply held beliefs and practices mean that menstruators are denied their right to a dignified menstruation. By being excluded from public and private spaces, such as temples and religious sites, as well as from kitchens and their own bedrooms in their own homes, individuals are affected by these beliefs in complex and interesting ways. Existing research in Nepal by academics and activists demonstrates that current programmes and initiatives do not fully address the misconceptions that underpin the exclusionary practices impacting sexual and reproductive health and a sense of wellbeing, and highlights that more work is needed in this area. Research has been conducted in all 7 provinces, and through exploring and connecting disparate stories, artefacts and narratives, we will deepen understanding of the complexity of menstrual practices, enabling local stakeholders to challenge exclusionary practices. By using creative methods to engage with stakeholders and share our research findings, as well as highlighting the wealth of activism in Nepal, we highlight the importance of working with local communities and leaders and cutting across disciplines and agencies to promote menstrual justice and dignity. Our research findings and creative outputs, which we share on social media channels such as the Dignity Without Danger Facebook, Instagram and YouTube channels, stress the value of employing a collaborative action research approach to generate material which helps local people take control of their own narrative and change social relations that lead to harmful practices.
Keywords: menstruation, Nepal, stigma, social norms
Procedia PDF Downloads 65
21 Genetic Polymorphism and Insilico Study Epitope Block 2 MSP1 Gene of Plasmodium falciparum Isolate Endemic Jayapura
Authors: Arsyam Mawardi, Sony Suhandono, Azzania Fibriani, Fifi Fitriyah Masduki
Abstract:
Malaria is an infectious disease caused by Plasmodium sp. This disease has a high prevalence in Indonesia, especially in Jayapura. The vaccine that is currently being developed has not been effective in overcoming malaria. This is due to the high polymorphism in the Plasmodium genome, especially in regions that encode Plasmodium surface proteins. Merozoite Surface Protein 1 (MSP1) of Plasmodium falciparum is a surface protein that plays a role in the invasion process in human erythrocytes through the interaction of the Glycophorin A protein receptors and sialic acid in erythrocytes with the Reticulocyte Binding Proteins (RBP) and Duffy Adhesion Protein (DAP) ligands in merozoites. MSP1 can be targeted as a specific antigen, and its predicted epitope regions can be used for the development of malaria diagnostics and vaccine therapy. MSP1 consists of 17 blocks; each block is dimorphic and has been marked as the K1 and MAD20 alleles. The only exception is block 2, which has 3 alleles: K1, MAD20 and RO33. These polymorphisms cause allelic variations and have implications for the severity of disease in patients infected with P. falciparum. In addition, polymorphism of MSP1 in Jayapura isolates has not been reported, so it is of interest to identify it further and project it as a specific antigen. Therefore, in this study, we analyzed the allele polymorphism as well as detected the MSP1 epitope antigen candidate on block 2 of P. falciparum. Clinical samples of malaria patients were selected following the consecutive sampling method, with malaria parasites examined in blood preparations on glass slides observed through a microscope. Plasmodium DNA was isolated from the blood of malaria-positive patients. The block 2 MSP1 gene was amplified using the PCR method and cloned using the pGEM-T Easy vector, then transformed into TOP10 E. coli. Positive colony selection was performed with blue-white screening. The presence of the target DNA was confirmed by colony PCR and DNA sequencing. Furthermore, DNA sequence analysis was done through alignment and formation of a phylogenetic tree using MEGA 6 software, and in silico analysis using IEDB software to predict epitope candidates for P. falciparum. Plasmodium DNA has been isolated from a total of 15 patient samples. PCR amplification results show a target gene size of about 1049 bp. The results of the MSP1 nucleotide alignment analysis reveal that block 2 MSP1 genes derived from the sample of malaria patients were distributed in four different allele family groups: the K1 (7), MAD20 (1), RO33 (0) and MSP1_Jayapura (10) alleles. The most commonly detected allele is the single MSP1_Jayapura allele. There was no significant association between the variables sex, age, or parasitemia density and allele variation (Mann-Whitney, p > 0.05), while symptomatic signs showed a significant difference as a trigger of detectable allele variation (p < 0.05). In this research, the in silico study shows that there is a new epitope antigen candidate from the MSP1_Jayapura allele, predicted to be recognized by B cells, with a length of 17 amino acids spanning amino acid positions 187 to 203.
Keywords: epitope candidate, insilico analysis, MSP1 P. falciparum, polymorphism
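As a small illustration of the nonparametric comparison reported above, the sketch below runs a Mann-Whitney U test in Python on simulated values; the group sizes and numbers are stand-ins, not the study's patient data.

```python
# Mann-Whitney U sketch on simulated stand-in values; group labels and values
# are illustrative only, not the study's clinical measurements.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# e.g., parasitemia density in patients carrying two different allele families
group_k1 = rng.lognormal(mean=6.0, sigma=0.8, size=7)
group_jayapura = rng.lognormal(mean=6.2, sigma=0.8, size=10)

stat, p_value = mannwhitneyu(group_k1, group_jayapura, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")  # p >= 0.05 would indicate no significant association
```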
Procedia PDF Downloads 180
20 Multiple Plant-Based Cell Suspension as a Bio-Ink for 3D Bioprinting Applications in Food Technology
Authors: Yusuf Hesham Mohamed
Abstract:
Introduction: Three-dimensional printing technology includes multiple procedures that fabricate three-dimensional objects through consecutively layering two-dimensional cross-sections on top of each other. 3D bioprinting is a promising field of 3D printing, which fabricates tissues and organs by accurately controlling the proper arrangement of diverse biological components. 3D bioprinting uses software and prints biological materials and their supporting components layer-by-layer on a substrate or in a tissue culture plate to produce complex live tissues and organs. 3D food printing is an emerging field of 3D bioprinting in which the 3D printed products are food products that are cheap, require less effort to produce, and have more desirable traits. Aim of the Study: The development of an affordable 3D bioprinter by altering a locally made CNC instrument with an open-source platform to suit 3D bioprinting purposes. Later, we applied the prototype in several applications regarding food technology and drug testing, including organ-on-chip. Materials and Methods: An off-the-shelf 3D printer was modified by designing and fabricating the syringe unit, which was designed on the basis of a milli-fluidics system. Sodium alginate and gelatin hydrogels were prepared, followed by leaf cell suspension preparation from narrow sections of Fragaria's viable leaves. The desired 3D structure was modeled, and 3D printing preparations took place. Cell-free and cell-laden hydrogels were printed at room temperature under sterile conditions. A post-printing curing process was performed. The printed structure was further studied. Results: Positive results have been achieved using the altered 3D bioprinter, with which a two-layer 3D hydrogel construct made of a combination of sodium alginate and gelatin (15%:0.5%) was printed. A DLP 3D printer was used to produce the syringe component with a transparent PLA-Pro resin for the creation of a two-channel microfluidics system adapted to the double extruder. The hydrogel extruder's design was based on peristaltic pumps, which utilized a stepper motor. The design and fabrication were carried out using DIY 3D-printed parts. Hard plastic PLA was the material utilized for printing. SEM was used to image the porous 3D construct. Multiple physical and chemical tests were performed in order to ensure that the cell line was suitable for hosting. Fragaria plant material was developed by suspending cells from Fragaria's leaves using the 3D bioprinter. Conclusion: 3D bioprinting is considered to be an emerging scientific field that can facilitate and improve many scientific tests and studies. Thus, having a 3D bioprinter in labs is considered to be an essential requirement. 3D bioprinters are very expensive; however, converting a 3D printer into a 3D bioprinter can lower the cost of the bioprinter. The 3D bioprinter implemented made use of peristaltic pumps instead of syringe-based pumps in order to extend the ability to print multiple types of materials and cells.
Keywords: scaffold, eco on chip, 3D bioprinter, DLP printer
Procedia PDF Downloads 119
19 Artificial Intelligence for Traffic Signal Control and Data Collection
Authors: Reggie Chandra
Abstract:
Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and what the best workflow for that is. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training - which is called Deep Learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data. We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal
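To make the CNN-based detection step concrete, the sketch below counts vehicles, pedestrians, and bikes in a single frame with an off-the-shelf, COCO-pretrained detector from torchvision. It is only an illustration of the general approach, not the proprietary system described in the abstract, and the image path is a placeholder.

```python
# Minimal CNN-based detection sketch with a COCO-pretrained torchvision model;
# illustrative only, not the vendor's system. The frame path is a placeholder.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

COCO_IDS = {1: "person", 2: "bicycle", 3: "car", 6: "bus", 8: "truck"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = convert_image_dtype(read_image("intersection_frame.jpg"), torch.float)  # placeholder frame
with torch.no_grad():
    pred = model([img])[0]

# Count detections above a confidence threshold, per class of interest.
counts = {}
for label, score in zip(pred["labels"].tolist(), pred["scores"].tolist()):
    if score > 0.5 and label in COCO_IDS:
        counts[COCO_IDS[label]] = counts.get(COCO_IDS[label], 0) + 1
print(counts)  # e.g., {'car': 7, 'person': 2}
```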
Procedia PDF Downloads 169
18 Accessing Motional Quotient for All Round Development
Authors: Zongping Wang, Chengjun Cui, Jiacun Wang
Abstract:
The concept of intelligence has been widely used to assess an individual's cognitive abilities to learn, form concepts, understand, apply logic, and reason. According to the multiple intelligence theory, there are eight distinguished types of intelligence. One of them is bodily-kinaesthetic intelligence, which relates to an individual's capacity to control his or her body and work with objects. Motor intelligence, on the other hand, reflects the capacity to understand, perceive and solve functional problems by motor behavior. Both bodily-kinaesthetic intelligence and motor intelligence refer directly or indirectly to bodily capacity. Inspired by these two intelligence concepts, this paper introduces motional intelligence (MI). MI is two-fold. (1) Body strength, which is the capacity of various organ functions manifested by muscle activity under the control of the central nervous system during physical exercises. It can be measured by the magnitude of muscle contraction force, the frequency of repeating a movement, the time to finish a movement of body position, the duration to maintain muscles in a working status, etc. Body strength reflects the objective side of MI. (2) Level of psychological willingness toward physical events. It is subjective and determined by an individual's self-consciousness of physical events and resistance to fatigue. As such, we call it subjective MI. Subjective MI can be improved through education and proper social events. The improvement of subjective MI can lead to that of objective MI. A quantitative score of an individual's MI is the motional quotient (MQ). MQ is affected by several factors, including genetics, physical training, diet and lifestyle, family and social environment, and personal awareness of the importance of physical exercise. Genes determine one's body strength potential. Physical training, in general, makes people stronger, faster and swifter. Diet and lifestyle have a direct impact on health. Family and social environment largely affect one's passion for physical activities, as does personal awareness of the importance of physical exercise. The key to the success of the MQ study is developing an acceptable and efficient system that can be used to assess MQ objectively and quantitatively. We should apply different assessment systems to different groups of people according to their ages and genders. Field tests, laboratory tests and questionnaires are among the essential components of MQ assessment. A scientific interpretation of the MQ score is part of an MQ assessment system, as it will help an individual to improve his MQ. IQ (intelligence quotient) and EQ (emotional quotient) and their tests have been studied intensively. We argue that IQ and EQ study alone is not sufficient for an individual's all-round development. The significance of the MQ study is that it complements IQ and EQ study. MQ reflects an individual's mental level as well as bodily level of intelligence in physical activities. It is well-known that the American Springfield College seal includes the Luther Gulick triangle with the words “spirit,” “mind,” and “body” written within it. MQ, together with IQ and EQ, echoes this education philosophy. Since its inception in 2012, MQ research has spread rapidly in China. By now, six prestigious universities in China have established research centers on MQ and its assessment.
Keywords: motional intelligence, motional quotient, multiple intelligence, motor intelligence, all round development
Procedia PDF Downloads 162
17 Exposing The Invisible
Authors: Kimberley Adamek
Abstract:
According to the Council on Tall Buildings, there has been a rapid increase in the construction of tall or “megatall” buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported that there has been a steady increase in climate-related natural disasters since the 1970s, the eastern expansion of the USA's infamous Tornado Alley being just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, would have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified within wind tunnels and a handful of cities such as San Francisco have included wind testing within building code standards, there are still many examples where wind is only considered for basic loading. This typically results in an increase in structural expense and unwanted mitigation strategies that are proposed late within a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (Walkie-Talkie Tower, Leeds, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and caused wind tunnels in streets, as well as many other issues. Alternatively, there exist towers which have used their form to naturally draw in air and ventilate entire facilities in order to eliminate the need for costly HVAC systems (The Met, Thailand) and used their form to increase wind speeds to generate electricity (Bahrain Tower, Dubai). Wind and weather exist and affect all parts of the world in ways such as: science, health, war, infrastructure, catastrophes, tourism, shopping, media and materials. Working in partnership with RWDI, a leading wind engineering company, a series of tests, images and animations documenting discovered interactions of different building forms with wind will be collected to emphasize the possibilities of wind use to architects. A site within San Francisco (due to its increasing tower development, consistent wind conditions and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested within the wind tunnel and computational fluid dynamics systems, which will expose, utilize and manipulate wind flows to create new forms, technologies and experiences. Ultimately, this thesis aims to question the extent to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers, as well as the general public, the ability to broaden their scope in order to productively utilize this living phenomenon that everyone constantly feels but cannot see.
Keywords: wind engineering, climate, visualization, architectural aerodynamics
Procedia PDF Downloads 358
16 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface
Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis
Abstract:
In the context of the postgraduate course Productive Design, Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism that they chose from daily experience and then analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, the students tried to design it through a parametric model in the Grasshopper3D algorithmic processor for Rhino and to plan the design of its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role that the environment has in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as its ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement in a polymorphic space divider, the use of a coil-type joint was studied. In the Grasshopper program, the simulation of the desired motion for the polymorphic surface was tested by applying a coil on a sinusoidal curve and a spiral curve. It was important throughout the process that the points corresponding to the nodes of the real object remain constant in number, as do the distances between them, and that the elasticity of the construction be achieved through a modular movement of the coil and not through some elastic element (material) at the nodes. Using a mesh (repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element participates in its coherence and stability. Depending on the positions of the elements in terms of the level of support, different perspectives are created in terms of the visual perception of the adjacent space. For the implementation of the model at scale (1:3), (0.50 m x 2.00 m), the load-bearing structure that was studied has aluminum rods of Φ6 mm for the basic pillars and Φ2.50 mm for the secondary columns. Filling elements and nodes are of similar material and were made of MDF surfaces. During the design process, four trapezoidal patterns were picked, which function as filling elements, while in order to support their assembly, a different engraving facet was made. The nodes have holes that can be pierced by the rods, while their connection point with the patterns has a half-carved recess. The patterns have a corresponding recess. The patterns and nodes were designed to be cut and engraved using a laser cutter, and the patterns were attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object.
Keywords: polymorphic, locomotion, sinusoidal curves, parametric
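The core geometric idea, a coil with a fixed number of nodes wrapped around a sinusoidal base curve, can be reproduced outside Grasshopper as a short numeric sketch. The parameters below (amplitude, coil radius, number of turns) are illustrative and not taken from the project's actual definition.

```python
# A standalone geometric sketch, analogous to (but not exported from) the
# Grasshopper definition: a coil with a fixed number of nodes wrapped around a
# sinusoidal base curve.
import numpy as np

def coil_on_sine(n_nodes=200, length=2.0, amplitude=0.25, coil_radius=0.05, turns=12):
    t = np.linspace(0.0, length, n_nodes)          # constant number of nodes
    base = np.column_stack([t, amplitude * np.sin(2 * np.pi * t / length), np.zeros_like(t)])

    # Approximate a moving frame along the base curve (tangent / normal / binormal).
    tangent = np.gradient(base, t, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    up = np.array([0.0, 0.0, 1.0])
    normal = np.cross(np.tile(up, (n_nodes, 1)), tangent)
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    binormal = np.cross(tangent, normal)

    # Sweep the coil: offset each base point radially while rotating around the curve.
    phase = 2 * np.pi * turns * t / length
    return base + coil_radius * (np.cos(phase)[:, None] * normal +
                                 np.sin(phase)[:, None] * binormal)

points = coil_on_sine()
print(points.shape)  # (200, 3) -> node positions that could feed a fabrication model
```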
Procedia PDF Downloads 105
15 The Socio-Economic Impact of the English Leather Glove Industry from the 17th Century to Its Recent Decline
Authors: Frances Turner
Abstract:
Gloves are significant physical objects, being one of the oldest forms of dress. Glove culture is part of every facet of life; its extraordinary history encompasses practicality and symbolism, reflecting a wide range of social practices. The survival of not only the gloves but also associated articles enables the possibility of analysing real lives; however, so far this area has been largely neglected. Limited information is available to students, researchers, or those involved with the design and making of gloves. There are several museums and independent collectors in England that hold collections of gloves (some from as early as the 16th century), machinery, tools, designs and patterns, marketing materials and significant archives which demonstrate the rich heritage of English glove design and manufacturing, being of national significance and worthy of international interest. Through a research glove network which now exists thanks to research grant funding, there is potential for the holders of glove collections to make connections and explore links between these resources to promote a stronger understanding of the significance, breadth and heritage of the English glove industry. The network takes an interdisciplinary approach to bring together interested parties from academia, museums and manufacturing, with expert knowledge of the production, collections, conservation and display of English leather gloves. Academics from diverse arts and humanities disciplines benefit from the opportunities to share research and discuss ideas with network members from non-academic contexts including museums and heritage organisations, industry, and contemporary designers. The fragmented collections, when considered in their entirety, provide an overview of English glove making since the earliest times and of those who wore the gloves. This paper makes connections and explores links between these resources to promote a stronger understanding of the significance, breadth and heritage of the English glove industry. The following areas are explored: the current content and status of the individual museum collections, potential links, sharing of information, social and cultural histories and their relationship to the history of fashion design, manufacturing and materials, approaches to maintenance and conservation, access to the collections, and strategies for future understanding of their national significance. The facilitation of knowledge exchange and exploration of the collections through the network informs organisations' future strategies for the maintenance, access and conservation of their collections. By involving industry in the network, it is possible to ensure a contemporary perspective on glove-making in addition to the input from heritage partners. The slow fashion movement and awareness of artisan craft, and how these can be preserved and adopted for glove and accessory design, are addressed. Artisan leather glove making was a skilled and significant industry in England that has now declined to the point where there is little production remaining utilising the specialist skills that have hardly changed since the earliest times. This heritage will be identified and preserved for future generations, or the rich cultural history of gloves may be lost.
Keywords: artisan glove-making skills, English leather gloves, glove culture, the glove network
Procedia PDF Downloads 129
14 A Constructionist View of Projects, Social Media and Tacit Knowledge in a College Classroom: An Exploratory Study
Authors: John Zanetich
Abstract:
Designing an educational activity that encourages inquiry and collaboration is key to engaging students in meaningful learning. Educational Information and Communications Technology (EICT) plays an important role in facilitating cooperative and collaborative learning in the classroom. EICT also facilitates students' learning and development of the critical thinking skills needed to solve real-world problems. Projects and activities based on constructivism encourage students to embrace complexity as well as find relevance and joy in their learning. It also enhances the students' capacity for creative and responsible real-world problem solving. Classroom activities based on constructivism offer students an opportunity to develop the higher-order-thinking skills of defining problems and identifying solutions. Participating in a classroom project is an activity for both acquiring experiential knowledge and applying new knowledge to practical situations. It also provides an opportunity for students to integrate new knowledge into a skill set using reflection. Classroom projects can be developed around a variety of learning objects including social media, knowledge management and learning communities. The construction of meaning through project-based learning is an approach that encourages interaction and problem-solving activities. Projects require active participation, collaboration and interaction to reach the agreed upon outcomes. Projects also serve to externalize the invisible cognitive and social processes taking place in the activity itself and in the student experience. This paper describes a classroom project designed to elicit interactions by helping students to unfreeze existing knowledge, to create new learning experiences, and then refreeze the new knowledge. Since constructivists believe that students construct their own meaning through active engagement and participation as well as interactions with others, knowledge management can be used to guide the exchange of both tacit and explicit knowledge in interpersonal interactions between students and guide the construction of meaning. This paper uses an action research approach to the development of a classroom project and describes the use of technology, social media and the active use of tacit knowledge in the college classroom. In this project, a closed-group Facebook page becomes the virtual classroom where interaction is captured and measured using engagement analytics. In the virtual learning community, the principles of knowledge management are used to identify the process and components of the infrastructure of the learning process. The project identifies class member interests and measures student engagement in a learning community by analyzing regular postings on the Facebook page. These posts are used to foster and encourage interactions, reflect a student's interest and serve as reaction points from which viewers of the post convert the explicit information in the post to implicit knowledge. The data was collected over an academic year and was provided, in part, by the Google analytics reports on Facebook and self-reports of posts by members. The results support the use of active tacit knowledge activities, knowledge management and social media to enhance the student learning experience and help create the knowledge that will be used by students to construct meaning.
Keywords: constructivism, knowledge management, tacit knowledge, social media
Procedia PDF Downloads 215
13 Single Cell Analysis of Circulating Monocytes in Prostate Cancer Patients
Authors: Leander Van Neste, Kirk Wojno
Abstract:
The innate immune system reacts to foreign insult in several unique ways, one of which is phagocytosis of perceived threats such as cancer, bacteria, and viruses. The goal of this study was to look for evidence of phagocytosed RNA from tumor cells in circulating monocytes. While all monocytes possess phagocytic capabilities, the non-classical CD14+/FCGR3A+ monocytes and the intermediate CD14++/FCGR3A+ monocytes most actively remove threatening ‘external’ cellular materials. Purified CD14-positive monocyte samples from fourteen patients recently diagnosed with clinically localized prostate cancer (PCa) were investigated by single-cell RNA sequencing using the 10X Genomics protocol followed by paired-end sequencing on Illumina's NovaSeq. Similarly processed samples were used as controls, i.e., from one patient who underwent biopsy but was found not to harbor prostate cancer (benign), three young, healthy men, and three men previously diagnosed with prostate cancer who recently underwent (curative) radical prostatectomy (post-RP). Sequencing data were mapped using 10X Genomics' CellRanger software, and viable cells were subsequently identified using CellBender, removing technical artifacts such as doublets and non-cellular RNA. Next, data analysis was performed in R, using the Seurat package. Because the main goal was to identify differences between PCa patients and ‘control’ patients, rather than exploring differences between individual subjects, the individual Seurat objects of all 21 patients were merged into one Seurat object per Seurat's recommendation. Finally, the single-cell dataset was normalized as a whole prior to further analysis. Cell identity was assessed using the SingleR and celldex packages. The Monaco Immune Data was selected as the reference dataset, consisting of bulk RNA-seq data of sorted human immune cells. The Monaco classification was supplemented with normalized PCa data obtained from The Cancer Genome Atlas (TCGA), which consists of bulk RNA sequencing data from 499 prostate tumor tissues (including 1 metastatic) and 52 (adjacent) normal prostate tissues. SingleR was subsequently run on the combined immune cell and PCa datasets. As expected, the vast majority of cells were labeled as having a monocytic origin (~90%), with the most noticeable difference being the larger number of intermediate monocytes in the PCa patients (13.6% versus 7.1%; p<.001). In men harboring PCa, 0.60% of all purified monocytes were classified as harboring PCa signals when the TCGA data were included. This was 3-fold, 7.5-fold, and 4-fold higher compared to post-RP, benign, and young men, respectively (all p<.001). In addition, at 7.91%, the number of unclassified cells, i.e., cells with pruned labels due to high uncertainty of the assigned label, was also highest in men with PCa, compared to 3.51%, 2.67%, and 5.51% of cells in post-RP, benign, and young men, respectively (all p<.001). It can be postulated that actively phagocytosing cells are the hardest to classify due to their dual immune cell and foreign cell nature. Hence, the higher number of unclassified cells and intermediate monocytes in PCa patients might reflect higher phagocytic activity due to tumor burden. This also illustrates that small numbers (~1%) of circulating peripheral blood monocytes that have interacted with tumor cells might still possess detectable phagocytosed tumor RNA.
Keywords: circulating monocytes, phagocytic cells, prostate cancer, tumor immune response
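For readers who work in Python rather than R, a rough analogue of the merge-then-normalize step might look like the scanpy/anndata sketch below; the study itself used Seurat, SingleR, and celldex in R, and the file paths and QC thresholds here are placeholders.

```python
# A hedged Python analogue (scanpy/anndata) of the merge-then-normalize step;
# not the authors' Seurat/SingleR workflow. File paths are placeholders.
import scanpy as sc
import anndata as ad

sample_paths = {"PCa_01": "pca_01_filtered_feature_bc_matrix.h5",
                "benign_01": "benign_01_filtered_feature_bc_matrix.h5"}  # placeholder files

adatas = []
for name, path in sample_paths.items():
    a = sc.read_10x_h5(path)                 # CellRanger output (post-CellBender in the study)
    a.var_names_make_unique()
    sc.pp.filter_cells(a, min_genes=200)     # simple viability filter; threshold is illustrative
    adatas.append(a)

# Merge all patients into a single object, then normalize the dataset as a whole.
merged = ad.concat(adatas, label="patient", keys=list(sample_paths), join="outer")
sc.pp.normalize_total(merged, target_sum=1e4)
sc.pp.log1p(merged)
print(merged)  # reference-based annotation (e.g., against Monaco immune data) would follow
```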
Procedia PDF Downloads 162
12 Tip-Enhanced Raman Spectroscopy with Plasmonic Lens Focused Longitudinal Electric Field Excitation
Authors: Mingqian Zhang
Abstract:
Tip-enhanced Raman spectroscopy (TERS) is a scanning probe technique for the investigation of individual objects and structured surfaces that provides a wealth of enhanced spectral information with nanoscale spatial resolution and high detection sensitivity. It has become a powerful and promising chemical and physical information detection method at the nanometer scale. The TERS technique uses a sharp metallic tip regulated in the near-field of a sample surface, which is illuminated with a certain incident beam meeting the wave-vector-matching excitation conditions. The local electric field, and, consequently, the Raman scattering, from the sample in the vicinity of the tip apex are both greatly tip-enhanced owing to the excitation of localized surface plasmons and the lightning-rod effect. Typically, a TERS setup is composed of a scanning probe microscope, excitation and collection optical configurations, and a Raman spectrometer. In the illumination configuration, an objective lens or a parabolic mirror is always used as the most important component, in order to focus the incident beam on the tip apex for excitation. In this research, a novel TERS setup was built by introducing a plasmonic lens to the excitation optics as a focusing device. A plasmonic lens with symmetry-breaking semi-annular slits corrugated on a gold film was designed for the purpose of generating concentrated sub-wavelength light spots with a strong longitudinal electric field. Compared to conventional far-field optical components, the designed plasmonic lens not only focuses an incident beam to a sub-wavelength light spot, but also realizes a strong z-component that dominates the electric field illumination, which is ideal for the excitation of the tip enhancement. Therefore, using a PL in the illumination configuration of TERS contributes to improving the detection sensitivity by both reducing the far-field background and effectively exciting the localized electric field enhancement. The FDTD method was employed to investigate the optical near-field distribution resulting from the light-nanostructure interaction. The optical field distribution was characterized using a scattering-type scanning near-field optical microscope to demonstrate the focusing performance of the lens. The experimental result is in agreement with the theoretically calculated one. It verifies the focusing performance of the plasmonic lens. The optical field distribution shows a bright elliptic spot in the lens center and several arc-like side-lobes on both sides. After the focusing performance was experimentally verified, the designed plasmonic lens was used as a focusing component in the excitation configuration of the TERS setup to concentrate incident energy and generate a longitudinal optical field. A collimated, linearly polarized laser beam, with polarization along the x-axis, was incident from the bottom glass side on the plasmonic lens. The incident light focused by the plasmonic lens interacted with the silver-coated tip apex and enhanced the Raman signal of the sample locally. The scattered Raman signal was gathered by a parabolic mirror and detected with the Raman spectrometer. Then, the plasmonic-lens-based setup was employed to investigate carbon nanotubes, and a TERS experiment was performed. Experimental results indicate that the Raman signal is considerably enhanced, which proves that the novel TERS configuration is feasible and promising.
Keywords: longitudinal electric field, plasmonics, raman spectroscopy, tip-enhancement
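The FDTD simulation mentioned above was a full 3D calculation of the lens-tip geometry; as a much-reduced illustration of how the method steps the fields in time, the sketch below runs a toy 1D Yee-scheme FDTD in vacuum, with grid size, source, and time step chosen purely for demonstration.

```python
# A toy 1D FDTD (Yee-scheme) sketch; the authors' calculation was a 3D model of
# the plasmonic lens, so everything here (vacuum, grid, source) is illustrative.
import numpy as np

nx, nt = 400, 1000
ez = np.zeros(nx)      # electric field
hy = np.zeros(nx - 1)  # magnetic field, staggered half a cell
courant = 0.5          # normalized time step (c*dt/dx)

for t in range(nt):
    # Update H from the curl of E, then E from the curl of H (leapfrog in time).
    hy += courant * np.diff(ez)
    ez[1:-1] += courant * np.diff(hy)
    # Soft sinusoidal point source near the left boundary.
    ez[20] += np.sin(2 * np.pi * 0.02 * t)

print(f"peak |Ez| on the grid after {nt} steps: {np.max(np.abs(ez)):.3f}")
```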
Procedia PDF Downloads 373
11 A Computer-Aided System for Tooth Shade Matching
Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan
Abstract:
Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists' visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers and digital image analysing systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices. Also, results acquired by devices with different measurement principles may be inconsistent. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided device, has developed rapidly in recent years. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This approach is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was recommended to compare the color of shade tabs captured by a digital camera using color features. This work showed that visual and computer-aided shade matching systems should be used in combination. Recently used feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Because the color descriptor is used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometrical changes and variations in image quality. Therefore, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from the quantization algorithm are fed to classifiers such as k-Nearest Neighbor (k-NN), Naive Bayes or Support Vector Machines (SVM) to determine the label(s) of the visual object category or the matching shade. In this study, SVMs are used as classifiers for color determination and shade matching. Finally, the experimental results of this method will be compared with those of other recent studies. It is concluded from the study that the proposed method is a remarkable development in computer-aided tooth shade determination. Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction
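As an illustration of the described pipeline, the following Python sketch computes SIFT descriptors per color channel ("Color-SIFT"), quantizes them into a bag-of-visual-words histogram and classifies the result with an SVM; the file names, labels and vocabulary size are placeholder assumptions, not the authors' implementation.

```python
# Hedged sketch of the described pipeline: SIFT descriptors computed per color
# channel ("Color-SIFT"), quantized into a bag-of-visual-words histogram and
# classified with an SVM. File names, labels and vocabulary size are assumed.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def color_sift(image_bgr):
    """Concatenate SIFT descriptors extracted from each color channel."""
    descs = []
    for channel in cv2.split(image_bgr):
        _, d = sift.detectAndCompute(channel, None)
        if d is not None:
            descs.append(d)
    return np.vstack(descs) if descs else np.empty((0, 128))

def bow_histogram(descs, vocabulary):
    """Quantize descriptors against a visual vocabulary (KMeans centers)."""
    words = vocabulary.predict(descs.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Assumed training data: tooth images and their shade-tab labels (placeholders).
train_images = [cv2.imread(p) for p in ["tooth_A1.png", "tooth_B2.png"]]
train_labels = ["A1", "B2"]

all_descs = np.vstack([color_sift(img) for img in train_images])
vocabulary = KMeans(n_clusters=50, n_init=10).fit(all_descs)

X = np.array([bow_histogram(color_sift(img), vocabulary) for img in train_images])
clf = SVC(kernel="rbf").fit(X, train_labels)
print(clf.predict(X))   # predicted shade labels for the training images
```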
Procedia PDF Downloads 444
10 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even if it seems a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European Corine Land Cover database, which allows quantifying and characterizing the wetland types present at each location. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (about 1 hectare). To address this limitation, it is important to note that these wetlands generally represent spatially complex features; the use of very high spatial resolution images (>3 m) is therefore necessary to map both small and large areas. Moreover, the recent evolution of artificial intelligence (AI) and deep learning methods for satellite image processing has shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes), with a kappa index higher than 85%. Keywords: land development, GIS, sand dunes, segmentation, remote sensing
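A minimal Python sketch of the classifier comparison is given below, assuming a synthetic object-level feature matrix in place of the segment statistics derived from the SPOT/IRC orthoimages; it shows how k-NN and SVM results could be scored with the kappa index.

```python
# Hedged sketch of the described comparison: classify object-level spectral and
# textural features with k-NN and SVM and evaluate with the kappa index.
# The feature matrix here is synthetic, not data from the study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_objects, n_features = 500, 8                  # image objects x (spectral + texture) features
X = rng.normal(size=(n_objects, n_features))
y = rng.integers(0, 4, size=n_objects)          # 4 assumed wetland classes
X[y == 1] += 1.5                                # make the classes partly separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, "kappa =", round(cohen_kappa_score(y_te, pred), 3))
```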
Procedia PDF Downloads 72
9 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health
Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik
Abstract:
Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can promote the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological and other indicators of the body state. These disorders are unstable, reversible and indicative of body reactions. There is an opportunity to objectively judge the internal structure of the adaptive body reactions at the level of individual organs and systems. In order to obtain a stable response of the body to the chronic effects of unfavorable environmental factors of low intensity (compared to production environment factors), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot provide a reliable basis for the main conclusions of any work. A technique is needed that reduces methodological errors and combines mathematical logic, statistical methods and a medical point of view, which ultimately affects the obtained results and avoids false correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, that is, the required interval after which the effect of a harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined by cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). Correlation coefficients (r) and their reliability (t), Fisher's criterion (F) and the influence share (R²) of the main factor (argument) per indicator (function) were calculated as a percentage. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally opposite results were obtained when the mathematical data processing considered the «lag time». Namely, a pronounced correlation was revealed after the two databases (ecology and morbidity) were shifted relative to each other. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest percentage of the influencing factor. Similar results were observed for the concentrations of soot, dioxides, etc. Comprehensive statistical processing using multiple correlation-regression variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» is qualitatively different from the traditional one (without considering the «lag time»). The results differ significantly and are more amenable to a logical explanation of the obtained dependencies. The method allows presenting the quantitative and qualitative dependences within the «environment-public health» system in a different way. Keywords: ecology, morbidity, population, lag time
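To make the lag-selection step concrete, the sketch below shifts a synthetic annual morbidity series against an exposure series and picks the lag with the strongest Pearson correlation; the data are stand-ins for the Kazhydromet and statistical-yearbook series used in the study.

```python
# Hedged sketch of the lag-selection step: shift the morbidity series against an
# exposure series and pick the lag with the strongest Pearson correlation.
# The annual series below are synthetic stand-ins for the real data.
import numpy as np

rng = np.random.default_rng(1)
years = 20
exposure = rng.normal(size=years)                                # e.g. annual dust concentration
morbidity = np.roll(exposure, 4) + 0.3 * rng.normal(size=years)  # effect appears 4 years later

def best_lag(x, y, max_lag=6):
    """Return the lag (in years) maximizing |Pearson r| between x and y shifted by lag."""
    results = []
    for lag in range(max_lag + 1):
        xs, ys = x[: years - lag], y[lag:]
        r = np.corrcoef(xs, ys)[0, 1]
        results.append((lag, r))
    return max(results, key=lambda t: abs(t[1]))

lag, r = best_lag(exposure, morbidity)
print(f"lag = {lag} years, r = {r:.2f}, R^2 = {r**2:.2f}")
```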
Procedia PDF Downloads 81
8 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data
Authors: M. Mueller, M. Kuehn, M. Voelker
Abstract:
In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning, a task made difficult by a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is to provide an SME-appropriate approach for efficient, temporarily feasible data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE) transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for the interpretation of relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition as well as methods for the visualization of relative distance data. Because the database is already categorized by process type, classification methods (e.g. Support Vector Machines) from the field of supervised learning are used. The necessary data quality requires the selection of suitable methods and filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors depending on possible signal interference sources (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, correction models and methods used for visualizing the position profiles. The accuracy of classification algorithms can be improved by up to 30% through selected parameter variation, as previous studies have shown. Similar potential can be observed with parameter variation of the methods and filters for signal smoothing. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including methods for signal smoothing, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: the automated assignment of defined process classes to distance data using selected classification algorithms, and visualization and reporting via a graphical user interface (GUI). Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing
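Since the data preparation is described as Python-based, the following sketch illustrates two of the named steps, RSSI smoothing and a distance estimate; the exponential filter and the log-distance path-loss parameters are assumptions, not the project's calibrated models.

```python
# Hedged sketch of two preparation steps: smoothing raw RSSI readings and
# converting them to distance with a log-distance path-loss model. The
# reference power at 1 m and the path-loss exponent are assumed values that
# would normally come from calibration of the beacons and environment.
import numpy as np

def smooth_rssi(rssi_values, alpha=0.3):
    """Exponential moving average as a simple RSSI smoothing filter."""
    smoothed = [rssi_values[0]]
    for r in rssi_values[1:]:
        smoothed.append(alpha * r + (1 - alpha) * smoothed[-1])
    return np.array(smoothed)

def rssi_to_distance(rssi, tx_power_1m=-59.0, path_loss_exponent=2.2):
    """Log-distance path-loss model: distance in meters from RSSI in dBm."""
    return 10 ** ((tx_power_1m - rssi) / (10 * path_loss_exponent))

raw = np.array([-62, -70, -65, -80, -68, -66, -75, -71])   # example RSSI readings (dBm)
smoothed = smooth_rssi(raw)
print(np.round(rssi_to_distance(smoothed), 2))             # estimated distances in meters
```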
Procedia PDF Downloads 131
7 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation
Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya
Abstract:
The estimation of accumulated radiation doses in people occupationally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body counting, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some craters exceeds 40 μSv/h. After determination of radionuclides in urine using radiochemical and whole-body counting (WBC) methods, it was shown that the total effective dose from internal exposure of the personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in the staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. The cytogenetic analysis of group radiosensitivity among the «professionals» by different categories (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated to be 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations obtained after irradiation of blood samples with gamma radiation at a dose rate of 0.1 Gy/min were used. Assuming an individual variation of chromosomal aberration frequency of 1-10%, the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the different reactions of individuals to irradiation, i.e. radiosensitivity, which dictates the need for a quantitative definition of this individual reaction and its consideration in the calculation of the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest dose received in a year, showed the highest frequency of chromosomal aberrations (5.72%). By contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). The cohort in our research was distributed according to the criterion of radiosensitivity as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). The dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reactions to radiation exposure) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant correlation (0.66; n=48, β ≥ 0.999) between the doses determined from the results of the cytogenetic analysis and the external radiation doses obtained with thermoluminescent dosimeters. Mathematical models based on the type of deviation of the radiation dose according to the professionals' radiosensitivity level were proposed. Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity
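The calibration-curve step can be sketched as follows in Python: a linear-quadratic dose-response is fitted to dicentric yields and inverted to estimate an unknown dose; the calibration points below are synthetic, not the study's own data.

```python
# Hedged sketch of the calibration-curve step used in biodosimetry: fit a
# linear-quadratic dose-response Y = c + a*D + b*D^2 to dicentric yields from
# irradiated blood samples, then invert it to estimate an unknown dose.
import numpy as np
from scipy.optimize import curve_fit

def linear_quadratic(dose, c, a, b):
    return c + a * dose + b * dose ** 2

cal_doses = np.array([0.0, 0.1, 0.25, 0.5, 1.0, 2.0])          # Gy
cal_yield = np.array([0.001, 0.004, 0.012, 0.03, 0.08, 0.25])  # dicentrics per cell (assumed)
(c, a, b), _ = curve_fit(linear_quadratic, cal_doses, cal_yield)

def estimate_dose(observed_yield):
    """Invert Y = c + a*D + b*D^2 for the positive root."""
    disc = a ** 2 + 4 * b * (observed_yield - c)
    return (-a + np.sqrt(disc)) / (2 * b)

print(f"estimated dose for a yield of 0.02 dicentrics/cell: {estimate_dose(0.02):.3f} Gy")
```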
Procedia PDF Downloads 184
6 RE:SOUNDING a 2000-Year-Old Vietnamese Dong Son Bronze Drum; Artist-Led Collaborations outside the Museum to Challenge the Impasse of Repatriating and Rematriating Cultural Instruments
Authors: H. A. J. Nguyen, V. A. Pham
Abstract:
RE:SOUNDING is an ongoing research project and artwork seeking to return the sound and knowledge of Dong Son bronze drums to contemporary musicians. Colonial collections of ethnographic instruments are problematic in how they commit acts of conceptual, cultural, and acoustic silencing. The collection (or, more honestly, the plagiarism and pillaging) of these instruments has systemically separated them from living and breathing cultures. This includes diasporic communities, who have come to resettle in close proximity - but still have little access - to the museums and galleries that display their cultural objects. Despite recent attempts to 'open up' and 'recognise' the tensions and violence of these ethnographic collections, many museums continue to structurally organize and reproduce knowledge with the same procedural distance and limitations of imperial condescension. Impatient with the slowness of these museums, our diaspora-led collaborations participated in the opaque economy of the auction market to gain access and begin the process of digitally recording and archiving the actual sounds of the ancient Dong Son drum. This self-directed, self-initiated artwork not only acoustically reinvigorated an ancient instrument but redistributed these sonic materials back to contemporary musicians, composers, and their diasporic communities throughout Vietnam, South East Asia, and Australia. Our methodologies not only highlight the persistent inflexibility of museum infrastructures but demand that museums refrain from their paternalistic practice of risk-averse ownership and seriously engage with new technologies and political formations that require all public institutions to be held accountable for the ethical and intellectual viability of their colonial collections. The integrated and practical resolve of diasporic artists and their communities is more than capable of working with new technologies to reclaim and reinvigorate what is culturally and spiritually theirs. The motivation to rematriate - as opposed to merely repatriate - the acoustic legacies of these instruments to contemporary musicians and artists is a new model for decolonial and restorative practices. Exposing the inadequacies of western scholarship that continues to treat these instruments as discrete, disembodied, and detached artifacts, these collaborative strategies have thus far produced a wealth of new knowledge - new to the west perhaps, but not that new to our own communities. This includes the little-acknowledged fact that the Dong Son drums were political instruments of war and technology, rather than the simplistic description given in museums and western academia as agrarian instruments of fertility and harvest. Through the collective and continued sharing of the knowledge and sound materials produced from this research, these drums are gaining a contemporary relevance beyond the cultural silencing of the museum display cabinet. Acknowledgement: We acknowledge the Wurundjeri and Boon Wurrung of the Kulin Nation and the Gadigal of the Eora Nation, where we began this project. We pay our respects to the Peoples, Lands, Traditional Custodians, Practices, and Creator Ancestors of these Great Nations, as well as those First Nations peoples throughout Australia, Vietnam, and Indonesia, where this research continues, and whose stolen lands and waterways were never ceded. Keywords: acoustic archaeology, decolonisation, museum collections, rematriation, repatriation, Dong Son, experimental music, digital recording
Procedia PDF Downloads 151
5 Posts by Influencers Promoting Water Saving: The Impact of Distance and the Perception of Effectiveness on Behavior
Authors: Sancho-Esper Franco, Rodríguez Sánchez Carla, Sánchez Carolina, Orús-Sanclemente Carlos
Abstract:
Water scarcity is a reality that affects many regions of the world and is aggravated by climate change and population growth. Saving water has become an urgent need to ensure the sustainability of the planet and the survival of many communities, and youth and social networks play a key role in promoting responsible practices and adopting habits that contribute to environmental preservation. This study analyzes the persuasion capacity of messages designed to promote pro-environmental behaviors among youth. Specifically, it studies how the perceived effectiveness of the response (personal response efficacy) and the perceived distance from the source of the message influence the water-saving behavior of the audience. To do so, two communication frameworks are combined. The first is Construal Level Theory, which is based on the concept of "psychological distance": people, objects or events can be perceived as psychologically near or far, and this subjective distance, which can be social, temporal, or spatial, determines attitudes, emotions, and actions. This research focuses on the spatial and social distance generated by cultural differences between influencers and their audience, in order to understand how cultural distance can influence the persuasiveness of a message. Research on the effects of psychological distance between influencers and followers in the pro-environmental field is very limited, yet it is relevant because people may learn specific behaviors suggested by opinion leaders such as social-network influencers. Second, different approaches to behavioral change suggest that the perceived efficacy of a behavior can explain individual pro-environmental actions. People will be more likely to adopt a new behavior if they perceive that they are capable of performing it (efficacy belief) and that their behavior will effectively contribute to solving the problem (personal response efficacy). It is also important to study the different actors (social and individual) that are perceived as responsible for addressing environmental problems. Specifically, we analyze to what extent the belief that an individual's water-saving actions are effective in solving the problem can influence water-saving behavior, since this perceived individual effectiveness increases people's sense of obligation and responsibility toward the problem. However, in this regard, empirical evidence presents mixed results. Our study addresses the call for experimental studies manipulating different subtypes of response effectiveness to generate robust causal evidence. Based on all the above, this research analyzes whether cultural distance (local vs. international influencer) and the perceived effectiveness of the behavior (individual vs. collective response efficacy) affect the actual water-saving behavior and conservation intention of social network users. A 2 (local vs. international influencer) x 2 (individual vs. collective response effectiveness) experiment is designed and estimated. The results show that a message from a local influencer appealing to individual responsibility exerts greater influence on intention and actual water-saving behavior, given the cultural closeness between influencer and follower; moreover, the appeal to individual responsibility increases the feeling of obligation to participate in pro-environmental actions. These results offer important implications for social marketing campaigns that seek to promote water conservation. Keywords: social marketing, influencer, message framing, experiment, personal response efficacy, water saving
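A hedged sketch of how such a 2 x 2 between-subjects design could be analyzed with a two-way ANOVA is given below; the condition names and simulated responses are assumptions, not the study's data or necessarily its analysis strategy.

```python
# Hedged sketch of a 2 (influencer: local vs. international) x 2 (response
# efficacy framing: individual vs. collective) between-subjects analysis with
# a two-way ANOVA. The responses are simulated, not study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(7)
n_per_cell = 50
rows = []
for influencer in ["local", "international"]:
    for framing in ["individual", "collective"]:
        base = 4.0
        if influencer == "local" and framing == "individual":
            base += 0.8                                # simulated advantage of this condition
        intention = rng.normal(base, 1.0, n_per_cell)  # 1-7 style intention scores
        rows += [{"influencer": influencer, "framing": framing, "intention": i}
                 for i in intention]

df = pd.DataFrame(rows)
model = ols("intention ~ C(influencer) * C(framing)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                 # main effects and interaction
```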
Procedia PDF Downloads 62
4 Trajectory Optimization for Autonomous Deep Space Missions
Authors: Anne Schattel, Mitja Echim, Christof Büskens
Abstract:
Trajectory planning for deep space missions has recently become a topic of great interest. Flying to space objects like asteroids offers two main motivations: one is to find rare earth elements, the other to gain scientific knowledge about the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical fields of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer. The resulting algorithms may also be applied to other, Earth-bound applications such as deep-sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All', i.e. cognition-based autonomous navigation exemplified by resource mining in space) investigates the possibilities of cognitive autonomous navigation on the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e. trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. Indirect methods have been studied for several decades, and their usage requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by specialized NLP methods, e.g. sequential quadratic programming (SQP) or interior point (IP) methods. The movement of the spacecraft due to the gravitational influences of the sun and the planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims such as short flight times and low energy consumption are considered by using a multi-criteria objective function. The resulting non-linear, high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level with IP methods to solve the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. Simultaneously, they show the enormous increase in possible flight maneuvers when different and opposing mission objectives can be considered. Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning
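The idea of direct transcription can be illustrated on a toy problem: the sketch below discretizes a one-dimensional double-integrator transfer and solves the resulting NLP with SciPy's SLSQP (an SQP method standing in for WORHP); the real mission dynamics, with solar and planetary gravity and thrust models, are of course far more involved.

```python
# Hedged sketch of direct transcription on a toy problem: a 1-D double
# integrator steered from rest at x=0 to rest at x=1 in fixed time,
# minimizing control effort. SciPy's SLSQP stands in for the SQP solver WORHP.
import numpy as np
from scipy.optimize import minimize

N, T = 30, 1.0                      # discretization nodes and fixed final time
dt = T / (N - 1)

def unpack(z):
    x = z[:N]                       # position at each node
    v = z[N:2 * N]                  # velocity at each node
    u = z[2 * N:]                   # control (acceleration) at each node
    return x, v, u

def objective(z):
    _, _, u = unpack(z)
    return dt * np.sum(u ** 2)      # control-effort criterion

def defects(z):
    x, v, u = unpack(z)
    # explicit Euler defect constraints enforcing the dynamics x' = v, v' = u
    dx = x[1:] - x[:-1] - dt * v[:-1]
    dv = v[1:] - v[:-1] - dt * u[:-1]
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]   # boundary conditions
    return np.concatenate([dx, dv, bc])

z0 = np.zeros(3 * N)
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 500})
x, v, u = unpack(res.x)
print("converged:", res.success, "| max |u| =", round(np.abs(u).max(), 3))
```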
Procedia PDF Downloads 412
3 Braille Lab: A New Design Approach for Social Entrepreneurship and Innovation in Assistive Tools for the Visually Impaired
Authors: Claudio Loconsole, Daniele Leonardis, Antonio Brunetti, Gianpaolo Francesco Trotta, Nicholas Caporusso, Vitoantonio Bevilacqua
Abstract:
Unfortunately, many people still do not have access to communication, with specific regard to reading and writing. Among them, people who are blind or visually impaired have several difficulties in getting access to the world compared to the sighted. Indeed, despite technology advancement and cost reduction, assistive devices such as Braille-based input/output systems, which enable reading and writing texts (e.g., personal notes, documents), are still expensive. As a consequence, the affordability of assistive technology is fundamental in supporting the visually impaired in communication, learning, and social inclusion. This, in turn, has serious consequences in terms of equal access to opportunities, freedom of expression, and actual and independent participation in a society designed for the sighted. Moreover, the visually impaired experience difficulties in recognizing objects and interacting with devices in many activities of daily living. It is no coincidence that Braille indications are commonly found only on medicine boxes and elevator keypads. Several software applications for the automatic translation of written text into speech (e.g., Text-To-Speech, TTS) enable reading pieces of documents. However, apart from simple tasks, in many circumstances TTS software is not suitable for understanding very complicated pieces of text that require dwelling on specific portions (e.g., mathematical formulas or Greek text). In addition, the experience of reading/writing text is completely different, both in terms of engagement and from an educational perspective. Statistics on the employment rate of blind people show that learning to read and write provides the visually impaired with up to 80% more opportunities of finding a job. Especially at higher educational levels, where the ability to digest very complex text is key, the accessibility and availability of Braille play a fundamental role in reducing the drop-out rate of the visually impaired, thus affecting the effectiveness of the constitutional right of access to education. In this context, the Braille Lab project aims at addressing these social needs by including affordability in the design and development of assistive tools for visually impaired people. In detail, our awarded project focuses on a technological innovation of the operating principle of existing assistive tools for the visually impaired, leaving the Human-Machine Interface unchanged. This can result in a significant reduction of production costs and, consequently, of tool selling prices, thus representing an important opportunity for social entrepreneurship. The first two assistive tools designed within the Braille Lab project following the proposed approach aim to provide the possibility to personally print documents and handouts and to read texts written in Braille using a refreshable Braille display, respectively. The former, named ‘Braille Cartridge’, represents an alternative solution for printing in Braille and consists in the realization of an electronically controlled dispensing cartridge which can be integrated within traditional ink-jet printers, in order to leverage the efficiency and cost of a device mechanical structure which is already in widespread use. The latter, named ‘Braille Cursor’, is an innovative Braille display featuring a substantial technological innovation by means of a unique cursor virtualizing the Braille cells, thus limiting the number of active pins needed for Braille characters. Keywords: Human rights, social challenges and technology innovations, visually impaired, affordability, assistive tools
Procedia PDF Downloads 273
2 Development of Anti-Fouling Surface Features Bioinspired by the Patterned Micro-Textures of the Scophthalmus rhombus (Brill)
Authors: Ivan Maguire, Alan Barrett, Alex Forte, Sandra Kwiatkowska, Rohit Mishra, Jens Ducrèe, Fiona Regan
Abstract:
Biofouling is defined as the gradual accumulation of organisms on wetted or submerged surfaces. Biomimetics refers to the use and imitation of principles copied from nature and has found interest across many commercial disciplines. Among many biological objects and their functions, aquatic animals deserve special attention due to their antimicrobial capabilities, resulting from chemical composition, surface topography or other behavioural defences, which can be used as an inspiration for antifouling technology. Marine biofouling has detrimental effects on seagoing vessels, both commercial and leisure, as well as on oceanographic sensors, offshore drilling rigs, and aquaculture installations. Sensor optics, membranes, housings and platforms can become fouled, leading to problems with sensor performance and data integrity. While many anti-fouling solutions are currently being investigated as a cost-cutting measure, biofouling settlement may also be prevented by creating a surface that does not satisfy the settlement conditions. Brill (Scophthalmus rhombus) is a small flatfish occurring in the marine waters of the Mediterranean as well as of Norway and Iceland. It inhabits sandy and muddy coastal waters from 5 to 80 meters. Its skin colour changes depending on the environment, but it is generally brownish with light and dark freckles and a creamy underside. Brill is oval in shape and its flesh is white. The aim of this study is to translate the unique micro-topography of the brill scale into a marine-inspired biomimetic surface coating and to test it against a typical fouling organism. Following an extensive SEM study of the scale topography of the brill (Scophthalmus rhombus) and of the settlement behaviour of the diatom species Psammodictyon sp., two state-of-the-art antifouling surface solutions were designed and investigated: a brill-scale bioinspired surface pattern platform (BFD), and a generic, uniformly arrayed circular micropillar platform (MPD) with offsets based on the diatom's settlement behaviour. The BFD approach consists of different ~5 μm by ~90 μm brill-replica patterns, grown to a height of 5 μm, in a linear array pattern. The MPD approach utilises hexagonally packed cylindrical pillars 10.6 μm in diameter, grown to a height of 5 μm, with a vertical offset of 15 μm and a horizontal offset of 26.6 μm. Photolithography was employed for microstructure growth, with a polydimethylsiloxane (PDMS) chip used as a testbed for diatom adhesion on both platforms. Settlement and adhesion tests were performed using this PDMS microfluidic chip by subjecting it to centrifugal force via an in-house developed ‘spin-stand’, which features a motor in combination with a high-resolution camera for real-time observation of diatom release from the PDMS material. Diatom adhesion strength can therefore be determined from the centrifugal force generated at varying rotational speeds. It is hoped that both the replica and bio-inspired solutions will give anti-fouling results comparable to existing synthetic surfaces, whilst also helping to determine whether anti-fouling solutions should predominantly pursue fully bioreplica-based designs or bioinspired, synthetically based ones. Keywords: anti-fouling applications, bio-inspired microstructures, centrifugal microfluidics, surface modification
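As a worked illustration of the spin-stand principle, the sketch below estimates the buoyancy-corrected centrifugal detachment force F = m_eff ω² r on a settled diatom at several rotational speeds; the cell size, densities and radial position are assumed values, not measured ones.

```python
# Hedged worked example of the detachment-force estimate behind the spin-stand
# test: the centrifugal force on a settled diatom is F = m_eff * omega^2 * r.
# Cell size, densities and radial position are assumed values for illustration.
import numpy as np

cell_diameter = 20e-6                     # m, assumed diatom size
cell_radius = cell_diameter / 2
rho_cell, rho_water = 1200.0, 1025.0      # kg/m^3, assumed densities
r_position = 0.02                         # m, radial distance of the cell from the spin axis

volume = (4.0 / 3.0) * np.pi * cell_radius ** 3
m_eff = volume * (rho_cell - rho_water)   # buoyancy-corrected effective mass

for rpm in (1000, 3000, 6000):
    omega = 2 * np.pi * rpm / 60.0        # rad/s
    force = m_eff * omega ** 2 * r_position
    print(f"{rpm:>5d} rpm -> detachment force of roughly {force:.2e} N")
```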
Procedia PDF Downloads 317
1 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers
Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek
Abstract:
Flow through granular materials is important to a vast array of industries, for instance in the construction industry, where granular layers are used for bulkheads and isolators; in chemical engineering and catalytic reactors, where the large surfaces of packed granular beds intensify chemical reactions; or in energy production systems, where granulates are promising materials for heat storage and as heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, the phenomena occurring between granular solid elements or between solids and fluid are still not fully understood. In the present work, we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of the particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, or their effective characteristic dimension (total volume or surface area). We analyze to what extent alteration of these parameters influences the flow characteristics (turbulence intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of the flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fiber probes) inside the layers is practically impossible, whereas the use of probes (e.g. thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates are crucial. The application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for the simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) approach, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and flows in wavy channels, wavy pipes, and over obstacles of various shapes. In these cases, the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of the flows in dense granular layers with elements distributed in a deterministic, regular manner, and on validation of the results obtained using the LES-IB method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, the number of elements and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or the lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process. Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations
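The volume-penalization idea can be illustrated with the following one-dimensional toy problem, in which solid cells are marked by a mask χ and a source term -(χ/η)(u - u_s) forces the velocity toward zero inside the grains; this generic sketch is not the SAILOR solver.

```python
# Hedged 1-D illustration of volume penalization: solid regions are marked with
# a mask chi and a source term -(chi/eta)*u is added to the momentum equation,
# forcing the velocity toward zero (the solid velocity) inside the obstacles.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)   # periodic grid
nu, eta = 1e-3, 1e-4                          # viscosity and penalization parameter
dt = 0.2 * dx ** 2 / nu                       # explicit diffusion stability limit

chi = np.zeros(nx)                            # mask: 1 inside solid grains, 0 in the fluid
for center in (0.3, 0.5, 0.7):                # three "grains" of width 0.05
    chi[np.abs(x - center) < 0.025] = 1.0

u = np.zeros(nx)
forcing = 1.0                                 # constant body force driving the flow
for _ in range(20000):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2   # periodic Laplacian
    rhs = u + dt * (nu * lap + forcing)
    u = rhs / (1.0 + dt * chi / eta)          # implicit treatment of the stiff penalization term

print("mean velocity in the fluid:", round(float(u[chi == 0].mean()), 3))
print("mean velocity inside grains:", round(float(u[chi == 1].mean()), 6))
```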
Procedia PDF Downloads 137