Search results for: puppet show
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10303

403 ECE Teachers’ Evolving Pedagogical Documentation in MAFApp: ICT Integration for Collective Online Thinking in Early Childhood Education

Authors: Cynthia Adlerstein-Grimberg, Andrea Bralic-Echeverría

Abstract:

An extensive and controversial research debate discusses pedagogical documentation (PD) within early childhood education (ECE) as integral to ECE teachers' professional development. The literature converges in acknowledging that ICT integration in PD can be fundamental for children's and teachers' collaborative learning by making their processes visible and open to reflection. Controversial issues about PD emerge around ICT integration and the use of multimedia applications and platforms, which displace the physical experience involved in this pedagogical practice. Some authors argue that online platforms turn PD into a passive device for demonstrating accountability and performance. Furthermore, ICT integration would lead educators merely to inform children and families of pedagogical processes, positioning them as consumers rather than involving them in collective thinking and pedagogical decision-making. This article analyses how pedagogical documentation mediated by a multimedia application (MAFApp) strengthens an ECE pedagogical online community that thinks collectively about learning environments. In doing so, the paper shows how ICT integration supports ECE teachers' collective online thinking, enabling them to move from the controversial version of online PD, in which they act only as informers of children's learning and assume a voyeuristic perspective, towards a collective online thinking that builds professional development and supports pedagogical decision-making about learning environments. This article asks how ECE teachers' pedagogical documentation evolves with ICT integration through the MAFApp multimedia application in a national ECE online community. From a posthumanist stance, the paper draws on an 18-month collaborative ethnographic immersion in Chile's unique public ECE online PD community. It develops a unique case study of an online ECE pedagogical community mediated by a multimedia application called MAFApp.
This ECE online community includes 32 Chilean public kindergartens, 45 ECE teachers, and 72 assistants, who produced 534 pieces of pedagogical documentation. Fieldwork included 35 in-depth interviews, 13 discussion groups, and the constant comparison method for the PD coding. Findings show that ICT integration in PD builds collective online thinking that evolves through four moments of growing complexity: 1) teachernalism of built environments; 2) onlookerism of children's anecdotes in learning environments; 3) storytelling of children's place-making; and 4) empowering pedagogies for co-creating learning environments. ICT integration through the MAFApp multimedia application enabled ECE teachers to build collective online thinking, making pedagogies of place visible and engaging children in co-constructing learning environments. This online PD is a continuous professional learning space for ECE teachers, empowering pedagogies of place. In conclusion, ICT integration into PD progressively empowers pedagogies of place in Chilean public ECE. In sharp contrast with some recent PD research findings, ICT integration into PD enabled strong collective online thinking, making PD operate as a place of professional development, pedagogical reflective encounters, and experimentation while teachers inhabit their own learning environments with children.

Keywords: early childhood education, ICT integration, multimedia application, online collective thinking, pedagogical documentation, professional development

Procedia PDF Downloads 36
402 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. 
For testing and validation, a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but it does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
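The 1-hot encoding mentioned above can be sketched concretely. The following is an illustrative QUBO construction, not the authors' embedding: an integer variable taking one of four values is represented by four binary variables, with a quadratic penalty whose ground states are exactly the one-hot bit strings. Brute-force enumeration confirms this.

```python
import itertools

def one_hot_qubo(n_values, penalty=2.0):
    """Build a QUBO (dict of (i, j) -> coefficient) whose ground states
    are exactly the one-hot bit strings of length n_values.
    Penalty: P * (sum_i b_i - 1)^2.  Since b_i^2 = b_i, the expansion
    gives diagonal terms -P*b_i and cross terms +2P*b_i*b_j; the
    constant +P is dropped, shifting all energies equally."""
    Q = {}
    for i in range(n_values):
        Q[(i, i)] = -penalty
        for j in range(i + 1, n_values):
            Q[(i, j)] = 2.0 * penalty
    return Q

def energy(Q, bits):
    return sum(c * bits[i] * bits[j] for (i, j), c in Q.items())

Q = one_hot_qubo(4)
energies = {bits: energy(Q, bits) for bits in itertools.product([0, 1], repeat=4)}
ground = min(energies.values())
ground_states = [b for b, e in energies.items() if e == ground]
print(ground_states)  # exactly the four one-hot strings
```

On real hardware the penalty weight must be chosen large enough to dominate the problem's objective terms, which is one of the embedding trade-offs the study investigates.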

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 495
401 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides high accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. An alternative is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, Sea Surface Height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, model densification, and the creation of grid models. The input data are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis.
Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor, indirectly indicate the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. Depending on the location of the point, the greater the depth, the smaller the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
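The gap-filling interpolation step can be illustrated with a minimal sketch. The abstract does not name the evaluated algorithms; inverse-distance weighting is shown here purely as a common baseline for estimating depth between survey tracks, and all coordinates and depths below are hypothetical.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered depth
    soundings onto query points (a simple gap-filling baseline)."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    out = np.empty(len(xy_query))
    for k, q in enumerate(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                 # query coincides with a sounding
            out[k] = z_known[d.argmin()]
        else:
            w = 1.0 / d**power
            out[k] = np.sum(w * z_known) / np.sum(w)
    return out

# Depths (m) along two hypothetical survey tracks; estimate midway between them
tracks = [(0, 0), (0, 1), (0, 2), (2, 0), (2, 1), (2, 2)]
depths = [-100, -120, -140, -110, -130, -150]
print(idw_interpolate(tracks, depths, [(1, 1)]))
```

In practice the study's grid models would be built from far denser multibeam soundings, and the choice of interpolator (IDW, kriging, splines) is exactly what the evaluation of interpolation algorithms assesses.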

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 56
400 Real-Time Neuroimaging for Rehabilitation of Stroke Patients

Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge

Abstract:

Rehabilitation of stroke patients is dominated by classical physiotherapy. Nowadays, an active field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to cure the patient. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the neighboring parts of the injured cortex. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback missing due to the inability to move the affected limb may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time without large delay. We describe such an approach that offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3, CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set must not be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes.
This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a color-coded 3D activity map. This approach mitigates the influence of eye-lid artifacts on the localization performance. The first results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases the results match findings from visual EEG analysis. Frequent eye-lid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. The first results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
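The channel-selection and thresholding logic described above can be sketched as follows. This is a simplified illustration, not the authors' implementation; the sampling rate, amplitude threshold, and posterior-alpha rejection ratio are assumed values.

```python
import numpy as np

FS = 250.0                       # sampling rate (Hz), assumed
MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def alpha_bandpower(x, fs=FS, band=(8.0, 13.0)):
    """Mean spectral amplitude of signal x in the alpha band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amp = np.abs(np.fft.rfft(x)) / len(x)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return amp[mask].mean()

def detect_mu(epoch, channels, threshold=0.05, ratio=2.0):
    """Return (detected, channel): pick the motor-set channel with the
    strongest alpha amplitude; require it to exceed `threshold` and to be
    `ratio` times larger than every non-motor channel, so that posterior
    alpha spread over the whole scalp is rejected."""
    powers = {ch: alpha_bandpower(epoch[ch]) for ch in channels}
    motor = {ch: p for ch, p in powers.items() if ch in MOTOR_SET}
    other = [p for ch, p in powers.items() if ch not in MOTOR_SET]
    best = max(motor, key=motor.get)
    ok = bool(motor[best] > threshold
              and motor[best] > ratio * max(other, default=0.0))
    return ok, best

# Synthetic 1-second epoch: a 10 Hz rhythm on C3, low-level noise elsewhere
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / FS)
epoch = {ch: 0.02 * rng.standard_normal(len(t)) for ch in MOTOR_SET + ["O1"]}
epoch["C3"] = epoch["C3"] + np.sin(2 * np.pi * 10 * t)
print(detect_mu(epoch, list(epoch)))
```

The real system additionally uses the selected channel's signal as a spatial template and feeds it into the inverse method; only the detection front-end is sketched here.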

Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation

Procedia PDF Downloads 363
399 Selfie: Redefining Culture of Narcissism

Authors: Junali Deka

Abstract:

“Pictures speak more than a thousand words”. It is the power of the image that can carry multiple meanings, depending on how it is read by the viewers. This research article is an outcome of an extensive study of the ‘selfie culture’ phenomenon and the dire need for a self-constructed virtual identity among youths. In recent times, there has been a revolutionary change in the concept of photography in terms of both techniques and applications. The popularity of ‘self-portraits’ mainly depends on the temporal space and time created on social networking sites like Facebook and Instagram. With reference to Stuart Hall's encoding and decoding model, the article studies the behavior of users who post photographs online. The photographic messages (Roland Barthes) are interpreted differently by different viewers. The notions of ‘self’ and ‘self-love’, the practice of looking (Marita Sturken), and ways of seeing (John Berger) have together acquired new definitions and dimensions. After Oscar night, show host Ellen DeGeneres’s selfie created the most buzz and hype in social media. In November 2013, the word "selfie" was announced as the "word of the year" by the Oxford English Dictionary, earning its place in the dictionary. By the end of 2012, Time magazine had already considered selfie one of the "top 10 buzzwords" of that year; although selfies had existed long before, it was in 2012 that the term, of Australian origin, really hit the big time. The present study was carried out to understand the concept of the ‘selfie bug’ and the phenomenon it has created among youth (especially students) at large in developing a pseudo-image of one's own. The topic is relevant and provides a platform to discuss the cultural, psychological, and sociological implications of the selfie in the age of digital technology.
At the first level, a content analysis of primary and secondary sources, including newspaper articles and online resources, was carried out, followed by a small online survey conducted with the help of a questionnaire to find out students' views on the selfie and its social and psychological effects. The newspaper reports and online resources confirmed that the selfie is a new trend in digital media and has redefined the notion of beauty and self-love. Facebook and Instagram are the major platforms used to express oneself and create a virtual identity. The findings clearly reflected the active participation of female students in comparison to male students. The study of the photographs of a few selected respondents revealed differences in attitude and image building between male and female users. The study underlines some basic questions about the desire for identity reconstruction among the young generation: are they becoming culturally narcissistic; which factors are responsible for the cultural, social, and moral changes in society; and what psychological and technological effects are caused by the smartphone, culminating in the larger question of whether the selfie is a social signifier of identity construction.

Keywords: culture, narcissism, photographs, selfie

Procedia PDF Downloads 380
398 One Pot Synthesis of Cu–Ni–S/Ni Foam for the Simultaneous Removal and Detection of Norfloxacin

Authors: Xincheng Jiang, Yanyan An, Yaoyao Huang, Wei Ding, Manli Sun, Hong Li, Huaili Zheng

Abstract:

The residual antibiotics in the environment pose a threat to the environment and human health. Thus, efficient removal and rapid detection of norfloxacin (NOR) in wastewater are very important. The main sources of NOR pollution are agricultural, pharmaceutical-industry, and hospital wastewater. The total consumption of NOR in China can reach 5440 tons per year. It is found that neither animals nor humans can totally absorb and metabolize NOR, resulting in the excretion of NOR into the environment. Therefore, residual NOR has been detected in water bodies. The hazards of NOR in wastewater lie in three aspects: (1) the removal capacity of wastewater treatment plants for NOR is limited (it is reported that the average removal efficiency of NOR in wastewater treatment plants is only 68%); (2) NOR entering the environment will lead to the emergence of drug-resistant strains; (3) NOR is toxic to many aquatic species. At present, the removal and detection technologies of NOR are applied separately, which leads to a cumbersome operation process. The development of simultaneous adsorption-flocculation removal and FTIR detection of pollutants has three advantages: (1) adsorption-flocculation technology promotes the detection technology (the enrichment effect on the material surface improves the detection ability); (2) the integration of adsorption-flocculation technology and detection technology reduces the material cost and makes the operation easier; (3) FTIR detection technology endows the water treatment agent with the ability of molecular recognition and semi-quantitative detection of pollutants. Thus, it is of great significance to develop a smart water treatment material with high removal capacity and detection ability for pollutants. This study explored the feasibility of combining a NOR removal method with a semi-quantitative detection method.
A magnetic Cu-Ni-S/Ni foam was synthesized by in-situ loading of Cu-Ni-S nanostructures on the surface of Ni foam. The novelty of this material is the combination of adsorption-flocculation technology and semi-quantitative detection technology. Batch experiments showed that Cu-Ni-S/Ni foam has a high removal rate of NOR (96.92%), wide pH adaptability (pH = 4.0-10.0), and strong resistance to ion interference (0.1-100 mmol/L). According to the Langmuir fitting model, the removal capacity can reach 417.4 mg/g at 25 °C, which is much higher than that of other water treatment agents reported in most studies. Characterization analysis indicated that the main removal mechanisms are surface complexation, cation bridging, electrostatic attraction, precipitation, and flocculation. Transmission FTIR detection experiments showed that NOR on Cu-Ni-S/Ni foam has easily recognizable FTIR fingerprints, and the intensity of the characteristic peaks roughly reflects the concentration. This semi-quantitative detection method has a wide linear range (5-100 mg/L) and a low limit of detection (4.6 mg/L). These results show that Cu-Ni-S/Ni foam has excellent removal performance and semi-quantitative detection ability for NOR molecules. This paper provides a new idea for designing and preparing multi-functional water treatment materials to achieve simultaneous removal and semi-quantitative detection of organic pollutants in water.
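The Langmuir fitting mentioned above can be sketched as follows. The equilibrium data and the affinity constant K_L below are hypothetical; only the fitted capacity of 417.4 mg/g comes from the abstract. The linearised form C/q = C/q_max + 1/(K_L*q_max) lets both parameters be recovered by ordinary least squares.

```python
import numpy as np

# Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C).
# Linearised: C/q = C/q_max + 1/(K_L * q_max), a straight line in C.

def fit_langmuir(c_eq, q):
    """Recover q_max (mg/g) and K_L (L/mg) from equilibrium data."""
    slope, intercept = np.polyfit(c_eq, c_eq / q, 1)
    q_max = 1.0 / slope
    k_l = slope / intercept
    return q_max, k_l

# Synthetic equilibrium data around the reported capacity (417.4 mg/g);
# K_L = 0.05 L/mg is a hypothetical affinity constant for illustration.
c_eq = np.array([5, 10, 20, 50, 100, 200, 400], dtype=float)   # mg/L
q_obs = 417.4 * 0.05 * c_eq / (1.0 + 0.05 * c_eq)              # mg/g

q_max_fit, k_l_fit = fit_langmuir(c_eq, q_obs)
print(round(q_max_fit, 1), round(k_l_fit, 3))
```

With real (noisy) batch data the same fit yields the reported capacity together with a goodness-of-fit measure for comparing isotherm models.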

Keywords: adsorption-flocculation, antibiotics detection, Cu-Ni-S/Ni foam, norfloxacin

Procedia PDF Downloads 49
397 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is registered with the Therapeutic Goods Administration (TGA) and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The performance of the Alinity i TBI test was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild traumatic brain injury (TBI) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of these 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%).
The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the test's utility in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
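The reported clinical metrics follow directly from the 2x2 counts given in the study (116 of 120 CT-positive subjects test-positive; 713 of 1779 CT-negative subjects test-negative). A minimal sketch of the arithmetic:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and NPV from 2x2 confusion counts."""
    sensitivity = tp / (tp + fn)   # true positives among CT-positive
    specificity = tn / (tn + fp)   # true negatives among CT-negative
    npv = tn / (tn + fn)           # CT-negative among test-negative
    return sensitivity, specificity, npv

# Counts from the pivotal study: 120 CT-positive (116 detected),
# 1779 CT-negative (713 with a negative TBI interpretation).
sens, spec, npv = diagnostic_metrics(tp=116, fn=4, tn=713, fp=1066)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, NPV {npv:.1%}")
```

The high-sensitivity/low-specificity trade-off is typical for a rule-out test: the design goal is a high NPV so that a negative result can safely avert a CT scan.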

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 38
396 Temperature-Dependent Post-Mortem Changes in Human Cardiac Troponin-T (cTnT): An Approach in Determining Postmortem Interval

Authors: Sachil Kumar, Anoop Kumar Verma, Wahid Ali, Uma Shankar Singh

Abstract:

Globally, approximately 55.3 million people die each year; in India, there were about 95 lakh deaths in 2013. The number of deaths resulting from homicides, suicides, and unintentional injuries in the same period was about 5.7 lakh. The ever-increasing crime rate necessitates the development of methods for determining the time since death. An erroneous time-of-death window can lead investigators down the wrong path or possibly focus a case on an innocent suspect. In this regard, research was carried out analyzing the temperature-dependent postmortem degradation of cardiac troponin-T (cTnT) in the myocardium as a marker for time since death. Cardiac tissue samples were collected from medico-legal autopsies (n=6) in the Department of Forensic Medicine and Toxicology, King George’s Medical University, Lucknow, India, after informed consent from the relatives. Post-mortem degradation was studied by incubating the cardiac tissue at room temperature (20±2 °C), 12 °C, 25 °C, and 37 °C for different time periods (~5, 26, 50, 84, 132, 157, 180, 205, and 230 hours). The cases included were subjects of road traffic accidents (RTA) without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. The data show a distinct temporal profile corresponding to the degradation of cTnT by proteases found in cardiac muscle. The disappearance of intact cTnT and the appearance of lower-molecular-weight bands are easily observed. Western blot data clearly showed the intact protein at 42 kDa, two major fragments (27 kDa, 10 kDa), two additional minor fragments (32 kDa), and the formation of low-molecular-weight fragments as time increases.
At 12 °C the intensity of the intact cTnT band decreased more gradually than at room temperature, 25 °C, and 37 °C. Overall, both PMI and temperature had a statistically significant effect; the greatest amount of protein breakdown was observed within the first 38 h and at the highest temperature, 37 °C. The combination of high temperature (37 °C) and long postmortem interval (105.15 hrs) had the most drastic effect on the breakdown of cTnT. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against the log of the time postmortem. These plots show a good coefficient of correlation of r = 0.95 (p = 0.003) for the regression of the human heart under different temperature conditions. The data presented demonstrate that this technique can provide an extended time range during which the postmortem interval can be estimated more accurately.
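The pseudo-first-order treatment described above can be sketched with synthetic data. Only the incubation times and the linear-in-log-time model come from the abstract; the trend slope, intercept, and noise below are hypothetical, not the study's fitted values.

```python
import numpy as np

# Pseudo-first-order decay: percent intact cTnT falls linearly with
# log10(hours post-mortem).  Hypothetical trend for illustration.
hours = np.array([5, 26, 50, 84, 132, 157, 180, 205, 230], dtype=float)
pct_intact = 100.0 - 35.0 * np.log10(hours)
pct_intact += np.array([1.2, -0.8, 0.5, -1.1, 0.9, -0.4, 0.7, -0.6, 0.3])

# Linear regression of percent intact against log(time) and its correlation
x = np.log10(hours)
slope, intercept = np.polyfit(x, pct_intact, 1)
r = np.corrcoef(x, pct_intact)[0, 1]
print(round(slope, 1), round(r, 3))
```

Inverting the fitted line, t = 10**((pct - intercept) / slope), would then give a PMI estimate for a measured percent-intact value, which is the practical use of such a calibration.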

Keywords: degradation, postmortem interval, proteolysis, temperature, troponin

Procedia PDF Downloads 362
395 Robots for the Elderly at Home: For Men Only

Authors: Christa Fricke, Sibylle Meyer, Gert G. Wagner

Abstract:

Our research focuses on the question of whether assistive and social robotics could be a promising strategy to support the independent living of elderly people and potentially relieve relatives of anxieties. To answer the question of how elderly people perceive the potential of robotics, we analysed data from the Berlin Aging Study BASE-II (https://www.base2.mpg.de/de) (N=1463) and from the German SYMPARTNER study (http://www.sympartner.de) (N=120). BASE-II is a cohort study of people living in Berlin, Germany. The sample covers more than 2200 cases; a questionnaire on the use and acceptance of assistive and social robots was administered to a sub-sample of 1463 respondents in 2015. The SYMPARTNER study was conducted by the SIBIS Institute of Social Research, Berlin, and included a total of 120 persons between the ages of 60 and 87 in Berlin and the rural German federal state of Thuringia. Both studies included a control group of persons between the ages of 20 and 35 (BASE-II: N=241; SYMPARTNER: N=30). Additional data, representative of the whole population in Germany, will be surveyed in fall 2017 (survey “Technikradar” [technology radar] by the National Academy of Science and Engineering). Since this survey includes some questions identical to those in BASE-II and SYMPARTNER, comparative results can be presented at the 20th International Conference on Social Robotics in New York in 2018. The richness of the data gathered in BASE-II and SYMPARTNER, encompassing detailed socio-economic background characteristics as well as personality traits such as attitude to risk taking, locus of control, and the Big Five, proves highly valuable. Results show that participants’ expressions of resentment against robots are comparatively low. Participants’ personality traits play a role; however, the effect sizes are small.
Only 15 percent of participants received domestic robots with great scepticism. Participants older than 70 years expressed the greatest rejection of the robotic assistant; however, the effect sizes account for only a few percentage points. Overall, participants were surprisingly open to the robot and its usefulness. The analysis also shows that men’s acceptance of the robot is generally greater than that of women (with odds ratios of about 0.6 to 0.7). This applies both to assistive robots in the private household and in care environments. Men expect greater benefits from the robot than women, and women tend to be more sceptical of its technical feasibility than men. Interview results support our hypothesis that men, in particular those in the age group 60+, are more accustomed to delegating household chores to women; delegation to machines instead of humans therefore seems plausible. The answer to the title question of this presentation is: social and assistive robots at home are accepted not only by men, but by fewer women than men.
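An odds ratio of the kind reported above (about 0.6 to 0.7 for women relative to men) compares the odds of acceptance in the two groups. A minimal sketch; the counts below are hypothetical, chosen only to land in that range, and are not the study's raw frequencies:

```python
def odds_ratio(accept_a, reject_a, accept_b, reject_b):
    """Odds ratio of acceptance in group A relative to group B."""
    return (accept_a / reject_a) / (accept_b / reject_b)

# Hypothetical accept/reject counts for illustration only
women = (120, 80)
men = (150, 65)
print(round(odds_ratio(*women, *men), 2))  # prints 0.65
```

An odds ratio below 1 means the first group's odds of acceptance are lower, which is how the reported 0.6-0.7 values express women's lower acceptance.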

Keywords: acceptance, care, gender, household

Procedia PDF Downloads 178
394 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs

Authors: Ignitia Motjolopane

Abstract:

Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work have created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation. SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether SMEs are starting a new business, pursuing growth, or seeking profitability. Integrating generative AI in start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature, however, lacks comprehensive frameworks and guidelines for effectively integrating generative AI in start-ups' reiterative business model innovation paths. This paper examines the start-up business model innovation path with generative AI. A theoretical approach is used to examine the start-up-focused SME reiterative business model innovation path with generative AI, articulating how generative AI may be used to help SMEs systematically and cyclically build a business model covering most or all business model components, and to analyse and test the model's viability throughout the process. As such, the paper explores the use of generative AI in market exploration.
Moreover, market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. Furthermore, the paper examines the use of generative AI in developing and testing viable value propositions and business models. In addition, the paper looks into identifying and selecting partners with generative AI support. Selecting the right partners is crucial for start-ups and may significantly impact success. The paper also examines generative AI usage in choosing the right information technology, the funding process, revenue model determination, and stress testing business models. Stress testing a business model validates its strong and weak points by applying scenarios and evaluating the robustness of individual business model components and the interrelations between components. Stress testing may thus address key uncertainties, as misalignment between an organisation and its environment has been recognised as the leading cause of company failure. Generative AI may be used to generate business model stress-testing scenarios. The paper is expected to make both a theoretical and a practical contribution to theory and approaches in crafting a robust business model innovation path with generative artificial intelligence in start-up SMEs.
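The scenario-based stress testing described above can be illustrated with a small sketch. All figures, scenario names, and the toy subscription model below are invented assumptions for the example; generative AI would be used to propose richer scenarios than this fixed grid.

```python
from itertools import product

# Hypothetical illustration: stress-testing a bare-bones start-up revenue
# model by sweeping demand and cost scenarios and flagging loss-making
# combinations as weak points. All numbers are invented for the sketch.

def monthly_profit(customers, price, unit_cost, fixed_cost):
    """Profit of a simple subscription business model."""
    return customers * (price - unit_cost) - fixed_cost

demand_scenarios = {"pessimistic": 200, "baseline": 500, "optimistic": 900}
cost_scenarios = {"lean": 4.0, "expected": 6.0, "inflated": 9.0}

weak_points = []
for (d_name, customers), (c_name, unit_cost) in product(
        demand_scenarios.items(), cost_scenarios.items()):
    profit = monthly_profit(customers, price=12.0, unit_cost=unit_cost,
                            fixed_cost=3000.0)
    if profit < 0:
        weak_points.append((d_name, c_name, profit))

for scenario in weak_points:
    print(scenario)
```

Evaluating the robustness of each component then amounts to asking which scenarios push the model below break-even, and whether those scenarios are plausible.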

Keywords: business models, innovation, generative AI, small medium enterprises

Procedia PDF Downloads 46
393 Assessing the Plant Diversity's Quality, Threats and Opportunities for the Support of Sustainable City Development of the City Raipur, India

Authors: Katharina Lapin, Debashis Sanyal

Abstract:

Worldwide, urban areas are growing. Urbanization has a great impact on social and economic development and on ecosystem services. This global trend of urbanization also has a significant impact on habitat and biodiversity. The impact of urbanization on the biodiversity of cities in Europe and North America is well studied, while there is a lack of data from cities in currently fast-growing regions. Indian cities are expanding. The scientific community and the governmental authorities approach the ongoing urbanization process as an opportunity for the environment. This case study supports the evaluation of the urban biodiversity of the city of Raipur, India. The aim of this study is to provide an overview of the environmental and ecological implications of urbanization. The collected data and analyses were used to discuss the challenges for sustainable city development. Vascular plants were chosen as an appropriate indicator for the assessment of local biodiversity changes. On the one hand, vegetation cover is sensitive to anthropogenic influence; on the other hand, the local species composition is comparable to changes at the regional and national scale, using the plant index of India. Further information on the abiotic situation can be gathered by identifying indicator species. In order to calculate the influence of urbanization on native plant diversity, the Shannon diversity index H′ was chosen. Pielou's pooled quadrat method was used for estimating diversity when a random sample cannot be assumed, and for calculating Pielou's index of evenness (J). Estimated species cover was used to calculate H′ and J. Pearson correlation was performed to test the relationship between urbanization pattern and plant diversity. Further, a SWOT analysis was used for analyzing internal and external factors impinging on the decision-making process.
The city of Raipur (21.25°N 81.63°E) has a population of 1,010,087 inhabitants living in an urban area of 226 km² in the Indian state of Chhattisgarh. Within the last decade, the urban area of Raipur has increased. The results show that various novel ecosystems exist in the urban area of Raipur. Most of the native flora is found along the shores of the urban lakes and the river Karun. These areas of high biodiversity should be protected as urban biodiversity hotspots. The governmental authorities are well informed about the environmental challenges for the sustainable development of the city. Together with the scientific community of the Technical University of Raipur, many engineering solutions are being discussed for future implementation. The case study helped to point out the importance of environmental measures that support the ecosystem services of green infrastructure. The fast process of urbanization is difficult to control. Uncontrolled expansion of urban housing leads to unsustainable use of natural resources; this is the major threat to urban biodiversity.
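As a concrete illustration of the diversity measures named above, the sketch below computes the Shannon index H′ and Pielou's evenness J from species cover values; the cover figures are invented for the example and do not come from the Raipur survey.

```python
import math

# Illustrative computation of the Shannon diversity index H' and Pielou's
# evenness J = H' / ln(S) from estimated species cover; the cover values
# below are hypothetical, not survey data.

def shannon_diversity(cover):
    """H' = -sum(p_i * ln p_i), with p_i the proportional cover of species i."""
    total = sum(cover)
    return -sum((c / total) * math.log(c / total) for c in cover if c > 0)

def pielou_evenness(cover):
    """J = H' / ln(S), where S is the number of species present."""
    s = sum(1 for c in cover if c > 0)
    return shannon_diversity(cover) / math.log(s)

plot_cover = [40.0, 25.0, 20.0, 10.0, 5.0]  # % cover of five species
h = shannon_diversity(plot_cover)
j = pielou_evenness(plot_cover)
print(f"H' = {h:.3f}, J = {j:.3f}")
```

J close to 1 indicates that cover is spread evenly across species; values well below 1 indicate dominance by a few species.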

Keywords: India, novel ecosystems, plant diversity, urban ecology

Procedia PDF Downloads 253
392 Long Non-Coding RNAs Mediated Regulation of Diabetes in Humanized Mouse

Authors: Md. M. Hossain, Regan Roat, Jenica Christopherson, Colette Free, Zhiguang Guo

Abstract:

Long noncoding RNA (lncRNA) mediated post-transcriptional gene regulation and the associated epigenetic landscapes have been shown to be involved in many human diseases. However, their role in diabetes, through governing islet β-cell function and survival, remains to be elucidated. Due to technical and ethical constraints, it is difficult to study their role in β-cell function and survival in humans under in vivo conditions. In this study, humanized mice were developed by transplanting human pancreatic islets under the kidney capsule of NOD.scid mice, and β-cell death leading to diabetes was induced in order to study lncRNA-mediated regulation. For this, human islets from 3 donors (3000 IEQ, purity > 80%) were transplanted under the kidney capsule of STZ-induced diabetic NOD.scid mice. After at least 2 weeks of normoglycemia, lymphocytes from diabetic NOD mice were adoptively transferred, and islet grafts were collected once blood glucose reached > 200 mg/dl. RNA from human donor islets and from islet grafts of humanized mice with either adoptive lymphocyte transfer (ALT) or PBS control (CTL) was ribodepleted; barcoded fragment libraries were constructed and sequenced on the Ion Proton sequencer. lncRNA expression in isolated human islets and in islet grafts from humanized mice with and without induced β-cell death, as well as its regulation of human islet function in vitro under glucose challenge, cytokine-mediated inflammation, and induced apoptotic conditions, was investigated. Of 3155 detected lncRNAs, 299 that are highly expressed in islets were significantly downregulated and 224 upregulated in ALT compared to CTL. Most of these were found to be collocated within 5 kb upstream and 1 kb downstream of 788 up- and 624 down-regulated mRNAs. Genomic Regions Enrichment of Annotations analysis revealed that the deregulated and collocated genes are related to pancreas endocrine development; insulin synthesis, processing, and secretion; pancreatitis; and diabetes.
Many of them, which were found to be located within enhancer domains for islet-specific gene activity, are associated with the deregulation of known islet/β-cell specific transcription factors and of genes that are important for β-cell differentiation, identity, and function. RNA sequencing analysis revealed aberrant lncRNA expression associated with deregulated mRNAs involved in β-cell function as well as in molecular pathways related to diabetes. A distinct set of candidate lncRNA isoforms was identified as highly enriched in and specific to human islets; these are deregulated in human islets from donors with different BMIs and with type 2 diabetes. These RNAs show an interesting regulation in cultured human islets under glucose stimulation and with cytokine-induced β-cell death. Aberrant expression of these lncRNAs was also detected in exosomes from the media of islets cultured with cytokines. The results of this study suggest that islet-specific lncRNAs are deregulated in human islets with β-cell death and are hence important in diabetes. These lncRNAs might be important for human β-cell function and survival and could thus serve as biomarkers and novel therapeutic targets for diabetes.
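The collocation criterion used above (within 5 kb upstream and 1 kb downstream of a deregulated mRNA) can be sketched as a simple interval test. The coordinates below are hypothetical and strand handling is simplified to the plus strand; a real analysis would use genome annotations.

```python
# Hypothetical sketch of the collocation test: an lncRNA counts as collocated
# with an mRNA if it overlaps the window from 5 kb upstream of the gene start
# to 1 kb downstream of the gene end (plus strand only, for simplicity).

def is_collocated(lnc_start, lnc_end, gene_start, gene_end,
                  upstream=5000, downstream=1000):
    """True if [lnc_start, lnc_end] overlaps
    [gene_start - upstream, gene_end + downstream]."""
    window_start = gene_start - upstream
    window_end = gene_end + downstream
    return lnc_start <= window_end and lnc_end >= window_start

# lncRNA 3 kb upstream of the gene body: inside the 5 kb window.
print(is_collocated(7000, 7500, 10000, 12000))    # True
# lncRNA 8 kb downstream of the gene body: outside the 1 kb window.
print(is_collocated(20000, 20500, 10000, 12000))  # False
```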

Keywords: β-cell, humanized mouse, pancreatic islet, LncRNAs

Procedia PDF Downloads 138
391 Purpose-Driven Collaborative Strategic Learning

Authors: Mingyan Hong, Shuozhao Hou

Abstract:

Collaborative Strategic Learning (CSL) teaches students to use learning strategies while working cooperatively. Student strategies include the following steps: defining the learning task and purpose; conducting ongoing negotiation of the learning materials by deciding "click" (I get it and I can teach it – green card; I get it – yellow card) or "clunk" (I don't get it – red card) at the end of each learning unit; "getting the gist" of the most important parts of the learning materials; and "wrapping up" key ideas. The approach shows how to help students of mixed achievement levels apply learning strategies while learning content-area materials in small groups. The design of CSL is based on social constructivism and Vygotsky's best-known concept, the Zone of Proximal Development (ZPD). The ZPD is the distance between the actual developmental level, as determined by independent problem solving, and the level of potential development, as determined through problem solving under a facilitator's guidance or in group work with more capable members (Vygotsky, 1978) – similar to Krashen's (1980) i+1. Vygotsky claimed that learners' ideal learning environment lies in the ZPD. An ideal teacher, or more knowledgeable other (MKO), should be able to recognize a learner's ZPD and facilitate development beyond it. The MKO can then withdraw support step by step until the learner can perform the task without aid. Stephen Krashen (1980) proposed the Input Hypothesis, including the i+1 hypothesis. The input hypothesis models are an application of the ZPD in second language acquisition and have been widely recognized to this day. Krashen's (2019) optimal language learning environment further developed the application of the ZPD and added the component of strategic group learning.
Strategic group learning is composed of desirable learning materials that learners are motivated to learn and desirable group members who are more capable and therefore able to offer meaningful input to the learners. The Purpose-Driven Collaborative Strategic Learning Model is a strategic integration of the ZPD, the i+1 hypothesis model, and the Optimal Language Learning Environment Model. It is purpose-driven to ensure that group members are motivated. It is collaborative, so that an optimal learning environment is created in which meaningful input can be generated from meaningful conversation. It is strategic because facilitators in the model strategically assign each member a meaningful and collaborative role (e.g., team leader, technician, problem solver, appraiser), offer a group-learning instrument so that the learning process is structured, and integrate group learning with team building, ensuring the holistic development of each participant. Using data collected from college year-one and year-two students' English courses, this presentation will demonstrate how the purpose-driven collaborative strategic learning model is implemented in the second/foreign language classroom, drawing on qualitative data from questionnaires and interviews. In particular, this presentation will show how second/foreign language learners grow from functioning with the aid of a facilitator or more capable peer to performing without aid. The implication of this research is that the purpose-driven collaborative strategic learning model can be used not only in language learning but also in any subject area.

Keywords: collaborative, strategic, optimal input, second language acquisition

Procedia PDF Downloads 105
390 Analyzing the Effects of a Psychological Intervention on Black Students’ Sense of Belonging in Physics and Math: Exploring Differential Impacts for Historically Black Colleges and Universities and Predominantly White Institutions

Authors: Terrell Strayhorn

Abstract:

The lack of diversity in science, technology, engineering, and mathematics (STEM) fields is a persistent and concerning issue. One contributing factor to the underrepresentation of minority groups in STEM fields is a lack of sense of belonging, which can lead to lower levels of academic engagement, motivation, and achievement. In particular, Black students have been shown to experience lower levels of sense of belonging in STEM compared to their white peers. This study aimed to explore the effects of a psychological intervention on Black students' sense of belonging in physics and math courses at historically Black colleges and universities (HBCUs) and predominantly white institutions (PWIs). The study used a randomized controlled trial design and included 305 Black undergraduate students enrolled in physics or math courses at HBCUs and PWIs in the United States. Participants were randomly assigned to either an intervention group or a control group. The intervention was a brief, video-based psychological session designed to enhance sense of belonging, delivered in a single sitting. The control group received no intervention. The primary outcome measure was sense of belonging in physics and math courses, as assessed by a validated self-report measure. Other outcomes included academic engagement, motivation, and achievement as measured by physics and math course grades. Preliminary results show that the intervention had a significant positive effect on Black students' sense of belonging in physics and math courses, with a moderate effect size. The intervention also had a significant positive effect on academic engagement and motivation, but not on academic achievement. Importantly, the effects of the intervention were larger for Black students enrolled at PWIs compared to those enrolled at HBCUs.
Findings, at present, suggest that a brief psychological web-based intervention can enhance Black students' sense of belonging in physics and math courses, and that the effects may be particularly strong for Black students enrolled at PWIs, although they are not negligible for Black students at HBCUs. This is an important finding given the persistent underrepresentation of Black students in STEM fields, the growing number of Black students at PWIs, and the potential for enhancing sense of belonging to improve academic outcomes and increase diversity in these fields. The study has several limitations, including a relatively small sample size and a lack of long-term follow-up. Future research could explore the generalizability of these findings to other minority groups and other STEM fields, as well as the potential for longer-term interventions to sustain and enhance the effects observed in this study. Overall, this study highlights the potential for psychological interventions to enhance sense of belonging and improve academic outcomes for Black students in STEM courses, and underscores the importance of addressing sense of belonging as a key factor in promoting diversity and equity in STEM fields.
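The reported group comparison can be illustrated with a minimal sketch of the standardized mean difference (Cohen's d) between intervention and control scores; the simulated scores below are assumptions for the example and do not reproduce the study's data.

```python
import math
import random

# Illustrative sketch: Cohen's d between intervention and control
# sense-of-belonging scores. The scores are simulated, not the study's data.

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

random.seed(42)
intervention = [random.gauss(4.1, 0.8) for _ in range(150)]
control = [random.gauss(3.7, 0.8) for _ in range(155)]
print(f"d = {cohens_d(intervention, control):.2f}")
```

With these simulated parameters, d lands near the conventional "moderate" benchmark of about 0.5.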

Keywords: sense of belonging, achievement, racial equity, postsecondary education, intervention

Procedia PDF Downloads 49
389 Forced Migration and Access to Maternal Healthcare in Internally Displaced Persons Camps in North-Central Nigeria

Authors: Faith O. Olanrewaju

Abstract:

Internal displacement and the vulnerability of women are two critical aspects of forced migration that have dominated both global and local discourses. Statistics show that in November 2021, there were over 2.1 million internally displaced persons (IDPs) in Nigeria. The literature also states that displaced women and girls are more vulnerable than displaced men. They are susceptible to adverse experiences, including various forms of sexual violence and rape. As a result, displaced women and girls face psychological and physical traumas, including HIV/AIDS as well as unexpected or poorly spaced pregnancies. In addition, the poor living conditions of internally displaced women in IDP camps affect their reproductive health, pregnancy outcomes, and maternal mortality levels. Incontrovertibly, internally displaced women are a significant contributor to Nigeria's poor maternal health status, which is the second worst globally and the worst in Africa. World Health Organisation statistics show that approximately 536,000 girls and women die from pregnancy-related causes globally, and Nigeria accounts for 14% of global maternal deaths. Undeniably, this supports the claim that maternal mortality remains a challenge in Nigeria and can be exacerbated by internal displacement crises. Therefore, maternal mortality remains a critical impediment to the actualisation of the SDG 3.1 target. Owing to this, concerns arise about the quality of policy in Nigeria's health sector. More specifically, this study is concerned with the maternal health care services displaced women receive in IDP camps in the three states affected by internal displacement in north-central Nigeria, an understudied area. The novelty of the study also lies in its comparative investigation of maternal healthcare service delivery in three different camp structures (faith-based, government, and informal IDP camps), a pattern that is absent from the literature.
Therefore, this study will investigate how camp structures affect access to maternal health services in the study areas; analyse the successes and challenges in the delivery of maternal health care services to displaced women in the various camps; and recommend strategies for reducing maternal healthcare disparities across IDP camps in Nigeria (should they exist). It will adopt a mixed-methods approach and a multi-stage sampling technique. A total of 1,152 copies of the study questionnaire will be distributed to displaced pregnant and nursing mothers (PNM); nine focus group discussions will also be held with the displaced PNM; and in-depth interviews will be conducted with humanitarian actors, policymakers, and health professionals. The quantitative and qualitative data will be analysed using the Statistical Package for the Social Sciences (SPSS) 21.0 and thematic analysis, respectively. The findings of the study will be used to develop a model of care that addresses the fragmentation in Nigeria's healthcare system. The findings will also inform the development of best policies and practices in the maternal health of displaced women.

Keywords: forced displacement, internally displaced women, maternal healthcare, maternal mortality

Procedia PDF Downloads 143
388 Reading Comprehension in Profound Deaf Readers

Authors: S. Raghibdoust, E. Kamari

Abstract:

Research shows that reduced functional hearing has a detrimental influence on an individual's ability to establish proper phonological representations of words, since phonological representations are claimed to mediate the conceptual processing of written words. Word processing efficiency is expected to decrease with a decrease in functional hearing. In other words, it is predicted that hearing individuals are more capable of word processing than individuals with hearing loss, as their functional hearing works normally. Studies also demonstrate that the quality of functional hearing affects reading comprehension via its effect on word processing skills. In other words, better hearing facilitates the development of phonological knowledge and can promote enhanced strategies for the recognition of written words, which in turn positively affect the higher-order processes underlying reading comprehension. The aims of this study were to investigate and compare the effect of deafness on the participants' abilities to process written words at the lexical and sentence levels, using two online tests and one offline reading comprehension test. The performance of a group of 8 deaf male students (ages 8-12) was compared with that of a control group of normal-hearing male students. All participants had normal IQ and visual status and came from an average socioeconomic background. None was diagnosed with a particular learning or motor disability. The language spoken in the homes of all participants was Persian. Two tests of word processing were developed and presented to the participants using OpenSesame software in order to measure the speed and accuracy of their performance at the perceptual and conceptual levels. In the third, offline test of reading comprehension, which comprised semantically plausible and semantically implausible subject relative clauses, the participants had to select the correct answer out of two choices.
The data, analysed statistically using SPSS software, indicated that hearing and deaf participants showed similar word processing performance, in terms of both speed and accuracy of responses. The results also showed no significant difference between the performance of the deaf and hearing participants in comprehending semantically plausible sentences (p > 0.05). However, a significant difference between the two groups was observed with respect to their comprehension of semantically implausible sentences (p < 0.05). In sum, the findings revealed that the seriously impoverished sentence reading ability characterizing the profoundly deaf subjects of the present research reflects their reliance on reading strategies based on insufficient or deviant structural knowledge, in particular when processing semantically implausible sentences, rather than a failure to efficiently process written words at the lexical level. This conclusion, of course, does not mean that deaf individuals never experience deficits at the word processing level, deficits that impede their understanding of written texts. However, as stated in previous research, it seems reasonable to assume that the more familiar deaf individuals become with written words, the better they can recognize them, despite having a profound phonological weakness.

Keywords: deafness, reading comprehension, reading strategy, word processing, subject and object relative sentences

Procedia PDF Downloads 310
387 The Use of Social Media Sarcasm as a Response to Media-Coverage of Iran’s Unprecedented Attack on Israel

Authors: Afif J Arabi

Abstract:

On April 15, 2024, Iran announced its unprecedented military attack, sending waves of more than 300 drones and ballistic missiles toward Israel. The attack lasted approximately five hours and was a widely covered, distributed, and followed media event. Iran's military action against Israel had been long awaited across the Middle East since the early days of the October 7th war on Gaza and after a long history of verbal threats. While people in many Arab countries stayed up past midnight in anticipation of watching the disastrous results of this unprecedented attack, voices on traditional and social media alike started to question the timed public announcement of the attack, which gave Israel at least a two-hour notice to prepare its defenses. When live news coverage started showing that nearly all the drones and missiles were intercepted by Israel – with help from the U.S. and other countries – and no deaths were reported, the social media response to this media event turned toward sarcasm, mockery, irony, and humor. Social media users posted sarcastic pictures, jokes, and comments mocking the Iranian offensive. This research examines this unique media event and the sarcastic response it generated on social media. The study aims to investigate the causes leading to media sarcasm in militarized political conflict, the social function of such sarcasm, and the role of social media as a platform for passively venting frustration, dissatisfaction, and outrage through various media products. The study compares the serious traditional media coverage of the event with the humorous social media response among Arab countries. The research uses an eclectic theoretical approach, drawing on framing theory as a paradigm for understanding the coverage and on social functionalism theory in media studies to examine sarcasm.
Social functionalism theory is a sociological perspective that views society as a complex system whose parts work together to promote solidarity and stability. In the context of media and sarcasm, this theory suggests that sarcasm serves specific functions within society, such as reinforcing social norms, providing a means for social critique, or acting as a safety valve for expressing social tension. The study also includes a qualitative analysis of specific examples, including responses of social media commentators to such manifestations of political criticism. The preliminary findings of this study point to a heightened dramatization of the televised event and a widespread belief that the attack was a staged show, incongruent with Iran's official enmity and death threats toward Israel. The social media sarcasm reinforces Arabs' view of Iran and Israel as mutual threats. This belief stems from the complex dynamics, historical context, and regional conflict surrounding these three parties: Iran, Israel, and the Arab states.

Keywords: social functionalism, social media sarcasm, television news framing, live militarized conflict coverage, Iran, Israel, communication theory

Procedia PDF Downloads 10
386 Quality Characteristics of Road Runoff in Coastal Zones: A Case Study in A25 Highway, Portugal

Authors: Pedro B. Antunes, Paulo J. Ramísio

Abstract:

Road runoff is a linear source of diffuse pollution that can cause significant environmental impacts. During rainfall events, pollutants from both stationary and mobile sources, which have accumulated on the road surface, are carried away by the surface runoff. Road runoff in coastal zones may present high levels of salinity and chlorides due to the proximity of the sea and transported marine aerosols. Organic matter concentration, which appears to be correlated with this process, may also be significant. This study assesses this phenomenon with the purpose of identifying relationships between monitored water quality parameters and intrinsic site variables. To achieve this objective, an extensive monitoring program was conducted on a Portuguese coastal highway. The study included thirty rainfall events under different weather, traffic, and salt deposition conditions over a three-year period. Various water quality parameters were evaluated in over 200 samples. In addition, meteorological, hydrological, and traffic parameters were continuously measured. Salt deposition rates (SDR) were determined by means of a wet candle device, an innovative feature of the monitoring program. The SDR, variable throughout the year, shows a high correlation with wind speed and direction, but mostly with wave propagation, so that it is lower in the summer despite the favorable wind direction in the case study. The distance to the sea, topography, ground obstacles, and platform altitude also seem to be relevant. The high salinity of the runoff was confirmed, increasing the concentrations of the water quality parameters analyzed, with significant amounts of seawater features. In order to estimate the correlations and patterns of different water quality parameters and variables related to weather, road section, and salt deposition, the study included exploratory data analysis using different techniques (e.g.
Pearson correlation coefficients, cluster analysis, and principal component analysis), confirming some specific features of the investigated road runoff. Significant correlations among pollutants were observed. Organic matter was highlighted as strongly dependent on salinity. Indeed, data analysis showed that some important water quality parameters could be divided into two major clusters based on their correlations with salinity (including organic-matter-associated parameters) and with total suspended solids (including some heavy metals). Furthermore, the concentrations of the most relevant pollutants seemed to depend strongly on some meteorological variables, particularly the duration of the antecedent dry period prior to each rainfall event and the average wind speed. Based on the results of a monitoring case study in a coastal zone, it was shown that SDR, associated with the hydrological characteristics of road runoff, can contribute to a better understanding of runoff characteristics and help to estimate the specific nature of the runoff and related water quality parameters.
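The two-cluster pattern described above can be sketched by assigning each parameter to whichever tracer (salinity or total suspended solids) it correlates with more strongly; the synthetic data below merely mimic the reported pattern and are not the monitored measurements.

```python
import numpy as np

# Simplified sketch of the clustering result: each monitored parameter is
# grouped by whether it correlates more with salinity or with TSS. The
# synthetic data imitate the reported pattern (organic matter tracking
# salinity, particle-bound metals tracking TSS); nothing here is real data.

rng = np.random.default_rng(0)
n = 200  # number of runoff samples

salinity = rng.normal(size=n)
tss = rng.normal(size=n)
params = {
    "COD":      salinity + 0.3 * rng.normal(size=n),  # organic matter proxy
    "chloride": salinity + 0.1 * rng.normal(size=n),
    "zinc":     tss + 0.3 * rng.normal(size=n),       # particle-bound metal
    "copper":   tss + 0.4 * rng.normal(size=n),
}

clusters = {"salinity": [], "TSS": []}
for name, values in params.items():
    r_sal = np.corrcoef(values, salinity)[0, 1]
    r_tss = np.corrcoef(values, tss)[0, 1]
    clusters["salinity" if abs(r_sal) > abs(r_tss) else "TSS"].append(name)

print(clusters)
```

A full analysis would instead run hierarchical clustering on a correlation-based distance matrix, but the grouping logic is the same.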

Keywords: coastal zones, monitoring, road runoff pollution, salt deposition

Procedia PDF Downloads 214
385 Linear Evolution of Compressible Görtler Vortices Subject to Free-Stream Vortical Disturbances

Authors: Samuele Viaro, Pierre Ricco

Abstract:

Görtler instabilities arise in boundary layers from an imbalance between pressure and centrifugal forces caused by concave surfaces. Their streamwise evolution influences transition to turbulence. It is therefore important to understand even the early stages, where perturbations are still small, grow linearly, and could be controlled more easily. This work presents a rigorous theoretical framework for compressible flows using the linearized unsteady boundary region equations, in which only the streamwise pressure gradient and streamwise diffusion terms are neglected from the full governing equations of fluid motion. Boundary and initial conditions are imposed through an asymptotic analysis in order to account for the interaction of the boundary layer with free-stream turbulence. The resulting parabolic system is discretized with a second-order finite difference scheme. Realistic flow parameters are chosen from wind tunnel studies performed at supersonic and subsonic conditions. The Mach number ranges from 0.5 to 8, with two radii of curvature (5 m and 10 m), frequencies up to 2000 Hz, and vortex spanwise wavelengths from 5 mm to 20 mm. The evolution of the perturbation flow is shown through velocity, temperature, and pressure profiles relatively close to the leading edge, where non-linear effects can still be neglected, as well as through the growth rate. The results show that a global stabilizing effect exists with increasing Mach number, frequency, spanwise wavenumber, and radius of curvature. In particular, at high Mach numbers curvature effects are less pronounced and thermal streaks become stronger than velocity streaks. This increase of temperature perturbations saturates at approximately Mach 4 and is limited to the early stage of growth, near the leading edge. In general, Görtler vortices evolve closer to the surface than in a flat-plate scenario, but their location shifts toward the edge of the boundary layer as the Mach number increases.
In fact, a jet-like behavior appears for steady vortices with small spanwise wavelengths (less than 10 mm) at Mach 8, creating a region of unperturbed flow close to the wall. A similar response is also found at the highest frequency considered for a Mach 3 flow. Larger vortices are found to have a higher growth rate but are less influenced by the Mach number. An eigenvalue approach is also employed to study the amplification of the perturbations sufficiently far downstream from the leading edge. These eigenvalue results are compared with those obtained through the initial-value approach with inhomogeneous free-stream boundary conditions. All of the parameters studied here have a significant influence on the evolution of the instabilities for the Görtler problem, which is indeed highly dependent on initial conditions.
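The downstream-marching character of a parabolic system like the one described above can be illustrated on a model problem. The sketch below marches the analogue equation du/dx = d²u/dy² with second-order central differences in y and implicit backward Euler in x; it mirrors the numerical structure, not the physics, of the linearized boundary region equations, and all grid parameters are illustrative.

```python
import numpy as np

# Schematic analogue of downstream marching for a parabolic system:
# du/dx = d2u/dy2, advanced in x with backward Euler and second-order
# central differences in y (Dirichlet conditions u = 0 at both ends).

ny, dy = 101, 0.05
nx, dx = 200, 0.01
y = np.linspace(0.0, (ny - 1) * dy, ny)

# Second-order central-difference operator for d2/dy2.
D2 = (np.diag(np.ones(ny - 1), 1) - 2 * np.eye(ny)
      + np.diag(np.ones(ny - 1), -1)) / dy**2

# Implicit marching matrix: (I - dx * D2) u_new = u_old.
A = np.eye(ny) - dx * D2
A[0, :] = 0.0
A[-1, :] = 0.0
A[0, 0] = A[-1, -1] = 1.0  # enforce u = 0 at the boundaries

u = np.exp(-((y - 2.5) ** 2) / 0.1)  # initial perturbation profile
for _ in range(nx):
    rhs = u.copy()
    rhs[0] = rhs[-1] = 0.0
    u = np.linalg.solve(A, rhs)

print(f"peak amplitude after marching: {u.max():.4f}")
```

Because the system is parabolic in the marching direction, each station only requires the solution at the previous one, which is what makes the streamwise sweep of the boundary region equations tractable.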

Keywords: compressible boundary layers, Görtler instabilities, receptivity, turbulence transition

Procedia PDF Downloads 233
384 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient overall, while configuration 10, with 4 input variables, was considered the most effective among those using the fewest variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. 
The statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE as low as 0.01 mm/day and 0.03 mm/day for configurations 9 and 10, respectively. The models also achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. Furthermore, the results of this study suggest that the developed technique can be applied to other locations by using site-specific data to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the models' response to different seasonal and climatic conditions.
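The error metrics quoted above are standard and can be computed with a minimal sketch like the one below for a pair of observed/predicted ET₀ series. The numeric values are hypothetical, not the study's data.

```python
import math

def mae(obs, pred):
    """Mean Absolute Error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mse(obs, pred):
    """Mean Squared Error."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root Mean Squared Error."""
    return math.sqrt(mse(obs, pred))

def r2(obs, pred):
    """Coefficient of Determination (R^2)."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# hypothetical daily ET0 values in mm/day (not the study's data)
observed = [3.1, 4.2, 2.8, 5.0, 3.9]
predicted = [3.0, 4.3, 2.9, 4.9, 4.0]
```

An R² close to 1 and an MAE of a few hundredths of a mm/day, as reported above, indicate predictions that track the observed series almost exactly.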

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 20
383 Role of HIV-Support Groups in Mitigating Adverse Sexual Health Outcomes among HIV Positive Adolescents in Uganda

Authors: Lilian Nantume Wampande

Abstract:

Group-based strategies in the delivery of HIV care have opened up new avenues not only for meaningful participation by HIV positive people but also platforms for the deconstruction and reconstruction of knowledge about living with the virus. Yet the contributions of such strategies among patients who live in high-risk areas are still unexplored. This case study research assessed the impact of HIV support networks on the sexual health outcomes of HIV positive out-of-school adolescents residing in the fishing islands of Kalangala in Uganda. The study population was out-of-school adolescents living with HIV and their sexual partners (n=269), members of their households (n=80) and their health service providers (n=15). Data were collected via structured interviews, observations and focus group discussions between August 2016 and March 2017. Data were then analyzed inductively to extract key themes related to the approaches and outcomes of the groups' activities. The study findings indicate that support groups unite HIV positive adolescents in a bid for social renegotiation to achieve change, but individual constraints surpass the groups' intentions. Some adolescents, for example, reported increased fear, which led to failure to cope, sexual violence, self-harm and denial of status as a result of the high expectations placed on them as members of the support groups. Further investigation of this phenomenon shows that HIV networks serve as the sole information source for HIV positive out-of-school adolescents, which limits their initiative to seek information elsewhere. Results also indicate that HIV adolescent groups recognize the complexity of long-term treatment and retention in care, leading to improved immunity for the majority; yet there is still only scattered evidence about how effective they are among adolescents at different phases of the disease trajectory. 
Nevertheless, the primary focus on developing adolescent self-efficacy and coping skills significantly addresses a range of disclosure difficulties and supports autonomy. Moreover, the peer techniques utilized, in addition to the almost homogeneous group characteristics, accelerate positive confidence, hope and belongingness. Adolescent HIV-support groups therefore have the capacity to both improve and worsen sexual health outcomes for a young adolescent who is out of school. Communication interventions that seek to increase awareness about ‘self’ should therefore be emphasized more than just fostering collective action. Such interventions should be sensitive to context and gender. In addition, facilitative support supervision by close and trusted health care providers, most preferably Village Health Teams (who are often community-elected volunteers), would help to follow up, mentor, encourage and advise these young adolescents in matters involving sexuality and health outcomes. HIV/AIDS prevention programs have extended their efforts beyond an individual focus to those that foster collective action, but programs should rekindle interpersonal-level strategies to address the complexity of individual behavior.

Keywords: adolescent, HIV, support groups, Uganda

Procedia PDF Downloads 112
382 Bio-Functionalized Silk Nanofibers for Peripheral Nerve Regeneration

Authors: Kayla Belanger, Pascale Vigneron, Guy Schlatter, Bernard Devauchelle, Christophe Egles

Abstract:

A severe injury to a peripheral nerve leads to its degeneration and the loss of sensory and motor function. To this day, there is still no more effective alternative than the autograft, which has long been considered the gold standard for nerve repair. In order to overcome the numerous drawbacks of the autograft, tissue engineered biomaterials may be effective alternatives. Silk fibroin is a favorable biomaterial due to its many advantageous properties, such as its biocompatibility, its biodegradability, and its robust mechanical properties. In this study, bio-mimicking multi-channeled nerve guidance conduits made of aligned nanofibers achieved by electrospinning were functionalized with signaling biomolecules and were tested in vitro and in vivo for nerve regeneration support. Silk fibroin (SF) extracted directly from silkworm cocoons was put in solution at a concentration of 10 wt%. Poly(ethylene oxide) (PEO) was added to the resulting SF solution to increase solution viscosity, and the following three electrospinning solutions were made: (1) SF/PEO solution, (2) SF/PEO solution with nerve growth factor and ciliary neurotrophic factor, and (3) SF/PEO solution with nerve growth factor and neurotrophin-3. Each of these solutions was electrospun into a multi-layer architecture to obtain mechanically optimized aligned nanofibrous mats. For in vitro studies, the aligned fibers were treated to induce β-sheet formation and thoroughly rinsed to eliminate any remaining PEO. Each material was tested using rat embryo neuron cultures to evaluate neurite extension and the interaction with bio-functionalized or non-functionalized aligned fibers. For in vivo studies, the mats were rolled into 5 mm long multi-, micro-channeled conduits, then treated and thoroughly rinsed. Each conduit was subsequently implanted to bridge a severed rat sciatic nerve. 
The effectiveness of nerve repair over a period of 8 months was extensively evaluated by cross-referencing electrophysiological, histological, and movement analysis results to comprehensively track the progression of nerve repair. In vitro results show a more favorable interaction between growing neurons and bio-functionalized silk fibers compared to pure silk fibers. Neurites can also be seen extending unidirectionally along the alignment of the nanofibers, which confirms a guidance effect of the electrospun material. The in vivo study produced positive results for the regeneration of the sciatic nerve over the length of the study, showing contrasts between the bio-functionalized material and the non-functionalized material along with comparisons to the experimental control. Nerve regeneration was evaluated not only by histological analysis, but also by electrophysiological assessment and motion analysis of two separate natural movements. By studying these three components in parallel, the most comprehensive evaluation of nerve repair for the conduit designs can be made, which can therefore more accurately depict their overall effectiveness. This work was supported by La Région Picardie and FEDER.

Keywords: electrospinning, nerve guidance conduit, peripheral nerve regeneration, silk fibroin

Procedia PDF Downloads 222
381 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been applied less. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters. 
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time by a factor of 180 compared to software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed; these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
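As a hedged sketch of the kind of matrix operations involved, the snippet below reconstructs per-taxel forces by least squares under an assumed linear sensor model; the coupling matrix and force values are illustrative, and the paper's actual model-driven per-taxel 2D optimization is not reproduced here.

```python
import numpy as np

def reconstruct_forces(stress, response):
    """Least-squares reconstruction of per-taxel forces from a
    flattened normal-stress reading, assuming the linear sensor
    model stress = response @ forces. This is a simplification of
    the model-driven per-taxel optimization described above."""
    forces, *_ = np.linalg.lstsq(response, stress, rcond=None)
    return forces

# toy 3-taxel example with an assumed coupling matrix (illustrative)
R = np.array([[1.0, 0.1, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.1, 1.0]])
true_f = np.array([0.5, -0.2, 0.8])
measured = R @ true_f
est = reconstruct_forces(measured, R)
```

On an FPGA, the matrix-vector products inside such a solve are exactly the operations that parallelize well across taxels, which is the motivation stated in the abstract.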

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 168
380 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz

Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a drug used as first-line treatment of AIDS. However, it has poor aqueous solubility and wettability, presenting problems in gastrointestinal absorption and bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SDs). Therefore, this study aimed to characterize EFV SDs with the polymers PVP-K30, PVPVA 64 and Soluplus® in order to find an optimal formulation to compose a future pharmaceutical product for AIDS therapy. Initially, physical mixtures (PMs) and SDs with the polymers were obtained containing 10, 20, 50 and 80% of drug (w/w) by the solvent method. The best SD formulation was selected by in vitro dissolution testing. Finally, the chosen drug-carrier systems, in all ratios obtained, were analyzed by the following techniques: Differential Scanning Calorimetry (DSC), polarization microscopy, Scanning Electron Microscopy (SEM) and absorption spectrophotometry in the infrared region (IR). From the dissolution profiles of EFV, the PMs and the SDs, the Area Under the Curve (AUC) values were calculated. The data showed that the AUC of all PMs is greater than that of EFV alone, a result of the hydrophilic properties of the polymers, which decrease the surface tension between the drug and the dissolution medium and thereby increase the wettability of the drug. In parallel, the SDs with the highest AUC values were found to be those with the greatest polymer content (only 10% drug). As the drug content increased, the AUC values either decreased or remained statistically similar. The AUC values of the SDs with the three different polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SDs did not show the endothermic event characteristic of the drug melting process, suggesting that the EFV was converted to its amorphous state. 
Polarized light microscopy showed significant birefringence in the PMs, but not in the SD films, again suggesting the conversion of the drug from the crystalline to the amorphous state. In electron micrographs of all PMs, regardless of the drug percentage, the crystal structure of EFV was clearly detectable. Moreover, in electron micrographs of the SDs at the different ratios investigated, particles of irregular size and morphology were observed, along with an extensive change in the appearance of the polymer, making it impossible to differentiate the two components. The IR spectra of the PMs correspond to the overlap of the polymer and EFV bands, indicating that there is no interaction between them, unlike the spectra of all SDs, which showed complete disappearance of the band related to the axial deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV, since SD PVPVA 64-EFV 10% was the best system at delaying drug crystallization, reaching higher levels of supersaturation.
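The AUC ranking used above to compare dissolution profiles can be sketched with the trapezoidal rule; the time points and percentage-released values below are hypothetical, not the study's measurements.

```python
def dissolution_auc(times, released):
    """Area under a dissolution profile (time vs. % drug released)
    computed with the trapezoidal rule, as used to rank the
    formulations by dissolution performance."""
    auc = 0.0
    for i in range(1, len(times)):
        auc += 0.5 * (released[i] + released[i - 1]) * (times[i] - times[i - 1])
    return auc

# hypothetical % released at 0, 15, 30 and 60 min (not the study's data)
t = [0, 15, 30, 60]
sd_profile = [0, 55, 80, 95]   # illustrative solid-dispersion curve
pm_profile = [0, 20, 35, 50]   # illustrative physical-mixture curve
```

A formulation that releases drug faster accumulates a larger AUC over the same time window, which is how the SDs were ranked against the PMs and the pure drug.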

Keywords: characterization, dissolution, Efavirenz, solid dispersions

Procedia PDF Downloads 611
379 Fermented Fruit and Vegetable Discard as a Source of Feeding Ingredients and Functional Additives

Authors: Jone Ibarruri, Mikel Manso, Marta Cebrián

Abstract:

A large amount of food is lost or discarded in the world every year. In addition, in the last decades an increasing demand for new alternative and sustainable sources of proteins and other valuable compounds has been observed in the food and feed sectors, and, therefore, the use of food by-products as nutrients for these purposes is very attractive from an environmental and economic point of view. However, the direct use of discarded fruit and vegetables, which in general present a low protein content, is of limited interest as a feed ingredient except as a source of fiber for ruminants. Especially in aquaculture, several alternatives to fish meal and other vegetable protein sources have been extensively explored due to the scarcity of fish stocks and the unsustainability of fishing for these purposes. Fish mortality is also of great concern in this sector, as this problem greatly reduces its economic feasibility. Thus, the development of new functional and natural ingredients that could reduce the need for vaccination is also of great interest. In this work, several fermentation tests were carried out at lab scale using a selected mixture of fruit and vegetable discards from a wholesale market located in the Basque Country, to increase their protein content and also to produce bioactive extracts that could be used as additives in aquaculture. Fruit and vegetable mixtures (60/40, w/w) were centrifuged for humidity reduction and crushed to 2-5 mm particle size. Samples were inoculated with a selected Rhizopus oryzae strain and fermented for 7 days under controlled conditions (humidity between 65 and 75% and 28 ºC) in Petri plates (120 mm), in triplicate. The results indicated that the final fermented product presented a twofold protein content (from 13 to 28% d.w.). The fermented product was further processed to determine its possible functionality as a feed additive. 
Extraction tests were carried out to obtain an ethanolic extract (60:40 ethanol:water, v/v) and a remaining biomass that could also find applications in the food or feed sectors. The extract presented a polyphenol content of about 27 mg GAE/g d.w. with an antioxidant activity of 8.4 mg TEAC/g d.w. The remaining biomass is mainly composed of fiber (51%), protein (24%) and fat (10%). The extracts also presented antibacterial activity according to the results of agar diffusion and Minimum Inhibitory Concentration (MIC) tests against several food and fish pathogen strains. In vitro digestibility was also assessed to obtain preliminary information about the expected effect of the extraction procedure on the digestibility of the fermented product. First results indicated that the biomass remaining after extraction does not seem to improve digestibility in comparison to the initial fermented product. These preliminary results show that fermented fruit and vegetables can be a useful source of functional ingredients for aquaculture applications and a substitute for other protein sources in the feed sector. Further validation will also be carried out through in vivo tests with trout and bass.

Keywords: fungal solid state fermentation, protein increase, functional extracts, feed ingredients

Procedia PDF Downloads 44
378 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimates are used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safety and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used simplified version of Bernardi's equation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters used in a first-order lumped thermal model: the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static (no current flowing through the cell) and dynamic (current flowing through the cell) tests are conducted in which the HFS measures the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the heat generation predicted by Bernardi's equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient rather than the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi's equation for total heat generation) and compared with experimental temperature data measured with a T-type thermocouple. At the end of this work, a critical review of the results and the possible reasons for mismatch is reported. 
The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi's simplified equation. On the one hand, when using Bernardi's simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, when the heat generation is measured indirectly with the HFS, the resulting error in the temperature prediction is at most 0.28 ºC, in contrast with 1.38 ºC for Bernardi's simplified equation. This illustrates the limitations of Bernardi's simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi's equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi's equation predicts no losses after the charging or discharging current is cut; however, the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi's equation.
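A minimal sketch of the kind of Bernardi estimate discussed above is given below; sign conventions for the current and the entropic term vary across sources, and all numeric values are illustrative rather than taken from the study.

```python
def bernardi_heat(current, v_terminal, v_ocv, temp_k=298.15, docv_dt=0.0):
    """Heat generation [W] from a common form of Bernardi's equation:
        Q = I*(V_ocv - V) + I*T*(dV_ocv/dT)
    with I > 0 on discharge. The first term is the irreversible
    (overpotential) heat; the entropic term is often dropped in the
    simplified version, and its sign depends on the current
    convention. All values here are illustrative."""
    return current * (v_ocv - v_terminal) + current * temp_k * docv_dt

# 2 A discharge with a 150 mV overpotential, entropic term neglected
q = bernardi_heat(current=2.0, v_terminal=3.55, v_ocv=3.70)
```

The formula depends on V_ocv, which must be looked up from the estimated SoC; this is exactly the dependency the abstract identifies as a weakness of the simplified equation for low-capacity cells.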

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 343
377 A Comparative Study on the Influencing Factors of Urban Residential Land Prices Among Regions

Authors: Guo Bingkun

Abstract:

With the rapid development of China's social economy and the continuous improvement of urbanization, people's living standards have undergone tremendous changes, and more and more people are gathering in cities. The demand for urban housing has been greatly released in the past decade. The demand for housing and the construction land required for urban development have brought huge pressure to urban operations, and land prices have risen rapidly in the short term. On the other hand, a comparison of China's regions reveals great differences in urban socioeconomics and land prices among the eastern, central and western regions. Judging from the current overall market development, after more than ten years of housing market reform, the quality of housing and the efficiency of land use in Chinese cities have been greatly improved. However, the contradiction between the land demand of urban socio-economic development and land supply, especially for urban residential land, has not been effectively alleviated. Since land is closely linked to all aspects of society, changes in land prices are affected by many complex factors. Therefore, this paper studies the factors that may affect urban residential land prices, compares them among eastern, central and western cities, and identifies the main factors that determine the level of urban residential land prices. The paper provides guidance for urban managers in formulating land policies and alleviating the contradiction between land supply and demand, offers distinct ideas for improving urban planning, and promotes the improvement of urban management. The research focuses on residential land prices. Generally, the indicators for measuring land prices mainly include benchmark land prices, land price level values, and parcel land prices. 
However, considering the requirements of data continuity and representativeness, this paper uses residential land price level values to reflect the status of urban residential land prices. First, based on existing research at home and abroad, the paper considers both land supply and demand and, through basic theoretical analysis, identifies factors that may affect urban residential land prices, such as urban expansion, taxation, land reserves, population, and land returns, selecting representative indicators for each. Second, using conventional econometric methods, a model of the factors affecting urban residential land prices was established, the relationship between the influencing factors and residential land prices, and its intensity, was quantitatively analyzed, and the differences and similarities in these effects among the eastern, central and western regions were compared. The results show that the main factors affecting China's urban residential land prices are urban expansion, land use efficiency, taxation, population size, and residents' consumption. The main reasons for the difference in residential land prices among the eastern, central and western regions are differences in urban expansion patterns, industrial structures, urban carrying capacity and real estate development investment.
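The econometric step described above amounts to regressing residential land price levels on candidate drivers; a minimal ordinary-least-squares sketch on synthetic, noiseless data (not the paper's indicators) is:

```python
import numpy as np

def fit_price_model(X, y):
    """Ordinary least squares fit of residential land price levels on
    candidate drivers (e.g., urban expansion, land use efficiency,
    taxation, population size, residents' consumption). A column of
    ones adds the intercept. The data below are synthetic and
    noiseless; the paper's actual indicators are not reproduced."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [intercept, b1, b2, ...]

# five synthetic cities, two synthetic drivers
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 5.0]])
y = 2.0 + X @ np.array([0.8, -0.3])
coef = fit_price_model(X, y)
```

Comparing the fitted coefficients across separate regional subsamples (eastern, central, western) is one way to quantify the inter-regional differences the paper discusses.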

Keywords: urban housing, urban planning, housing prices, comparative study

Procedia PDF Downloads 22
376 Probing Scientific Literature Metadata in Search for Climate Services in African Cities

Authors: Zohra Mhedhbi, Meheret Gaston, Sinda Haoues-Jouve, Julia Hidalgo, Pierre Mazzega

Abstract:

In the current context of climate change, supporting national and local stakeholders in making climate-smart decisions is necessary but still underdeveloped in many countries. To overcome this problem, the Global Framework for Climate Services (GFCS), implemented under the aegis of the United Nations in 2012, has initiated many programs in different countries. The GFCS contributes to the development of climate services, an instrument based on the production and transfer of scientific climate knowledge to specific users such as citizens, urban planning actors, or agricultural professionals. As cities concentrate economic, social and environmental issues that make them more vulnerable to climate change, the New Urban Agenda (NUA), adopted at Habitat III in October 2016, highlights the importance of paying particular attention to disaster risk management, climate and environmental sustainability and urban resilience. In order to support the implementation of the NUA, the World Meteorological Organization (WMO) has identified the urban dimension as one of its priorities and has proposed a new tool, the Integrated Urban Services (IUS), for more sustainable and resilient cities. In the countries of the Global South, climate services remain underdeveloped, which can be partially explained by problems of economic financing. In addition, it is often difficult to make climate change a priority in urban planning, given the more traditional urban challenges these countries face, such as massive poverty and high population growth. Climate services and Integrated Urban Services, particularly in African cities, are expected to contribute to the sustainable development of cities. 
These tools will help promote the acquisition of meteorological and socio-ecological data on urban transformations, encourage coordination between national and local institutions providing various sectoral urban services, and should contribute to the achievement of the objectives defined by the United Nations Framework Convention on Climate Change (UNFCCC), the Paris Agreement, and the Sustainable Development Goals. To assess the state of the art on these various points, the Web of Science metadatabase is queried. With a query combining the keywords "climate*" and "urban*", more than 24,000 articles are identified, the source of more than 40,000 distinct keywords (including synonyms and acronyms) which finely mesh the conceptual field of research. The occurrence of one or more names of the 514 African cities of more than 100,000 inhabitants, or of African countries, reduces this base to a smaller corpus of about 1,410 articles (2,990 keywords); 41 countries and 136 African cities are cited. The lexicometric analysis of the article metadata and the analysis of structural indicators (various centralities) of the networks induced by the co-occurrence of expressions related more specifically to climate services show the development potential of these services, identify the gaps which remain to be filled for their implementation, and allow a comparison of the diversity of national and regional situations with regard to these services.
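The co-occurrence networks underlying the lexicometric analysis can be sketched as pair counts over per-article keyword lists; the records below are toy stand-ins for the Web of Science metadata, not actual query results.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count keyword co-occurrences across article records: the basic
    step behind building the co-occurrence networks whose centrality
    indicators are analyzed above. Returns a Counter over sorted
    keyword pairs (the edges of the network, weighted by count)."""
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# toy records standing in for Web of Science article keywords
records = [
    ["climate services", "urban planning", "africa"],
    ["climate services", "urban resilience"],
    ["urban planning", "africa"],
]
net = cooccurrence(records)
```

The resulting weighted pairs define a graph on which centrality measures (degree, betweenness, and so on) can then be computed with a network library.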

Keywords: African cities, climate change, climate services, integrated urban services, lexicometry, networks, urban planning, web of science

Procedia PDF Downloads 168
375 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers

Authors: B. Neethu, Diptesh Das

Abstract:

The present study investigates the performance of a semi-active controller using magneto-rheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters and noisy measurements. These problems need to be tackled in order to design and develop controllers that will perform efficiently in such complex systems. The sliding mode algorithm can accommodate uncertainty and imprecision better than the other algorithms mentioned so far, owing to its inherent robustness and its ability to cope with parameter uncertainties and imprecisions. A sliding mode control algorithm is therefore adopted in the present study for its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force. The function of the voltage controller is to command the damper to produce the desired force. The clipped-optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control scheme that can effectively control the responses of the bridge under real earthquake ground motions. 
A lumped-mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state-space form. The effectiveness of the MR dampers is studied through analytical simulations by subjecting the bridge to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, in order to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen in such a way that all possible characteristic variations are accommodated: seven are near-field and seven are far-field, and the records span low-, medium-, and high-frequency content. The responses of the controlled bridge are compared with those of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding mode based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
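As a minimal sketch of the state-space formulation, the snippet below integrates an uncontrolled two-degree-of-freedom lumped-mass model under a constant ground acceleration. All mass, stiffness, and damping values are placeholders for illustration, not the bridge properties used in the study.

```python
import numpy as np

# Illustrative 2-DOF lumped-mass model (placeholder properties).
m = np.diag([2.0e5, 2.0e5])                 # mass matrix, kg
k = np.array([[4.0e7, -2.0e7],
              [-2.0e7, 4.0e7]])             # stiffness matrix, N/m
c = 0.02 * k                                # stiffness-proportional damping

n = m.shape[0]
m_inv = np.linalg.inv(m)
# State vector x = [displacements; velocities], with x_dot = A x + E a_g.
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-m_inv @ k, -m_inv @ c]])
E = np.concatenate([np.zeros(n), -np.ones(n)])  # ground-motion influence

def step(x, a_g, dt=1e-3):
    """One explicit Euler step of the state-space equations of motion."""
    return x + dt * (A @ x + E * a_g)

x = np.zeros(2 * n)
for _ in range(100):                        # 0.1 s under a_g = 1 m/s^2
    x = step(x, a_g=1.0)
```

A production analysis would use a dedicated integrator (e.g. Newmark-beta or `scipy.integrate.solve_ivp`) and add the MR damper forces to the state equation as an extra input term; the explicit Euler loop here only illustrates the structure of the formulation.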

Keywords: bridge, semi-active control, sliding mode control, MR damper

Procedia PDF Downloads 109
374 Radiofrequency and Near-Infrared Responsive Core-Shell Multifunctional Nanostructures Using Lipid Templates for Cancer Theranostics

Authors: Animesh Pan, Geoffrey D. Bothun

Abstract:

With the development of nanotechnology, research in multifunctional delivery systems has gained new pace and dimension. An emerging challenge is to design an all-in-one delivery system that can be used for multiple purposes, including tumor targeting therapy, radio-frequency (RF-), near-infrared (NIR-), light-, or pH-induced controlled release, photothermal therapy (PTT), photodynamic therapy (PDT), and medical diagnosis. In this regard, various inorganic nanoparticles (NPs) show great potential as 'functional components' because of their fascinating and tunable physicochemical properties and the possibility of multiple theranostic modalities from individual NPs. Magnetic, luminescent, and plasmonic properties are the three most extensively studied and, more importantly, biomedically exploitable properties of inorganic NPs. Although any two of these functionalities have been successfully combined, integrating all three in one system has remained a challenge. With this in mind, the controlled design of complex colloidal nanoparticle systems is one of the most significant challenges in nanoscience and nanotechnology, and systematic, well-planned studies providing better insight are needed. We report a multifunctional liposome-based delivery platform loaded with drug and iron-oxide magnetic nanoparticles (MNPs) and bearing a gold shell on the liposome surface, synthesized using a lipid-polyelectrolyte (layersome) templating technique. The MNPs and the anti-cancer drug doxorubicin (DOX) were co-encapsulated inside liposomes composed of zwitterionic phosphatidylcholine and anionic phosphatidylglycerol using the reverse phase evaporation (REV) method. The liposomes were coated with a positively charged polyelectrolyte (poly-L-lysine) to enrich the interface with gold anions, exposed to a reducing agent to form a gold nanoshell, and then capped with thiol-terminated polyethylene glycol (SH-PEG2000).
The core-shell nanostructures were characterized by several techniques: UV-Vis/NIR scanning spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). This multifunctional system achieves a variety of functions, such as radiofrequency (RF)-triggered release, chemo-hyperthermia, and NIR laser-triggered photothermal therapy. Herein, we highlight some of the remaining major design challenges alongside preliminary studies assessing the therapeutic objectives. We demonstrate an efficient loading and delivery system that induces significant cell death in human cancer cells (A549), confirming its therapeutic capabilities. Coupling RF and NIR excitation with the doxorubicin-loaded core-shell nanostructure helped secure targeted and controlled drug release to the cancer cells. With their multimodal imaging and therapeutic capabilities, the present core-shell multifunctional systems would be eminent candidates for cancer theranostics.

Keywords: cancer theranostics, multifunctional nanostructure, photothermal therapy, radiofrequency targeting

Procedia PDF Downloads 106