Search results for: feature points
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3895

445 The Negative Implications of Childhood Obesity and Malnutrition on Cognitive Development

Authors: Stephanie Remedios, Linda Veronica Rios

Abstract:

Background. Pediatric obesity is a serious health problem linked to multiple physical diseases and ailments, including diabetes, heart disease, and joint issues. While research has shown that pediatric obesity can bring about an array of physical illnesses, less is known about how the condition affects children’s cognitive development. With childhood overweight and obesity prevalence rates on the rise, it is essential to understand the scope of their cognitive consequences. The present review of the literature tested the hypothesis that poor physical health, such as childhood obesity or malnutrition, negatively impacts a child’s cognitive development. Methodology. A systematic review was conducted to determine the relationship between poor physical health and lower cognitive functioning in children ages 4-16. Electronic databases were searched for studies from the past ten years. The following databases were used: Science Direct, FIU Libraries, and Google Scholar. Inclusion criteria consisted of peer-reviewed academic articles written in English from 2012 to 2022 that analyzed the effects of childhood malnutrition and obesity on cognitive development. A total of 17,000 articles were obtained, of which 16,987 were excluded for not addressing the cognitive implications exclusively. Of the acquired articles, 13 were retained. Results. Research suggested a significant connection between diet and cognitive development. Both diet and physical activity are strongly correlated with higher cognitive functioning. Cognitive domains explored in this work included learning, memory, attention, inhibition, and impulsivity. IQ scores were also considered objective representations of overall cognitive performance. Studies showed that physical activity benefits cognitive development, primarily executive functioning and language development. 
Additionally, children suffering from pediatric obesity or malnutrition were found to score 3-10 points lower on IQ tests than healthy same-aged children. Conclusion. This review provides evidence that physical activity and overall physical health, including appropriate diet and nutritional intake, have beneficial effects on cognitive outcomes. The primary conclusion of this research is that childhood obesity and malnutrition have detrimental effects on cognitive development in children, primarily on learning outcomes. Assuming childhood obesity and malnutrition rates continue their current trend, it is essential to understand the complete physical and psychological implications of obesity and malnutrition in pediatric populations. Given the limitations encountered in our research, further studies are needed to evaluate the areas of cognition affected during childhood.

Keywords: childhood malnutrition, childhood obesity, cognitive development, cognitive functioning

444 Circular Economy Initiatives in Denmark for the Recycling of Household Plastic Wastes

Authors: Rikke Lybæk

Abstract:

This paper delves into the intricacies of recycling household plastic waste within Denmark, employing an exploratory case study methodology to shed light on the technical, strategic, and market dynamics of the plastic recycling value chain. Focusing on circular economy principles, the research identifies critical gaps and opportunities in recycling processes, particularly regarding plastic packaging waste derived from households, with a notable absence in food packaging reuse initiatives. The study uncovers the predominant practice of downcycling in the current value chain, underscoring a disconnect between the potential for high-quality plastic recycling and the market's readiness to embrace such materials. Through detailed examination of three leading companies in Denmark's plastic industry, the paper highlights the existing support for recycling initiatives, yet points to the necessity of assured quality in sorted plastics to foster broader adoption. The analysis further explores the importance of reuse strategies to complement recycling efforts, aiming to alleviate the pressure on virgin feedstock. The paper ventures into future perspectives, discussing different approaches such as biological degradation methods, watermark technology for plastic traceability, and the potential for bio-based and PtX plastics. These avenues promise not only to enhance recycling efficiency but also to contribute to a more sustainable circular economy by reducing reliance on virgin materials. Despite the challenges outlined, the research demonstrates a burgeoning market for recycled plastics within Denmark, propelled by both environmental considerations and customer demand. However, the study also calls for a more harmonized and effective waste collection and sorting system to elevate the quality and quantity of recyclable plastics. 
By casting a spotlight on successful case studies and potential technological advancements, the paper advocates for a multifaceted approach to plastic waste management, encompassing not only recycling but also innovative reuse and reduction strategies to foster a more sustainable future. In conclusion, this study underscores the urgent need for innovative, coordinated efforts in the recycling and management of plastic waste to move towards a more sustainable and circular economy in Denmark. It calls for the adoption of comprehensive strategies that include improving recycling technologies, enhancing waste collection systems, and fostering a market environment that values recycled materials, thereby contributing significantly to environmental sustainability goals.

Keywords: case study, circular economy, Denmark, plastic waste, sustainability, waste management

443 Rheological Characterization of Polysaccharide Extracted from Camelina Meal as a New Source of Thickening Agent

Authors: Mohammad Anvari, Helen S. Joyner (Melito)

Abstract:

Camelina sativa (L.) Crantz is an oilseed crop currently used for the production of biofuels. However, the low price of diesel and gasoline has made camelina an unprofitable crop for farmers, leading to declining camelina production in the US. Hence, the ability to utilize camelina byproduct (defatted meal) after oil extraction would be a pivotal factor for promoting the economic value of the plant. Camelina defatted meal is rich in proteins and polysaccharides. The great diversity in the polysaccharide structural features provides a unique opportunity for use in food formulations as thickeners, gelling agents, emulsifiers, and stabilizers. There is currently a great degree of interest in the study of novel plant polysaccharides, as they can be derived from readily accessible sources and have potential application in a wide range of food formulations. However, there are no published studies on the polysaccharide extracted from camelina meal, and its potential industrial applications remain largely underexploited. Rheological properties are a key functional feature of polysaccharides and are highly dependent on the material composition and molecular structure. Therefore, the objective of this study was to evaluate the rheological properties of the polysaccharide extracted from camelina meal at different conditions to obtain insight on the molecular characteristics of the polysaccharide. Flow and dynamic mechanical behaviors were determined under different temperatures (5-50°C) and concentrations (1-6% w/v). Additionally, the zeta potential of the polysaccharide dispersion was measured at different pHs (2-11) and a biopolymer concentration of 0.05% (w/v). Shear rate sweep data revealed that the camelina polysaccharide displayed shear thinning (pseudoplastic) behavior, which is typical of polymer systems. 
The polysaccharide dispersion (1% w/v) showed no significant changes in viscosity with temperature, which makes it a promising ingredient in products requiring texture stability over a range of temperatures. However, the viscosity increased significantly with increased concentration, indicating that camelina polysaccharide can be used in food products at different concentrations to produce a range of textures. Dynamic mechanical spectra showed similar trends. The temperature had little effect on viscoelastic moduli. However, moduli were strongly affected by concentration: samples exhibited concentrated solution behavior at low concentrations (1-2% w/v) and weak gel behavior at higher concentrations (4-6% w/v). These rheological properties can be used for designing and modeling of liquid and semisolid products. Zeta potential affects the intensity of molecular interactions and molecular conformation and can alter solubility, stability, and eventually, the functionality of the materials as their environment changes. In this study, the zeta potential value significantly decreased from 0.0 to -62.5 as pH increased from 2 to 11, indicating that pH may affect the functional properties of the polysaccharide. The results obtained in the current study showed that camelina polysaccharide has significant potential for application in various food systems and can be introduced as a novel anionic thickening agent with unique properties.
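The shear-thinning behavior reported above is commonly quantified with the Ostwald-de Waele (power-law) model, where a flow behavior index below one indicates pseudoplasticity. The sketch below fits that model to synthetic viscosity data (illustrative values only, not measurements from this study):

```python
import numpy as np

# Ostwald-de Waele (power-law) model for shear-thinning fluids:
#   eta = K * gamma_dot**(n - 1),  n < 1 => pseudoplastic behavior.
# Synthetic apparent-viscosity data (illustrative, not from the study).
gamma_dot = np.array([0.1, 1.0, 10.0, 100.0])  # shear rate, 1/s
eta = np.array([12.0, 4.2, 1.5, 0.52])         # apparent viscosity, Pa.s

# Linear regression in log-log space:
#   log(eta) = log(K) + (n - 1) * log(gamma_dot)
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n = slope + 1          # flow behavior index
K = np.exp(intercept)  # consistency coefficient, Pa.s^n

print(f"n = {n:.2f} (n < 1 indicates shear thinning), K = {K:.2f} Pa.s^n")
```

A fitted n well below 1, as here, is the quantitative signature of the pseudoplastic behavior the abstract describes.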

Keywords: camelina meal, polysaccharide, rheology, zeta potential

442 Identification of Clinical Characteristics from Persistent Homology Applied to Tumor Imaging

Authors: Eashwar V. Somasundaram, Raoul R. Wadhwa, Jacob G. Scott

Abstract:

The use of radiomics to measure geometric properties of tumor images, such as size, surface area, and volume, has been invaluable in assessing cancer diagnosis, treatment, and prognosis. In addition to analyzing geometric properties, radiomics would benefit from measuring topological properties using persistent homology. Intuitively, features uncovered by persistent homology may correlate with tumor structural features. One example is necrotic cavities (corresponding to 2D topological features), which are markers of very aggressive tumors. We developed a data pipeline in R that clusters tumor images based on persistent homology; this clustering is used to identify meaningful clinical distinctions between tumors and possibly new relationships not captured by established clinical categorizations. A preliminary analysis was performed on 16 Magnetic Resonance Imaging (MRI) breast tissue segments downloaded from the 'Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and Molecular Analysis' (I-SPY TRIAL or ISPY1) collection in The Cancer Imaging Archive. Each segment represents a patient’s breast tumor prior to treatment. The ISPY1 dataset also provided estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) status data. A persistent homology matrix of features up to dimension 2 was calculated for each MRI segmentation. Wasserstein distances were then calculated between all pairs of tumor image persistent homology matrices to create a distance matrix for each feature dimension. Since Wasserstein distances were calculated for 0-, 1-, and 2-dimensional features, three hierarchical clusterings were constructed. The adjusted Rand index was used to see how well the clusters corresponded to the ER/PR/HER2 status of the tumors. Triple-negative cancers (negative status for all three receptors) significantly clustered together in the 2-dimensional features dendrogram (adjusted Rand index of .35, p = .031). 
It is known that having a triple-negative breast tumor is associated with aggressive tumor growth and poor prognosis when compared to non-triple negative breast tumors. The aggressive tumor growth associated with triple-negative tumors may have a unique structure in an MRI segmentation, which persistent homology is able to identify. This preliminary analysis shows promising results in the use of persistent homology on tumor imaging to assess the severity of breast tumors. The next step is to apply this pipeline to other tumor segment images from The Cancer Imaging Archive at different sites such as the lung, kidney, and brain. In addition, whether other clinical parameters, such as overall survival, tumor stage, and tumor genotype data are captured well in persistent homology clusters will be assessed. If analyzing tumor MRI segments using persistent homology consistently identifies clinical relationships, this could enable clinicians to use persistent homology data as a noninvasive way to inform clinical decision making in oncology.
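The clustering step of the pipeline described above can be sketched as follows. This is in Python rather than the authors' R pipeline, and as a simplification each persistence diagram is reduced to its 1-D distribution of feature lifetimes before computing Wasserstein distances (the study uses diagram-level distances); all data below are synthetic:

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

# Toy "diagrams" for 6 hypothetical tumors: each diagram is reduced to its
# 1-D distribution of feature lifetimes (death - birth).
rng = np.random.default_rng(0)
lifetimes = [rng.gamma(2.0, 0.5, 30) for _ in range(3)] + \
            [rng.gamma(2.0, 2.0, 30) for _ in range(3)]
labels_true = [0, 0, 0, 1, 1, 1]  # e.g., triple-negative vs. not (hypothetical)

# Pairwise Wasserstein distance matrix.
n = len(lifetimes)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(lifetimes[i], lifetimes[j])

# Hierarchical clustering on the distance matrix, then compare the cluster
# assignment against the clinical labels with the adjusted Rand index.
Z = linkage(squareform(D), method="average")
labels_pred = fcluster(Z, t=2, criterion="maxclust")
ari = adjusted_rand_score(labels_true, labels_pred)
print(f"adjusted Rand index = {ari:.2f}")
```

An adjusted Rand index near 1 indicates that the topology-driven clusters recover the clinical grouping; a value near 0 indicates chance-level agreement.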

Keywords: cancer biology, oncology, persistent homology, radiomics, topological data analysis, tumor imaging

441 Effect of Starch and Plasticizer Types and Fiber Content on Properties of Polylactic Acid/Thermoplastic Starch Blend

Authors: Rangrong Yoksan, Amporn Sane, Nattaporn Khanoonkon, Chanakorn Yokesahachart, Narumol Noivoil, Khanh Minh Dang

Abstract:

Polylactic acid (PLA) is the most commercially available bio-based and biodegradable plastic at present. PLA has been used in plastics-related industries, including single-use containers and disposable, environmentally friendly packaging, owing to its renewability, compostability, biodegradability, and safety. Although PLA demonstrates reasonably good optical, physical, mechanical, and barrier properties comparable to those of existing petroleum-based plastics, its brittleness, mold shrinkage, and price are points of concern for the production of rigid and semi-rigid packaging. Blending PLA with other bio-based polymers, including thermoplastic starch (TPS), is an alternative, not only to achieve a completely bio-based plastic but also to reduce the brittleness, shrinkage during molding, and production cost of PLA-based products. TPS is a material produced mainly from starch, which is cheap, renewable, biodegradable, compostable, and non-toxic. It is commonly prepared by plasticization of starch under heat and shear force. Although glycerol is one of the most widely used plasticizers for preparing TPS, its migration causes surface stickiness in TPS products. In some cases, mixed plasticizers or natural fibers have been applied to impede the retrogradation of starch or reduce the migration of glycerol. The introduction of fibers into TPS-based materials can also reinforce the polymer matrix. Therefore, the objective of the present research is to study the effect of starch type (i.e., native starch and phosphate starch), plasticizer type (i.e., glycerol and xylitol at glycerol-to-xylitol weight ratios of 100:0, 75:25, 50:50, 25:75, and 0:100), and fiber content (i.e., in the range of 1-25 wt%) on the properties of PLA/TPS blends and composites. PLA/TPS blends and composites were prepared using a twin-screw extruder and then converted into dumbbell-shaped specimens using an injection molding machine. 
The PLA/TPS blends prepared with phosphate starch showed higher tensile strength and stiffness than those prepared with native starch. In contrast, the blends from native starch exhibited higher extensibility and heat distortion temperature (HDT) than those from the modified starch. Increasing the xylitol content resulted in enhanced tensile strength, stiffness, and water resistance, but decreased extensibility and HDT of the PLA/TPS blend. The tensile properties and hydrophobicity of the blend could be improved by incorporating silane-treated jute fibers.

Keywords: polylactic acid, thermoplastic starch, jute fiber, composite, blend

440 Torn Between the Lines of Border: The Pakhtuns of Pakistan and Afghanistan in Search of Identity

Authors: Priyanka Dutta Chowdhury

Abstract:

A globalized, connected world, calling loudly for a composite culture, has still not been able to erase the pain of a desired nationalism based on cultural identity. In the South Asian region, the arbitrary drawing of boundaries without taking ethnicity into consideration has always challenged the very basis of the existence of certain groups. The urge to reunify with fellow brethren on both sides of a border has repeatedly produced chaos and schism in the countries of this region. Sometimes this became a tool to bargain with the state and find a favorable position in the power structure on the basis of cultural identity. In Pakistan and Afghanistan, the Pakhtuns, who are divided across the border between the two countries, have posed various challenges since the creation of Pakistan and hampered the growth of a consolidated nation. The Pakhtuns, or Pashtuns, of both Pakistan and Afghanistan share a strong cultural affinity that blurs their physical separation and calls for a nationalism based on this ethnic affiliation. Both sides wanted to create Pakhtunistan, unifying all the Pakhtuns of the region. For long, this group refused to accept the Durand Line separating the two. This was an area of concern especially for the Pakhtuns of Pakistan, torn between the choices of joining Afghanistan, creating a nation of their own, or remaining a part of Pakistan. This ethnic issue became a bone of contention between the two countries. Later, though well absorbed and recognized in their respective countries, they fought for their identity and claimed a dominant position in the politics of the two nations. Because of the porous borders, influxes of refugees were frequent, especially during the Afghan wars, and later many extremist groups, notably the Taliban, were born from them. In the recent string of events, when the Taliban, who are mostly ethnically Pakhtun, came to power in Afghanistan, a wave of sympathy arose in Pakistan. 
This strengthened the position of the religious Pakhtuns across the border. It should be noted here that a fragmented Pakhtun identity, split between the religious and the secular, was clearly visible, each side voicing its claim to a place in the political hierarchy of the country with a vision distinct from the other, especially in Pakistan. In this context, the paper tries to evaluate the reasons for this cultural turmoil between the countries and this ethnic group. It also aims to analyze how identity politics still holds its relevance in the contemporary world. Additionally, the recent trend of fragmented identity points towards the instrumentalization of this ethnic group, which is engaged in a bargaining process with the state for a robust position in the power structure. In the end, the paper aims to deduce from the theoretical traditions of identity politics whether this is a primordial or a situational tool for gaining visibility in the power structure of the contemporary world.

Keywords: cultural identity, identity politics, instrumentalization of identity, Pakhtuns, power structure

439 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators are investigating the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just as reading, writing, and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no explicit reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two classes of the second grade. 
The interventions were designed with respect to the thematic units of the physical science curriculum, as part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6- to 7-year-old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research were met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable character, its child-friendliness, its appropriateness for the proposed purpose, its ability to monitor the user’s progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
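The object-oriented concepts the abstract names (classes, objects, attributes) can be illustrated with a minimal sketch. The game-character example below is hypothetical and is not taken from PhysGramming, which uses its own hybrid visual/text notation:

```python
# A class is a template for game characters; objects are concrete instances
# of that template; attributes are per-object properties.
class Animal:
    """Template for a game character."""
    def __init__(self, name, sound):
        self.name = name    # attribute: the character's name
        self.sound = sound  # attribute: the noise it makes

    def speak(self):
        return f"{self.name} says {self.sound}!"

# Two objects instantiated from the same class, each with its own
# attribute values.
cat = Animal("Cat", "meow")
dog = Animal("Dog", "woof")
print(cat.speak())
print(dog.speak())
```

Children working with such a schema encounter the class/object/attribute distinction through play, which is the pedagogical idea the abstract describes.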

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

438 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit control of the working conditions of structures and infrastructure through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard, such as earthquake-prone regions. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be embedded within concrete elements, transforming the structures themselves into sets of widespread sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite was obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of external stress or strain with the variation of certain electrical properties, such as electrical resistance or conductivity. Through the measurement of these electrical characteristics, the performance and working conditions of an element or a structure can be monitored. 
Among conductive carbon nanofillers, carbon nanotubes appear particularly promising for the realization of self-sensing cement-matrix materials. Some issues, related to nanofiller dispersion and to the influence of the amount of nano-inclusions in the cement matrix, need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mixture, the electrical properties of the hardened composites, and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
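The strain-from-resistance correlation at the heart of piezoresistive self-sensing is usually summarized by a gauge factor, GF = (dR/R0)/strain. The sketch below estimates it from synthetic resistance readings (illustrative values, not measurements from this work):

```python
import numpy as np

# Piezoresistive sensing in a nutshell: strain is inferred from the
# fractional change in electrical resistance via the gauge factor
#   GF = (dR / R0) / strain.
# Synthetic readings for illustration only.
strain = np.array([0.0, 50e-6, 100e-6, 150e-6, 200e-6])  # axial strain
R = np.array([1000.0, 1008.0, 1016.5, 1024.0, 1032.0])   # resistance, ohm

R0 = R[0]
dR_over_R0 = (R - R0) / R0

# Least-squares slope of dR/R0 against strain gives the gauge factor.
GF = np.polyfit(strain, dR_over_R0, 1)[0]
print(f"estimated gauge factor ~ {GF:.0f}")
```

A higher gauge factor means a larger, easier-to-measure resistance change per unit strain, which is why nanofiller dispersion and dosage, which control GF, matter so much here.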

Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring

437 An Analysis of LoRa Networks for Rainforest Monitoring

Authors: Rafael Castilho Carvalho, Edjair de Souza Mota

Abstract:

As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those based on Low Power Wide Area Network (LPWAN) technologies. Promising, reliable, secure, and with low energy consumption, LPWANs can connect thousands of IoT devices, and LoRa in particular is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, and the Amazon Rainforest in particular, is challenging for these technologies, requiring work to identify and validate their use in a real environment. To investigate the feasibility of deploying LPWANs for remote water quality monitoring of rivers in the Amazon region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both were implemented with Arduino boards and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real application the device must run without maintenance for long periods of time. 
With these constraints in mind, parameters such as the Spreading Factor (SF) and Coding Rate (CR), as well as different antenna heights and distances, were tuned to improve connectivity quality, measured by RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. At distances exceeding 200 m, communication soon proved difficult to establish due to the dense foliage and high humidity. The optimal SF-CR combinations were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, at a signal strength of approximately -120 dBm; these are the best settings found in this study so far. Rain and climate conditions imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa deployment must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
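One way to see why SF tuning matters for a battery-constrained transmitter is to estimate the packet time on air, which scales with transmit energy. The sketch below uses the standard time-on-air formula from the Semtech SX127x datasheet; the 20-byte payload and 125 kHz bandwidth are assumed values for illustration, not figures from the study:

```python
import math

def lora_time_on_air(payload_bytes, sf, cr, bw=125e3,
                     preamble=8, crc=True, explicit_header=True, ldro=False):
    """Approximate LoRa time on air in seconds (SX127x datasheet formula).
    cr is 1..4 for coding rates 4/5..4/8."""
    t_sym = (2 ** sf) / bw                      # symbol duration
    h = 0 if explicit_header else 1
    de = 1 if ldro else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * h
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

# Compare the two settings the study found best (SF8 CR4/5 vs. SF9 CR4/5)
# for a hypothetical 20-byte sensor payload.
for sf in (8, 9):
    toa_ms = lora_time_on_air(20, sf, cr=1) * 1e3
    print(f"SF{sf}, CR4/5, 20 B: ~{toa_ms:.0f} ms on air")
```

Each SF step roughly doubles the symbol duration, so SF9 costs close to twice the airtime (and transmit energy) of SF8 for the same payload, which is the trade-off against its better link robustness.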

Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest

436 Theory of Apokatástasis: "In This Way, While Paying Attention to Their Knowledge and Wisdom, Nonetheless, They Did Not Ask God about These Matters, as to Whether or Not They Are True..."

Authors: Pikria Vardosanidze

Abstract:

The term apokatástasis (Greek: ἀποκατάστασις) means "re-establishment", the universal restoration. The term dates back to antiquity: in Stoic thought it denoted the end of one cycle of the constantly evolving universe and the beginning of a new one, and in Christendom it was established by the Eastern Fathers and Origen as the return of the entire created world to a state of goodness. "Universal resurrection" means the resurrection of mankind after the second coming of Jesus Christ. The first thing the Savior will do immediately upon His glorious coming is raise the dead, for "the dead in Christ will rise first." God's life-giving action will apply to all the dead, but not with the same result. The action of God also extends to the living, whose bodies will be changed. The degree of glorification of the resurrected body will be commensurate with the spiritual life led: an unclean body will not be glorified, and its soul will not be happy. The resurrected body will be incorruptible, strong, and spiritual, but because of the action of the passions, all this will only bring suffering to an unclean body. The judgment concerns both the soul and the flesh. At the same time, Holy Scripture nowhere says that at the Last Judgment anyone will be able to change their own position. In connection with this dogmatic teaching, one of the greatest fathers of the Church, St. Gregory of Nyssa, held a different view. He points out that the miracle of the resurrection is so glorious and sublime that it exceeds our faith. There are two important circumstances: one is the reality of the resurrection itself, and the other is the manner of its fulfillment. Gregory of Nyssa grounds the first on the authority of Holy Scripture: Jesus Christ preached about the resurrection and also foretold many other events, all of which were later fulfilled. 
Gregory of Nyssa clarifies the question of the substantiality of good and evil and the relationship between them, noting that only good has self-subsistent existence, for it originates from God and exists eternally in Him. As for evil, it has no self-subsistent substance and, therefore, no real existence; it appears only from time to time through the free will of man. As the Father says, God is the supreme goodness that gives beings the power to exist; all who are without Him are non-existent. This opinion of the Father concerning universal apokatastasis derives from the thought of Origen. The teaching was rejected by resolution of the Fifth Ecumenical Council, whose ecclesiastical figures unanimously stated that the doctrine of universal salvation is not valid. For if the resurrection took place in this way, with all beings, including the evil spirit, restored, then the age-old struggle between good and evil, the coming judgment, and eternal torment, all of which Christian dogma acknowledges, would be rendered void.

Keywords: apokatastasis, orthodox doctrine, Gregory of Nyssa, eschatology

Procedia PDF Downloads 110
435 A Systematic Review of Efficacy and Safety of Radiofrequency Ablation in Patients with Spinal Metastases

Authors: Pascale Brasseur, Binu Gurung, Nicholas Halfpenny, James Eaton

Abstract:

Development of minimally invasive treatments in recent years provides a potential alternative to invasive surgical interventions which are of limited value to patients with spinal metastases due to short life expectancy. A systematic review was conducted to explore the efficacy and safety of radiofrequency ablation (RFA), a minimally invasive treatment in patients with spinal metastases. EMBASE, Medline and CENTRAL were searched from database inception to March 2017 for randomised controlled trials (RCTs) and non-randomised studies. Conference proceedings for ASCO and ESMO published in 2015 and 2016 were also searched. Fourteen studies were included: three prospective interventional studies, four prospective case series and seven retrospective case series. No RCTs or studies comparing RFA with another treatment were identified. RFA was followed by cement augmentation in all patients in seven studies and some patients (40-96%) in the remaining seven studies. Efficacy was assessed as pain relief in 13/14 studies with the use of a numerical rating scale (NRS) or a visual analogue scale (VAS) at various time points. Ten of the 13 studies reported a significant decrease in pain outcome, post-RFA compared to baseline. NRS scores improved significantly at 1 week (5.9 to 3.5, p < 0.0001; 8 to 4.3, p < 0.02 and 8 to 3.9, p < 0.0001) and this improvement was maintained at 1 month post-RFA compared to baseline (5.9 to 2.6, p < 0.0001; 8 to 2.9, p < 0.0003; 8 to 2.9, p < 0.0001). Similarly, VAS scores decreased significantly at 1 week (7.5 to 2.7, p=0.00005; 7.51 to 1.73, p < 0.0001; 7.82 to 2.82, p < 0.001) and this pattern was maintained at 1 month post-RFA compared to baseline (7.51 to 2.25, p < 0.0001; 7.82 to 3.3; p < 0.001). A significant pain relief was achieved regardless of whether patients had cement augmentation in two studies assessing the impact of RFA with or without cement augmentation on VAS pain scores. 
In these two studies, a significant decrease in pain scores was reported for patients receiving RFA alone and RFA plus cement at 1 week (4.3 to 1.7, p=0.0004 and 6.6 to 1.7, p=0.003, respectively) and at 15-36 months (7.9 to 4, p=0.008 and 7.6 to 3.5, p=0.005, respectively) after therapy. Few minor complications were reported; these included neural damage, radicular pain, vertebroplasty leakage, and lower limb pain/numbness. In conclusion, the efficacy and safety findings for RFA were consistently positive across prospective and retrospective studies, with reductions in pain and few procedural complications. However, the lack of control groups in the identified studies indicates the possibility of selection bias inherent in single-arm studies. Controlled trials exploring the efficacy and safety of RFA in patients with spinal metastases are warranted to provide robust evidence; the identified studies provide an initial foundation for such future trials.

Keywords: pain relief, radiofrequency ablation, spinal metastases, systematic review

Procedia PDF Downloads 172
434 Migration, Assimilation and Well-Being of Interstate Migrant Workers in Kerala: A Critical Assessment

Authors: Arun Perumbilavil Anand

Abstract:

It may no longer be just anecdotal that every twelfth person in Kerala is a migrant worker from outside the state. For the past few years, the state has been witnessing a large inflow of migrants from other states of India, a result of demographic transition and Gulf emigration. Initially, the migrants came from neighbouring states, but later the state began receiving migrants from distant parts of the country. Migrants have now become a decisive force in the state, and their increasing numbers have already started creating turbulence. Over the past years, the increasing involvement of migrants in unlawful and criminal activities has generated apprehension about their presence in the state. Moreover, Kerala society is now hosting not only first-generation migrants: an increase in second-generation migrants is making the situation more complex and diverse. In such a context, the study examines the issues migrants face concerning their assimilation and well-being in the host society. The study also looks into the factors that impede the assimilation process, along with migrants' perceptions of the host society and its people, and tries to bring out differences in levels of assimilation among migrants along the lines of religion, caste, state of origin, gender, duration of stay, and education. Methodology: The study is based on empirical findings from a primary survey of migrants employed in the Kanjikode industrial area of Kerala. The samples were selected through purposive sampling, and the study employed techniques such as observation, questionnaires, and in-depth interviews. The findings are based on interviews conducted with 100 migrants. Findings and Conclusion: The study was one of the first attempts of its kind to address the issues of assimilation and integration of interstate migrants working in Kerala.
As mentioned, the study brought out differences in levels of assimilation along the lines of different characteristics. It also located the importance of, and the role played by, peer groups and neighborhoods in accelerating the process of assimilation among migrants. As an extension, the study looked at the assimilation and educational issues of migrant children living in Kerala and found that the place of birth, the age at entry, and the peer group play a pivotal role in the assimilation process. On the basis of its findings, the study recommends incorporating the concept of inclusive education into the state educational system, with due emphasis on the needs of the marginalized. The study points out that, owing to the existing demographic conditions, the state will inevitably have to depend on migrant labor in the future. In such a context, the host community and the government should strive to create a conducive environment for the proper assimilation of migrants, which in turn can help fulfil the needs of both the migrants and the state.

Keywords: assimilation, integration, Kerala, migrant workers, well-being

Procedia PDF Downloads 142
433 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example

Authors: Wang Yang

Abstract:

Shenzhen is a modern city born of China's reform and opening-up policy, and its urban morphology has developed under the administration of the Chinese government. The city's planning paradigm is shaped primarily by spatial structure and human behavior: the urban agglomeration is divided into several groups and centers, while under this paradigm the general laws of city development have tended to be neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining improves the collection and cleaning of data such as business data, traffic data, and population data. Before data mining, government data were collected by traditional means and then analyzed in city-relationship research, which delayed the timeliness of urban studies; internet-based data, by contrast, update very quickly, which matters especially for the contemporary city. The city's points of interest (POIs) mined from the web serve as a data source affecting city design, while satellite remote sensing is used as a reference object; by conducting the analysis in both directions, the administrative paradigm of government is broken down and urban research is restored to observation. The use of data mining in urban analysis is therefore very important. Satellite remote sensing data for Shenzhen in July 2018, measured by the MODIS sensor, were used to perform land surface temperature inversion and to analyze the heat island distribution of Shenzhen. This article acquired and classified POI data for Shenzhen using data crawler technology. Data on the Shenzhen heat island and points of interest were simulated and analyzed on a GIS platform to discover the main features of the distribution of functional areas. Shenzhen extends in an east-west direction.
The city's main streets follow the direction of city development, so the functional areas of the city are likewise distributed along the east-west axis. The urban heat island can be expressed as a heat map over the functional urban areas, and it corresponds with the regional POIs. The results clearly show that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is influenced primarily by the properties of the underlying surface, setting aside the impact of the urban climate itself. Taking urban POIs as the object of analysis, the distribution of POIs and population aggregation are closely connected, so that the distribution of the population corresponds with the distribution of the urban heat island.
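As a rough illustration of the correlation analysis the abstract describes, the sketch below computes a Pearson correlation between per-grid-cell POI counts and land surface temperature (LST) values. The function name, grid, and numbers are hypothetical placeholders, not the study's data.

```python
import numpy as np

def poi_lst_correlation(poi_counts, lst_values):
    """Pearson correlation between per-cell POI counts and land surface
    temperature (LST) values over the same city grid."""
    return float(np.corrcoef(np.asarray(poi_counts, float),
                             np.asarray(lst_values, float))[0, 1])

# Hypothetical per-grid-cell values: denser-POI cells tend to run hotter.
poi_density = [120, 80, 200, 15, 60, 300, 10]              # POIs per cell
surface_temp = [34.1, 32.8, 35.6, 29.9, 31.7, 36.4, 29.2]  # deg C from LST inversion
r = poi_lst_correlation(poi_density, surface_temp)
```

A strong positive r, as in this toy case, is the kind of evidence behind the "one-to-one correspondence" claim; the real analysis would run over rasterized MODIS LST and crawled POI layers in GIS.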

Keywords: POI, satellite remote sensing, population distribution, urban heat island thermal map

Procedia PDF Downloads 103
432 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention on the side of both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed through conversation, because an erroneous form may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that play a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when a breakdown in communication leads the interlocutor to attempt to remedy the gap through talk. Therefore, this study investigated whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on, on the one hand, the frequency of conversational strategies used in negotiation of meaning in classrooms and, on the other, the speaking accuracy of Iranian intermediate EFL learners. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test (PET) was administered to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. The speaking sections of the PET were used as pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was measured as the percentage of clauses with no grammatical errors out of the total clauses produced. In all groups, the grammatical points of accuracy were instructed, and the use of conversational strategies was practiced.
Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting differences, arranging sentences into stories, and a guessing game) were employed in the experimental groups during the treatment sessions, and the students were required to practice conversational strategies while doing the speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. Statistical analysis demonstrated that both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies through negotiation. Of the two, the reasoning gap tasks had the greater impact on encouraging negotiation of meaning and increasing the number of conversational strategies used each session. The findings also indicated that both task types helped learners significantly improve their speaking accuracy, with the reasoning gap tasks again more effective than the information gap tasks.
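The accuracy measure used in this study, the percentage of error-free clauses among all produced clauses, can be sketched in a few lines. The function name and the tallies below are illustrative only, not the study's data.

```python
def speaking_accuracy(error_free_clauses, total_clauses):
    """Speaking accuracy as the percentage of produced clauses that
    contain no grammatical errors, per the measure described above."""
    if total_clauses == 0:
        raise ValueError("no clauses produced")
    return 100.0 * error_free_clauses / total_clauses

# Hypothetical tally from one transcribed test: 42 of 60 clauses error-free.
pretest = speaking_accuracy(42, 60)  # 70.0 percent
```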

Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks

Procedia PDF Downloads 308
431 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, from the observation that any feature smaller than the actual resolution (physical or numerical), e.g., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied to the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flows often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence, or sharp interfaces. Over the past several years, investigations of this regularization technique have shown its capability to regularize shocks and turbulence simultaneously. The observable method has been applied in direct numerical simulations of shocks and turbulence, where discontinuities are successfully regularized and flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share with shocks and turbulence the nonlinear irregularity caused by the nonlinear terms in the governing equations, namely the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales that sharpen the interface, causing discontinuities.
Many numerical methods for two-phase flows fail in the high Reynolds number case, while others depend on numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, usually about one grid length. A single rising bubble and the Rayleigh-Taylor instability are studied in particular to examine the performance of the observable method. A pseudo-spectral method, which introduces no numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as their positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can regularize the sharp interface in two-phase flow simulations.
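As a hedged sketch of the kind of inviscid, convective-term filtering described above, the snippet below low-pass filters a 1-D field in spectral space with a Helmholtz-type kernel, u_hat / (1 + (alpha*k)^2), with alpha playing the role of the observable scale. The specific filter form and the step-function "interface" are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def observable_filter(u, alpha):
    """Helmholtz-type low-pass filter in spectral space on a 2*pi-periodic
    grid: u_hat -> u_hat / (1 + (alpha*k)**2), alpha ~ observable scale."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0, 1, ..., -1
    u_hat = np.fft.fft(u)
    return np.real(np.fft.ifft(u_hat / (1.0 + (alpha * k) ** 2)))

# A sharp step stands in for a phase interface; filtering smooths the
# jump while exactly preserving the mean (the k = 0 mode is untouched).
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.where(x < np.pi, 1.0, 0.0)
u_bar = observable_filter(u, alpha=0.2)
```

Because the filter acts in spectral space, it introduces no numerical diffusion, consistent with the pseudo-spectral discretization the paper uses.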

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 498
430 Environmental Contamination of Water Bodies by Waste Produced by Slaughterhouses and the Prevalence of Waterborne Diseases in Kumba Municipality

Authors: Maturin Désiré Sop Sop, Didien Njumba Besende, Samuel Fosso Wamba

Abstract:

This study examines the nexus between drinking water sources in the Kumba municipality and their health implications, in view of the recurrent incidence of waterborne diseases such as typhoid, cholera, diarrhea, dysentery, hepatitis A, and malaria. The study adopted a purposive sampling technique; surveys were conducted from June to December 2022, and 150 questionnaires were retrieved of the 210 administered to the affected populations of Kosala, Buea Road, and Mambanda. Information was collected through surveys, questionnaires, key informant interviews, laboratory analysis of collected drinking water samples, the researchers' direct observation, and hospital reports on the prevalence of waterborne diseases. Water samples from the nearby streams and wells communally used by the local population for drinking, and from five slaughterhouses within the affected areas, were laboratory tested to determine alterations in their chemical, physical, and microbiological characteristics. All collected samples were tested for changes in properties such as temperature, turbidity, EC, pH, TDS, TSS, Cl, SO42-, PO43-, NO3-, Fe, Na, BOD, COD, DO, E. coli, and total coliform concentration, and the results were compared with the WHO regulations for water quality. The laboratory analysis of drinking water sources that are simultaneously used by the surrounding abattoirs revealed significant alterations in these water quality parameters. This is due to the channeling of untreated wastes into the drinking water points, as well as the shared use of dirty utensils, such as buckets, between the slaughterhouses and the streams and wells that serve as drinking water sources for the local population.
On the human health side, these results were compared with hospital data, which revealed that the consumption of such contaminated water in the localities of Kosala, Mambanda, and Buea Road negatively affected the local population, as shown by the high incidences of typhoid, cholera, diarrhea, dysentery, hepatitis A, and malaria. The poor management of drinking water sources pollutes streams and significantly exposes the local population to numerous waterborne diseases. Efforts should be made to provide clean pipe-borne water to the affected localities of Kumba and to ensure the proper management of wastes.
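A minimal sketch of the comparison step described above: flagging measured parameters that exceed guideline limits. The threshold values and sample readings here are illustrative placeholders, not the study's data or the official WHO guideline table, which should be consulted directly.

```python
def flag_exceedances(sample, limits):
    """Return, sorted, the parameters in a water sample whose measured
    value exceeds the corresponding guideline limit."""
    return sorted(p for p, v in sample.items() if p in limits and v > limits[p])

# Illustrative thresholds only; check the current WHO guidelines before use.
GUIDELINE_LIMITS = {"nitrate_mg_L": 50.0, "turbidity_NTU": 5.0,
                    "e_coli_per_100mL": 0.0}

# Hypothetical stream sample of the kind collected in this study.
sample = {"nitrate_mg_L": 62.0, "turbidity_NTU": 3.1, "e_coli_per_100mL": 120.0}
flags = flag_exceedances(sample, GUIDELINE_LIMITS)
```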

Keywords: drinking water, diseases, Kumba, municipality

Procedia PDF Downloads 76
429 Downward Vertical Evacuation of People with Disabilities from Tsunami Using Escape Bunker Technology

Authors: Febrian Tegar Wicaksana, Niqmatul Kurniati, Surya Nandika

Abstract:

Indonesia is one of the countries facing the greatest number of disaster occurrences and threats because it lies not only at the junction of three tectonic plates (the Eurasian, Indo-Australian, and Pacific plates) but also on the Ring of Fire, exposing it to earthquakes, tsunamis, volcanic eruptions, and more. Recent research shows that there are potential areas on the southern coast of Java that could be devastated by a tsunami. A tsunami is a series of waves in a body of water caused by the displacement of a large volume of water, generally in an ocean. When the waves enter shallow water, they may rise to several feet or, in rare cases, tens of feet, striking the coast with devastating force. Reference parameters include the magnitude, the depth of the epicentre, the distance from the epicentre to land, the water depth at each point along the path, the arrival time at the shore, and the growth of the waves; the interaction of these parameters produces great variance in tsunami waves. On this basis, we can formulate the preparations needed for disaster mitigation strategies. Mitigation strategies play an important role in reducing the number of victims and the damage in an area, and this reduction must be directed especially toward those hardest to mobilize in a tsunami disaster area: the elderly, the sick, and people with disabilities. Until now, the standard method for rescuing people from a tsunami has been horizontal evacuation. This evacuation system is not optimal because it takes a long time and cannot readily be used by people with disabilities. The writers propose a vertical evacuation model with an escape bunker system. This bunker system is chosen because downward vertical evacuation is considered more efficient and faster, especially in coastal areas without any surrounding highlands.
A downward evacuation system is better than upward evacuation because it avoids the risk of erosion of the ground around the structure, which can affect the building. The structure of the bunker and the evacuation process during, and even after, the disaster are the main priorities to be considered. The bunker must provide earthquake resistance, durability against water flow, tolerance of varied ground interactions, and a waterproof design. When the situation returns to normal, victims and casualties can move to a safer place. The bunker will be located near hospitals and public places and will have a wide entrance supported by a large slide to ease access for people with disabilities. The escape bunker technology is expected to reduce the number of tsunami victims among those with low mobility.

Keywords: escape bunker, tsunami, vertical evacuation, mitigation, disaster management

Procedia PDF Downloads 492
428 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach

Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista

Abstract:

Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Geovisualisation has thus become an essential asset in the defense sector, indispensable for better decision-making in dynamic and temporal scenarios, operation planning and management in the field, situational awareness, effective planning and monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models that enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information and valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images, introducing scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEMs) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, along with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks.
One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.
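To make the two-branch fusion strategy concrete, here is a toy NumPy sketch: two stand-in "encoders" (single per-pixel linear maps, not the paper's encoder-decoder CNNs) produce feature maps from an optical/radar patch and a DEM patch, which are fused by channel concatenation and a 1x1 projection into a dense depth map. All shapes, weights, and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, w):
    """Toy 'encoder': one linear map per pixel (stand-in for a CNN branch).
    Maps (H, W, C) -> (H, W, F)."""
    return np.tensordot(image, w, axes=([2], [0]))

def fuse(feat_a, feat_b, w_fuse):
    """Fuse two feature maps by channel concatenation plus a 1x1
    projection, the strategy sketched for the imagery and DEM branches."""
    stacked = np.concatenate([feat_a, feat_b], axis=-1)
    return np.tensordot(stacked, w_fuse, axes=([2], [0]))

# Hypothetical inputs: a 4-band optical/radar patch and a 1-band DEM patch.
img = rng.standard_normal((8, 8, 4))
dem = rng.standard_normal((8, 8, 1))
feat_img = encode(img, rng.standard_normal((4, 16)))
feat_dem = encode(dem, rng.standard_normal((1, 16)))
depth = fuse(feat_img, feat_dem, rng.standard_normal((32, 1)))[..., 0]  # (8, 8)
```

In the real pipeline the per-pixel maps would be learned convolutional encoders and the projection a trained depth head, but the dataflow (two branches, concatenate, project to dense depth) is the same.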

Keywords: depth, deep learning, geovisualisation, satellite images

Procedia PDF Downloads 5
427 Blister Formation Mechanisms in Hot Rolling

Authors: Rebecca Dewfall, Mark Coleman, Vladimir Basabe

Abstract:

Oxide scale growth is an inevitable byproduct of the high-temperature processing of steel. Blistering is a phenomenon that occurs during oxide growth, where high temperatures result in the swelling of surface scale, producing a bubble-like feature. Blisters can subsequently become embedded in the steel substrate during hot rolling in the finishing mill. This rolled-in scale defect causes havoc within industry: wear on machinery, loss of customer satisfaction, poor surface finish, loss of material, and lost profit. Even though blistering is a highly prevalent issue, much about it is still not known or understood. The classic iron oxidation system is a complex multiphase system formed of wustite, magnetite, and hematite, producing multi-layered scales. Each phase has independent properties such as thermal coefficients, growth rate, and mechanical properties. Furthermore, each additional alloying element has a different affinity for oxygen and a different mobility in the oxide phases, so that oxide morphologies are specific to alloy chemistry. Blister regimes can therefore be unique to each steel grade, resulting in a diverse range of formation mechanisms. Laboratory conditions were selected to simulate industrial hot rolling, with temperature ranges approximating the formation of secondary and tertiary scales in the finishing mills. Samples with composition 0.15 wt% C, 0.1 wt% Si, 0.86 wt% Mn, 0.036 wt% Al, and 0.028 wt% Cr were oxidised in a thermo-gravimetric analyser (TGA), with an air velocity of 10 litres min-1, at temperatures of 800°C, 850°C, 900°C, 1000°C, 1100°C, and 1200°C, respectively. Samples were held at temperature in an argon atmosphere for 10 minutes, then oxidised in air for 600 s, 60 s, 30 s, 15 s, and 4 s, respectively. Oxide morphology and blisters were characterised using EBSD, WDX, nanoindentation, FIB, and FEG-SEM imaging. Blistering was found to involve both a nucleation and a growth process.
During nucleation, the scale detaches from the substrate and blisters after a very short period, roughly 10 s. The steel substrate is then exposed inside the blister and further oxidised in the reducing atmosphere there; this atmosphere is, however, highly dependent upon the porosity of the blister crown. The blister crown was found to be consistently between 35-40 µm thick for all heating regimes, which supports the theory that the blister inflates first and the oxide then grows underneath. Upon heating, two modes of blistering were identified. In Mode 1, it was ascertained that the stresses produced by oxide growth increase with increasing oxide thickness; the incubation time for blister formation is therefore shortened by increasing temperature. In Mode 2, an increase in temperature results in oxide with high ductility and high porosity, which accommodate the intrinsic stresses from oxide growth. Mode 2 is thus the inverse of Mode 1, and its incubation time increases with temperature. A new phenomenon was also reported whereby blisters formed exclusively through cooling at elevated temperatures above the Mode 2 range.

Keywords: FEG-SEM, nucleation, oxide morphology, surface defect

Procedia PDF Downloads 144
426 Community Singing, a Pathway to Social Capital: A Cross-Cultural Comparative Assessment of the Benefits of Singing Communities in South Tyrol and South Africa

Authors: Johannes Van Der Sandt

Abstract:

This quantitative study investigates different approaches to community singing for building social capital in South Tyrol, Italy, and in South Africa. The impact of the various approaches to community singing is examined by investigating the main components of social capital, namely social norms and obligations, social networks and associations, and trust, and how these components are manifested in two different societies. The research is based on the premise that community singing is an important agent for the development of social capital. It seeks to establish in what form community singing can best enhance the social capital of communities in South Tyrol that are undergoing significant changes in the ways social capital is generated, on account of demographic, economic, technological, and cultural changes. South Tyrol and South Africa share some similarities in the management of their multi-cultural composition. By comparing the different approaches to community singing in two multi-cultural societies, the study hopes to gain insight into, and an understanding of, the connections between culture, social cohesion, and identity, and thereby to add to the understanding of the building of social capital through community singing. That participation in music contributes to the growth of social capital in communities is, among other things, the finding of an ever-increasing body of research. In sociological discourses on social capital generation, the dimension of community music-making is recognized as an important factor. Trust and mutual cooperation are products of people listening to each other, working or playing together, and caring about each other; this is how social capital develops as an important shared resource. Scholars of community music still do not agree on a short and concise definition of community music.
For the purpose of this research, the author adopts the definition of the Community Music Activity Commission of the International Society for Music Education, which characterizes community music by decentralization, accessibility, equal opportunity, and active participation in music-making. These principles are social and political, and there can be no doubt that community music activity is more than a purely musical one. Trust, shared norms and values, civic and community involvement, networks, knowledge resources, contact with families and friends, and fellowship are key components in fostering group cohesion and social capital development in a community, and the research will show that there is no better place for these factors to flourish than in a community singing group. Through this comparative study, the aim is to identify, analyze, and explain similarities and differences in approaches to community singing across societies in rapid transition from traditional cultural habits to global ones characterized by a plurality of orientation points, in order to gain a better understanding of the various directions South Tyrolean singing culture can take.

Keywords: community music, multicultural, singing, social capital

Procedia PDF Downloads 283
425 Public Values in Service Innovation Management: Case Study in Elderly Care in Danish Municipality

Authors: Christian T. Lystbaek

Abstract:

Background: The importance of innovation management has traditionally been ascribed to private production companies; however, there is an increasing interest in public services innovation management. One of the major theoretical challenges arising from this situation is to understand the public values justifying public services innovation management. However, there is no single, stable definition of public value in the literature. The research question guiding this paper is: What is the supposed added value operating in the public sphere? Methodology: The study takes an action research strategy. This is a highly contextualized methodology, which is enacted within a particular set of social relations into which one expects to integrate the results. As such, this research strategy is particularly well suited for its potential to generate results that can be applied by managers. The aim of action research is to produce proposals with a creative dimension capable of compelling actors to act in a new and pertinent way in relation to the situations they encounter. The context of the study is a workshop on public services innovation within elderly care. The workshop brought together different actors, such as managers, personnel and two groups of users-citizens (elderly clients and their relatives). The process was designed as an extension of the co-construction methods inherent in action research. Scenario methods and focus groups were applied to generate dialogue. The main strength of these techniques is to gather and exploit as much data as possible by exposing the discourse of justification used by the actors to explain or justify their points of view when interacting with others on a given subject. The approach does not directly interrogate the actors on their values, but allows their values to emerge through debate and dialogue. Findings: The public values related to public services innovation management in elderly care were identified in two steps.
In the first step, identification of values, values were identified in the discussions. Through continuous analysis of the data, a network of interrelated values was developed. In the second step, tracking group consensus, we then ascertained the degree to which the meaning attributed to the value was common to the participants, classifying the degree of consensus as high, intermediate or low. High consensus corresponds to strong convergence in meaning, intermediate to generally shared meanings between participants, and low to divergences regarding the meaning between participants. Only values with high or intermediate degree of consensus were retained in the analysis. Conclusion: The study shows that the fundamental criterion for justifying public services innovation management is the capacity for actors to enact public values in their work. In the workshop, we identified two categories of public values, intrinsic value and behavioural values, and a list of more specific values.

Keywords: public services innovation management, public value, co-creation, action research

Procedia PDF Downloads 278
424 Development of a Bus Information Web System

Authors: Chiyoung Kim, Jaegeol Yim

Abstract:

Bus service is often either the main or the only public transportation available in cities. In metropolitan areas, both subways and buses are available, whereas in medium-sized cities buses are usually the only type of public transportation available. Bus Information Systems (BIS) provide users with the current locations of running buses, efficient routes from one place to another, points of interest around a given bus stop, the series of bus stops making up a given bus route, and so on. Thanks to BIS, people do not have to waste time at a bus stop waiting for a bus, because BIS provides exact information on bus arrival times at a given stop. Therefore, BIS does a lot to promote the use of buses, contributing to pollution reduction and saving natural resources. BIS implementation requires a huge budget, as it calls for a lot of special equipment such as roadside equipment, automatic vehicle identification and location systems, trunked radio systems, and so on. Consequently, medium and small sized cities with a low budget cannot afford to install BIS, even though people in these cities need BIS service more desperately than people in metropolitan areas. It is possible to provide BIS service at virtually no cost under the assumption that everybody carries a smartphone and that in every running bus there is at least one person with a smartphone who is willing to reveal his/her location details while riding. This assumption is usually true in the real world. The smartphone penetration rate is greater than 100% in the developed countries, and there is no reason for a bus driver to refuse to reveal his/her location details while driving. We have developed a mobile app that periodically reads values of sensors including GPS and sends GPS data to the server when the bus stops or when the elapsed time since the last send attempt exceeds a threshold. This app detects the bus stop state by investigating the sensor values.
The server that receives GPS data from this app has also been developed. Under the assumption that the current locations of all running buses collected by the mobile app are recorded in a database, we have also developed a web site that provides, through the Internet, all kinds of information that most BISs provide to users. The development environment is: OS: Windows 7 64bit, IDE: Eclipse Luna 4.4.1, Spring IDE 3.7.0, Database: MySQL 5.1.7, Web Server: Apache Tomcat 7.0, Programming Language: Java 1.7.0_79. Given a start and a destination bus stop, the system finds a shortest path from the start to the destination using the Dijkstra algorithm. Then, it finds a convenient route considering the number of transfers. For the user interface, we use Google Maps. Template classes that are used by the Controller, DAO, Service and Utils classes include BUS, BusStop, BusListInfo, BusStopOrder, RouteResult, WalkingDist, Location, and so on. We are now integrating the mobile app system and the web app system.
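The shortest-path step described above can be sketched with a textbook implementation of Dijkstra's algorithm. The stop network, travel times, and function name below are illustrative assumptions, not the actual Java classes of the developed system:

```python
import heapq

def shortest_path(graph, start, dest):
    """Dijkstra's algorithm over a bus-stop graph.

    graph: {stop: [(neighbour, travel_time), ...]}
    Returns (total_time, [stop, ...]) or (None, []) if unreachable.
    """
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue                      # stale heap entry
        visited.add(u)
        if u == dest:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dest not in dist:
        return None, []
    # Walk the predecessor chain back from the destination.
    path, node = [dest], dest
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[dest], path[::-1]

# Hypothetical stop network; edge weights are travel minutes.
stops = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(shortest_path(stops, "A", "D"))  # (8, ['A', 'C', 'B', 'D'])
```

In the real system the graph would be built from the BusStop and BusStopOrder records, and a second pass would re-rank candidate routes by transfer count.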

Keywords: bus information system, GPS, mobile app, web site

Procedia PDF Downloads 216
423 Subjective Realities of Neoliberalized Social Media Natives: Trading Affect for Effect

Authors: Rory Austin Clark

Abstract:

This primary research represents an ongoing two-year inductive mixed-methods project endeavouring to unravel the subjective reality of hyperconnected young adults in Western societies who have come of age with social media and smartphones. It is to be presented as well as analyzed and contextualized through a written master’s thesis as well as a documentary/mockumentary meshed with a Web 2.0 app providing the capacity for prosumer, 'audience 2.0' functionality. The media component seeks to explore not only thematic issues via real-life research interviews and fictional narrative but technical issues within the format relating to the quest for intimate, authentic connection as well as compelling dissemination of scholarly knowledge in an age of ubiquitous personalized daily digital media creation and consumption. The overarching hypothesis is that the aforementioned individuals process and make sense of their world, find shared meaning, and formulate notions-of-self in ways drastically different than pre-2007 via hyper-mediation-of-self and surroundings. In this pursuit, research questions have progressed from examining how young adult digital natives understand their use of social media to notions relating to the potential functionality of Web 2.0 for prosocial and altruistic engagement, on and offline, through the eyes of these individuals no longer understood as simply digital natives, but social media natives, and at the conclusion of that phase of research, as 'neoliberalized social media natives' (NSMN). These represent the two most potent macro factors in the paradigmatic shift in NSMN’s worldview: that they are children not just of social media, but of the palpable shift to neoliberal ways of thinking and being in western socio-cultures since the 1980s, two phenomena that have a reflexive æffective relationship on their perception of figure and ground.
This phase also resulted in the working hypothesis of 'social media comparison anxiety' and a nascent understanding of NSMN’s habitus and habitation in a subjective reality of fully converged online/offline worlds, where any phenomena originating in one realm in some way are, or at the very least can be, re-presented or have effect in the other—creating hyperreal reception. This might also be understood through a 'society as symbolic cyborg model', in which individuals have a 'digital essence'-- the entirety of online content that references a single person, as an auric living, breathing cathedral, museum, gallery, and archive of self of infinite permutations and rhizomatic entry and exit points.

Keywords: affect, hyperreal, neoliberalism, postmodernism, social media native, subjective reality, Web 2.0

Procedia PDF Downloads 142
422 Remote Radiation Mapping Based on UAV Formation

Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov

Abstract:

High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy’s Office of Environmental Management (DOE-EM) personnel. In this paper, multiple units of unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for the EM tasks. To achieve this goal, a fully autonomous system of multicopter-based UAV swarm in 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is well suited to small multicopter UAVs owing to its compact size and ease of interfacing with the UAV’s onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform with a fully autonomous flight feature is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) 2D planar circular formation (3 UAVs) and (b) 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries the CZT sensor; the real-time radiation data are used for the calculation of a bulk heading vector for the swarm to achieve a UAV swarm’s source-seeking behavior. Also, a spinning formation is studied for both cases to improve gradient estimation near a radiation source.
In the 3D tetrahedron formation, a UAV located closest to the source is designated as a lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated a collective and coordinated movement for estimating a gradient vector for the radiation source and determining an optimal heading direction of the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in the indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add/replace radiation sensors to the UAV platforms in the field conditions enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be efficiently performed at wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
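The gradient-vector estimation at the core of the source-seeking behavior can be sketched as follows. Assuming the count-rate field is locally linear, the readings of the three follower UAVs relative to the lead unit give three difference equations, and a 3x3 linear solve yields the gradient estimate. The positions, counts, and function names below are hypothetical, not the authors' implementation:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system A x = b."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def radiation_gradient(positions, counts):
    """Estimate the local gradient of the count-rate field from four UAVs
    in tetrahedron formation (lead unit listed first)."""
    p0, c0 = positions[0], counts[0]
    A = [[p[k] - p0[k] for k in range(3)] for p in positions[1:]]
    b = [c - c0 for c in counts[1:]]
    return solve3(A, b)

# Synthetic linear field c = 2x - y + 3z sampled at a unit tetrahedron.
pos = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
cnt = [2 * x - y + 3 * z for x, y, z in pos]
print(radiation_gradient(pos, cnt))  # ≈ [2.0, -1.0, 3.0]
```

The swarm's bulk heading vector would then be taken along this gradient; real sensor counts are noisy, which is one motivation for the spinning formation studied in the paper.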

Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation

Procedia PDF Downloads 97
421 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and the geometry of the track structure. Ballast also holds the track in place as the trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as a quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated based on excavation depth, excavation width, volume of the track skeleton (sleeper and rail) and sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network by using 3D laser scanning technology (LiDAR). This vast amount of data represents a modelization of the entire railway infrastructure, making it possible to conduct various simulations for maintenance purposes. This paper aims to present an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is needed. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from this data is carried out.
The surplus in ballast is then estimated by performing a comparison between this empirically obtained ballast profile and a geometric modelization of the theoretical ballast profile thresholds as dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surpluses that amount to values close to the total quantities of spoil ballast excavated.
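A minimal sketch of the surplus computation, assuming each extracted cross-section is a polyline of (lateral offset, height) points and the theoretical profile is a template of the same form. The profiles and function names below are hypothetical illustrations, not the production algorithm:

```python
def profile_area(profile):
    """Trapezoidal cross-section area under a (lateral offset, height) polyline."""
    area = 0.0
    for (x0, z0), (x1, z1) in zip(profile, profile[1:]):
        area += 0.5 * (z0 + z1) * (x1 - x0)
    return area

def surplus_volume(measured, theoretical, section_length_m):
    """Surplus ballast for one LiDAR cross-section: measured area minus the
    theoretical template area, times the length of track the section covers."""
    excess_area = max(0.0, profile_area(measured) - profile_area(theoretical))
    return excess_area * section_length_m

# Hypothetical shoulder profiles (metres): the measured one carries extra ballast.
theoretical = [(0.0, 0.5), (1.0, 0.3), (2.0, 0.0)]
measured = [(0.0, 0.5), (1.0, 0.5), (2.0, 0.2)]
print(round(surplus_volume(measured, theoretical, section_length_m=1.0), 3))  # ≈ 0.3
```

Summing this quantity over successive cross-sections along a segment would give the total surplus to be removed and recycled before renewal works.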

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 107
420 Evaluation of Low-Global Warming Potential Refrigerants in Vapor Compression Heat Pumps

Authors: Hamed Jafargholi

Abstract:

Global warming presents an immense environmental risk, causing detrimental impacts on ecological systems and threatening coastal areas. Implementing efficient measures to minimize greenhouse gas emissions and the use of fossil fuels is essential to reducing global warming. Vapor compression heat pumps provide a practical method for harnessing energy from waste heat sources and reducing energy consumption. However, traditional working fluids used in these heat pumps generally have a significant global warming potential (GWP), which might cause severe greenhouse effects if they are released. The emphasis on low-GWP (below 150) refrigerants aims to advance vapor compression heat pump technology. A classification system for vapor compression heat pumps is offered, with different boundaries based on the needed heat temperature and advancements in heat pump technology. A heat pump can be classified as a low temperature heat pump (LTHP), medium temperature heat pump (MTHP), high temperature heat pump (HTHP), or ultra-high temperature heat pump (UHTHP). The HTHP/UHTHP border is 160 °C; the MTHP/HTHP and LTHP/MTHP limits are 100 and 60 °C, respectively. The refrigerant is one of the most important parts of a vapor compression heat pump system. Presently, the main ways to choose a refrigerant are based on ozone depletion potential (ODP) and GWP, with GWP being as low as possible and ODP being zero. Pure low-GWP refrigerants, such as natural refrigerants (R718 and R744), hydrocarbons (R290, R600), hydrofluorocarbons (R152a and R161), hydrofluoroolefins (R1234yf, R1234ze(E)), and hydrochlorofluoroolefin (R1233zd(E)), were selected as candidates for vapor compression heat pump systems based on these selection principles. The performance, characteristics, and potential uses of these low-GWP refrigerants in heat pump systems are investigated in this paper.
As vapor compression heat pumps with pure low-GWP refrigerants become more common, more and more low-grade heat can be recovered, which means that energy consumption would decrease. The research outputs showed that R718 is appropriate for UHTHP applications, R1233zd(E) for HTHP applications, R600, R152a, R161, and R1234ze(E) for MTHP applications, and R744, R290, and R1234yf for LTHP applications. The selection of an appropriate refrigerant should, in fact, take two different points of view into consideration: the environmental and the thermodynamic. It might be argued that, depending on the situation, a trade-off between these two should constantly be considered. The environmental approach now carries far more weight than it did previously, according to European Union regulations. This will promote sustainable energy consumption and social development in addition to assisting in the reduction of greenhouse gas emissions and the management of global warming.
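The temperature classes and the refrigerant recommendations above can be expressed as a simple lookup. The boundary handling (which class a borderline temperature such as exactly 100 °C falls into) is an assumption, since the text only gives the border values:

```python
def classify_heat_pump(supply_temp_c):
    """Classify a vapor compression heat pump by required heat supply
    temperature, using the class borders given in the text
    (LTHP/MTHP at 60 °C, MTHP/HTHP at 100 °C, HTHP/UHTHP at 160 °C)."""
    if supply_temp_c < 60:
        return "LTHP"
    if supply_temp_c < 100:
        return "MTHP"
    if supply_temp_c < 160:
        return "HTHP"
    return "UHTHP"

# Refrigerants found appropriate for each class in the study.
appropriate = {
    "LTHP": ["R744", "R290", "R1234yf"],
    "MTHP": ["R600", "R152a", "R161", "R1234ze(E)"],
    "HTHP": ["R1233zd(E)"],
    "UHTHP": ["R718"],
}
cls = classify_heat_pump(120)
print(cls, appropriate[cls])  # HTHP ['R1233zd(E)']
```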

Keywords: vapor compression, global warming potential, heat pumps, greenhouse

Procedia PDF Downloads 32
419 A Geoprocessing Tool for Early Civil Work Notification to Optimize Fiber Optic Cable Installation Cost

Authors: Hussain Adnan Alsalman, Khalid Alhajri, Humoud Alrashidi, Abdulkareem Almakrami, Badie Alguwaisem, Said Alshahrani, Abdullah Alrowaished

Abstract:

Most of the cost of installing a new fiber optic cable is attributed to civil work (trenching) cost. In many cases, information technology departments receive project proposals in their eReview system, but not all projects are visible to everyone. Additionally, if there is no IT scope in the proposed project, it is not likely to be visible to IT, and sometimes it is too late to add IT scope after project budgets have been finalized. Finally, the eReview system is a repository of PDF files for each project, which commits the reviewer to manual work and limits automation potential. This paper details a solution that addresses the late notification in the eReview system by integrating IT site GIS data (site locations) with land use permit (LUP) data (civil work activity); an LUP request is the first step before securing the required land usage authorizations, which means no detailed design exists for any relevant project before an LUP request is approved. To address the manual nature of the eReview system, both the LUP system data and the IT data are loaded into ArcGIS Desktop, which enables the creation of a geoprocessing tool, with either Python or ModelBuilder, to automate finding and evaluating potentially usable LUP requests to reduce trenching between two sites in need of a new FOC. To achieve this, a weekly dump was taken from LUP system production data and loaded manually into ArcMap Desktop. A custom tool was then developed in ModelBuilder, which consisted of a table of two columns containing all the pairs of sites in need of new fiber connectivity. The tool iterates over all rows of this table, taking one site pair at a time and finding potential LUPs between them that satisfy the provided search radius. If a group of LUPs is found, an iterator goes through each LUP to find the required civil work between the two sites, the LUP polyline feature, and the distance along the line, which is counted as cost avoidance if an IT scope is added.
Finally, the tool exports an Excel file named after the site pair, containing as many rows as the number of LUPs that met the search radius, with trenching and pulling information and cost. As a result, multiple projects have been identified: historical, missed-opportunity, and proposed projects. For the proposed project, the savings were about 75% ($750,000) compared with installing a new fiber along the Euclidean distance between the Abqaiq GOSP2 and GOSP3 DCOs. In conclusion, the current tool setup identifies opportunities to bundle civil work with a single project at a time between two sites. More work is needed to allow the bundling of multiple projects between two sites to achieve even more cost avoidance in both capital cost and carbon footprint.
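The pair-iteration logic can be sketched in plain Python, standing in for the ModelBuilder workflow. Here an LUP is approximated by an identifier, a midpoint, and a trench length, and a candidate is any LUP whose midpoint lies within the search radius of the straight line between the two sites; all names and geometries are hypothetical:

```python
from math import hypot

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b (2D)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:
        return hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def usable_lups(site_pairs, lups, radius):
    """For each pair of sites needing a new FOC, list the LUP trenches
    whose midpoint lies within `radius` of the line between the sites."""
    report = {}
    for s1, s2 in site_pairs:
        hits = [(lup_id, length)
                for lup_id, mid, length in lups
                if point_segment_dist(mid, s1, s2) <= radius]
        report[(s1, s2)] = hits
    return report

# Hypothetical data: one site pair, two LUP trenches (id, midpoint, length km).
pairs = [((0.0, 0.0), (10.0, 0.0))]
lups = [("LUP-1", (5.0, 0.5), 3.0), ("LUP-2", (5.0, 8.0), 2.0)]
print(usable_lups(pairs, lups, radius=1.0))
```

The real tool would instead query LUP polyline features with ArcGIS geoprocessing and export the hits per site pair to Excel.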

Keywords: GIS, fiber optic cable installation optimization, eliminate redundant civil work, reduce carbon footprint for fiber optic cable installation

Procedia PDF Downloads 217
418 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling

Authors: Vibha Devi, Shabina Khanam

Abstract:

Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting properties, and serve as a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various parameter conditions: temperature (40 - 80) °C, pressure (200 - 350) bar, flow rate (5 - 15) g/min, particle size (0.430 - 1.015) mm and amount of co-solvent (0 - 10) % of solvent flow rate, through central composite design (CCD). CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resampled dataset to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of datasets through resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is 31 (eliminating one observation), repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, repeated 100 times. Estimands for these resampling techniques are the mean, standard deviation, coefficient of variation, and standard error of the mean.
For ω-6 linoleic acid concentration, the mean value was approx. 58.5 % for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for ω-3 linolenic acid concentration, the mean was observed as 22.5 % through both resamplings. Variance exhibits the spread of the data from its mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66 %) and 6 for ω-3 linolenic acid (ranging from 16.71 to 26.2 %). Further, the low standard deviation (approx. 1 %), low standard error of the mean (< 0.8) and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All the estimator values of the coefficient of variation, standard deviation and standard error of the mean are found within the 95 % confidence interval.
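The two resampling schemes as described (leave-one-out jackknife, and bootstrap with replacement repeated 100 times) can be sketched as follows; the concentration values below are hypothetical stand-ins for the 32 CCD runs:

```python
import random
from statistics import mean

def jackknife(sample, estimator=mean):
    """Leave-one-out replicates: a sample of size N yields N replicates,
    each computed on the remaining N-1 observations."""
    return [estimator(sample[:i] + sample[i + 1:]) for i in range(len(sample))]

def bootstrap(sample, estimator=mean, n_boot=100, seed=0):
    """Resampling with replacement: n_boot replicates of the estimator,
    each drawn from a same-size resample of the original data."""
    rng = random.Random(seed)
    n = len(sample)
    return [estimator([rng.choice(sample) for _ in range(n)])
            for _ in range(n_boot)]

# Hypothetical ω-6 linoleic acid concentrations (%).
data = [58.1, 59.3, 57.8, 60.2, 58.9, 57.5, 59.0, 58.4]
jk = jackknife(data)
bs = bootstrap(data)
print(round(mean(jk), 2), round(mean(bs), 2))  # both near mean(data) ≈ 58.65
```

For the mean as estimator, the average of the jackknife replicates equals the sample mean exactly, so the central values reported above are expected to agree across the two methods.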

Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation

Procedia PDF Downloads 139
417 The Impact of Information and Communications Technology (ICT)-Enabled Service Adaptation on Quality of Life: Insights from Taiwan

Authors: Chiahsu Yang, Peiling Wu, Ted Ho

Abstract:

From emphasizing economic development to stressing public happiness, the international community mainly hopes to understand whether the quality of life for the public is becoming better. The Better Life Index (BLI) constructed by the OECD uses living conditions and quality of life as starting points to cover 11 areas of life and to convey the state of the general public’s well-being. In light of the BLI framework, the Directorate General of Budget, Accounting and Statistics (DGBAS) of the Executive Yuan instituted the Gross National Happiness Index to understand the needs of the general public and to measure the progress of the aforementioned conditions in residents across the island. Whereas living conditions consist of income and wealth, jobs and earnings, and housing conditions, quality of life covers health status, work and life balance, education and skills, social connections, civic engagement and governance, environmental quality, and personal security. The ICT area consists of health care, living environment, ICT-enabled communication, transportation, government, education, pleasure, purchasing, and job & employment. In the wake of further science and technology development, rapid formation of information societies, and closer integration between lifestyles and information societies, the public’s well-being within information societies has indeed become a noteworthy topic. The Board of Science and Technology of the Executive Yuan uses the OECD’s BLI as a reference in the establishment of the Taiwan-specific ICT-Enabled Better Life Index. Using this index, the government plans to examine whether the public’s quality of life is improving as well as measure the public’s satisfaction with current digital quality of life. This understanding will enable the government to gauge the degree of influence and impact that each dimension of digital services has on digital life happiness while also serving as an important reference for promoting digital service development.
Information and communications technology (ICT) has been affecting people’s living styles and further impacts people’s quality of life (QoL). Even though studies have shown that ICT access and usage have both positive and negative impacts on life satisfaction and well-being, many governments continue to invest in e-government programs to initiate their path to the information society. This research is one of the few attempts to link the e-government benchmark to subjective well-being perception, to further address the gap between users’ perceptions and existing hard-data assessments, and then to propose a model to trace measurement results back to the original public policy in order for policy makers to justify their future proposals.

Keywords: information and communications technology, quality of life, satisfaction, well-being

Procedia PDF Downloads 354
416 Evaluation of Regional Anaesthesia Practice in Plastic Surgery: A Retrospective Cross-Sectional Study

Authors: Samar Mousa, Ryan Kerstein, Mohanad Adam

Abstract:

Regional anaesthesia has been associated with favourable outcomes in patients undergoing a wide range of surgeries. Beneficial effects have been demonstrated in terms of postoperative respiratory and cardiovascular endpoints, 7-day survival, time to ambulation and hospital discharge, and postoperative analgesia. Our project aimed to assess the regional anaesthesia practice in the plastic surgery department of Buckinghamshire trust and to find ways to improve the service in collaboration with the anaesthesia team. It is a retrospective study associated with a questionnaire filled out by plastic surgeons and anaesthetists to get the full picture behind the numbers. The study period was between 1/3/2022 and 23/5/2022 (12 weeks). The operative notes of all patients who had an operation under plastic surgery, whether emergency or elective, were reviewed. The criteria for suitable candidates for a regional block were set by the consultant anaesthetists as follows: age above 16, single surgical site (arm, forearm, leg, foot), no drug allergy, no pre-existing neuropathy, no bleeding disorders, not on anti-coagulation, and no infection at the site of the block. Over 12 weeks, 1061 operations were performed by plastic surgeons. Local anaesthetic cases were excluded, leaving 319 cases. Of these 319, 102 patients were suitable candidates for a regional block after applying the previously mentioned criteria. However, only seven patients had their operations under a regional block; the rest had general anaesthesia that could easily have been avoided. An online questionnaire was filled out by both plastic surgeons and anaesthetists of different training levels to find out the reasons behind the obvious preference for general over regional anaesthesia, even when this was against the patients’ interest.
The questionnaire included the following points: training level, time taken to give GA or RA, factors that influence the decision, the estimated percentage of RA candidates who had GA, the reasons behind this percentage, and recommendations. Forty-four clinicians filled out the questionnaire, among whom were 23 plastic surgeons and 21 anaesthetists. As regards training level, there were 21 consultants, 4 associate specialists, 9 registrars, and 10 senior house officers. The actual percentage of patients who were good candidates for RA but had GA instead is 93%. The replies estimated this percentage at between 10% and 30%. 29% of the respondents attributed it to surgeons’ preference for GA over RA for their operations without medical justification for the decision. 37% of the replies thought that anaesthetists prefer giving GA even if the patient is a suitable candidate for RA. 22.6% of the replies thought that patients refused to have RA, and 11.3% cited other causes. The recommendations fell along 5 main axes: protocols and pathways for regional blocks, more training opportunities for anaesthetists on regional blocks, providing a separate block room in the hospital, better communication between surgeons and anaesthetists, and patient education about the benefits of regional blocks.
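The headline 93% figure can be reproduced from the audit counts reported above (102 suitable candidates, of whom only seven received a regional block); this is simple arithmetic, shown here only to make the derivation explicit:

```python
candidates = 102    # patients meeting the regional-block criteria
had_regional = 7    # patients who actually received a regional block

missed = candidates - had_regional
missed_rate = 100 * missed / candidates
print(f"{missed} of {candidates} candidates ({missed_rate:.0f}%) had GA instead")
```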

Keywords: regional anaesthesia, regional block, plastic surgery, general anaesthesia

Procedia PDF Downloads 83