Search results for: retail food risk factor study
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 53242


292 Typology of Fake News Dissemination Strategies in Social Networks in Social Events

Authors: Mohadese Oghbaee, Borna Firouzi

Abstract:

The emergence of the Internet, and more specifically of social media, has created new channels for content dissemination. In recent years, social media users have shared information, communicated with one another, and exchanged opinions about social events in this space. Much of the information published there is suspect and produced with the intention of deceiving others; such content is commonly called "fake news". By mixing fabricated material with correct information and misleading public opinion, fake news can endanger the security of countries and deprive audiences of the basic right of free access to accurate information. Competing governments, opposition elements, profit-seeking individuals, and even rival organizations, aware of this capacity, act on a large scale to distort and overturn facts in the virtual space of target countries and communities and to steer public opinion towards their own goals. This extensive de-truthing of societies' information space has generated a wave of harm and concern all over the world. These concerns have opened a new line of research on the timely containment and reduction of the destructive effects of fake news on public opinion. The expansion of this phenomenon can also create serious problems for societies, and its impact on events such as the 2016 American elections, Brexit, the 2017 French elections, and the 2019 Indian elections has prompted the adoption of countermeasures. A simple look at the growth of research indexed in Scopus shows a sharp increase in studies with the keyword "false information", peaking at 524 items in 2020, whereas only 30 scientific-research publications appeared in this field in 2015. Because one of the capabilities of social media is to provide a context for disseminating both true and false news and information, this article investigates the classification of strategies for spreading fake news in social networks during social events. To achieve this goal, the thematic analysis research method was chosen. An extensive library study of global sources was conducted first. Then, in-depth interviews were held with 18 well-known specialists and experts in the field of news and media in Iran, selected by purposive sampling. By analyzing the data with thematic analysis, the following strategies have been obtained so far (the research is in progress): unrealistically strengthening or weakening the speed and content of the event, stimulating psycho-media movements, targeting emotionally receptive audiences such as women, teenagers, and young people, intensifying public hatred, framing reactions to the event as legitimate or illegitimate, incitement to physical conflict, normalization of violent protest, and the targeted publication of images and interviews.

Keywords: fake news, social network, social events, thematic analysis

Procedia PDF Downloads 44
291 Wasp Parasitoids of the Genus Cotesia (Hymenoptera: Braconidae) Naturally Parasitizing Pectinophora gossypiella (Saunders) on Transgenic Cotton in Indian Punjab

Authors: Vijay Kumar, G. K. Grewal, Prasad S. Burange

Abstract:

India is one of the largest cultivators of cotton in the world. Among the various constraints, insect pests pose a major hurdle to successful cotton cultivation. Various bollworms, including the pink bollworm, Pectinophora gossypiella (Saunders), cause serious losses in India, China, Pakistan, Egypt, Brazil, tropical America, and Africa. Bt cotton cultivars carrying Cry genes were introduced in India in 2002 (Cry1Ac) and 2006 (Cry1Ac + Cry2Ab) to control American, spotted, and pink bollworms. Pink bollworm (PBW) larvae infest flowers, squares, and bolls; the larva burrows into flowers and bolls to feed on pollen and seeds, respectively. It has a shorter life cycle and more generations per year, so it develops resistance more quickly than other bollworms. Furthermore, it feeds at cryptic sites, i.e., flowers and bolls/seeds, so it is not exposed to harsh environmental fluctuations and insecticidal applications, and the Cry toxin concentration is low at these feeding sites. The use of insecticides and Bt cotton has been the primary control measure and has been successful in limiting PBW damage, but over time the pest has developed resistance against both. Moreover, insecticide use increases chemical control costs while causing secondary pest problems and environmental pollution. Extensive research has indicated that monitoring and control measures such as biological, cultural, chemical, and host plant resistance methods can be integrated for effective PBW management, and the potential of various biological control organisms needs to be explored. The impact of transgenic cotton on non-target organisms, particularly natural enemies, which play an important role in pest control, is still being debated; according to some authors, Bt crops have a negative impact on natural enemies, particularly parasitoids. An experiment was carried out in the Integrated Pest Management Laboratory of the Department of Entomology, Punjab Agricultural University, Ludhiana, Punjab, India, to study the natural parasitization of PBW on Bt cotton in 2022. A large number of PBW larvae were kept individually in plastic containers and fed on cotton bolls until the emergence of a parasitoid cocoon. The first parasitoid cocoon was observed on October 25, 2022. Symptoms of parasitization were never visible on the larvae; parasitized larvae stopped feeding and became inactive before the parasitoid emerged for pupation. The grub makes its way out of the larva through a hole in the integument and, immediately after emerging, spins its cocoon. The adult parasitoid emerged from the cocoon after eight days. The parasitoids that emerged were identified as Cotesia (Hymenoptera: Braconidae) based on adult morphological features. Out of 475 PBW larvae, 87 were parasitized, giving 18.31% parasitization. Of these, 6.73% were first instar, 10.52% second instar, and 1.05% third instar larvae of PBW; no parasitization was observed in fourth instar larvae. Parasitoids were observed towards the end of the cropping season and mostly on the earlier instars. It is concluded that the potential of Cotesia as a biological control agent against PBW may be explored, as it is safer for human beings, the environment, and non-target organisms.

Keywords: biocontrol, Bt cotton, Cotesia, Pectinophora gossypiella

Procedia PDF Downloads 60
290 Key Aroma Compounds as Predictors of Pineapple Sensory Quality

Authors: Jenson George, Thoa Nguyen, Garth Sanewski, Craig Hardner, Heather Eunice Smyth

Abstract:

Pineapple (Ananas comosus), with its unique sweet flavour, is one of the most popular tropical, non-climacteric fruits consumed worldwide and the third most important tropical fruit in world production. In Australia, 99% of pineapple production comes from Queensland owing to its favourable subtropical climate. The fruit is known to contain around 500 volatile organic compounds (VOCs) at varying concentrations, which contribute greatly to flavour quality by providing distinct aroma sensory properties that are sweet, fruity, tropical, pineapple-like, caramel-like, coconut-like, etc. Aroma is one of the important factors attracting consumers and strengthening the marketplace. To better understand the aroma of Australian-grown pineapples, a matrix-matched method combining gas chromatography-mass spectrometry (GC-MS), headspace solid-phase microextraction (HS-SPME), and stable-isotope dilution analysis (SIDA) was developed and validated. The developed method represents a significant improvement over current methods through the incorporation of multiple external reference standards, multiple isotopically labelled internal standards, and a model system matching the pineapple fruit matrix. This method was employed to quantify 28 key aroma compounds in more than 200 genetically diverse pineapple varieties from a breeding program. The Australian pineapple cultivars varied in the content and composition of free volatile compounds, which were predominantly esters, followed by terpenes, alcohols, aldehydes, and ketones. Using selected commercial cultivars grown in Australia, sensory analysis provided ratings for appearance (colour), aroma (intensity, sweet, vinegar/tang, tropical fruits, floral, coconut, green, metallic, vegetal, fresh, peppery, fermented, eggy/sulphurous) and texture (crunchiness, fibrousness, and juiciness). Relationships between sensory descriptors and volatiles were explored by applying multivariate analysis (PCA) to the sensory and chemical data; the key aroma compounds of pineapple showed positive correlations with the corresponding sensory properties. The sensory and volatile data were also used to explore genetic diversity in the breeding population, and GWAS was employed to unravel the genetic control of the pineapple volatilome and its interplay with fruit sensory characteristics. This study enhances our understanding of pineapple aroma (flavour) compounds and their biosynthetic pathways and expands breeding options for pineapple cultivars. It provides foundational knowledge to support breeding programs, post-harvest and target market studies, and efforts to optimise the flavour of commercial pineapple varieties and their parent lines to produce better tasting fruit for consumers.
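
As an illustration of how stable-isotope dilution quantification of an aroma compound typically works, a minimal calculation is sketched below; the compound, peak areas, response factor, and amounts are hypothetical placeholders, not values from this study.

```python
# Minimal sketch of stable-isotope dilution analysis (SIDA) quantification.
# All peak areas, response factors and amounts below are hypothetical
# placeholders, not data from the study.

def sida_concentration(area_analyte, area_labelled_istd,
                       istd_amount_ug, response_factor, sample_mass_g):
    """Concentration (ug/g) from the analyte/internal-standard area ratio."""
    return (area_analyte / area_labelled_istd) * response_factor \
        * istd_amount_ug / sample_mass_g

# Example: an ester quantified against its isotopically labelled analogue.
conc = sida_concentration(area_analyte=1.8e6, area_labelled_istd=1.2e6,
                          istd_amount_ug=0.5, response_factor=0.95,
                          sample_mass_g=5.0)
print(f"estimated concentration: {conc:.3f} ug/g")
```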

Keywords: Ananas comosus, pineapple, flavour, volatile organic compounds, aroma, gas chromatography-mass spectrometry (GC-MS), headspace solid-phase microextraction (HS-SPME), stable-isotope dilution analysis (SIDA)

Procedia PDF Downloads 26
289 Improving Contributions to the Strengthening of the Legislation Regarding Road Infrastructure Safety Management in Romania, Case Study: Comparison Between the Initial Regulations and the Clarity of the Current Regulations - Trends Regarding the Efficiency

Authors: Corneliu-Ioan Dimitriu, Gheorghe Frățilă

Abstract:

Romania and Bulgaria have high rates of road deaths per million inhabitants. Directive (EU) 2019/1936, known as the RISM Directive, has been transposed into national law by each Member State. This research focuses on the amendments made to Romanian legislation through Government Ordinance no. 3/2022, which aims to improve road infrastructure safety management. The aim of the research is twofold: to sensitize the Romanian Government and decision-making entities to develop an integrated and competitive management system, and to establish a safe and proactive mobility system that ensures efficient and safe roads. The research includes a critical analysis of European and Romanian legislation and of the subsequent normative acts related to road infrastructure safety management. Public data from European Union and national authorities are used, together with data from the Romanian Road Authority (ARR) and the Traffic Police database. The methodology involves comparative analysis of the transpositions, criterion analysis, SWOT analysis, and GANTT and WBS diagrams; Excel is used to process the road accident databases of Romania and Bulgaria. Collaboration with Bulgarian specialists was established to identify common road infrastructure safety issues. The research concludes that the legislative changes have resulted in a relaxation of road safety management in Romania, leading to decreased control over certain management procedures, and that the amendments to primary and secondary legislation do not meet the current safety requirements for road infrastructure. It highlights the need for immediate action, further legislative changes, and strengthened administrative capacity to enhance road safety, and emphasizes regional cooperation and the exchange of best practices, including with Bulgarian specialists, for effective road infrastructure safety management. The research contributes to the theoretical understanding of road infrastructure safety management by analysing legislative changes and their impact on safety measures, underlining the importance of an integrated and proactive approach in reducing road accidents and achieving the "zero deaths" objective set by the European Union, and provides valuable insights for policymakers and decision-makers in Romania.

Keywords: management, road infrastructure safety, legislation, amendments, collaboration

Procedia PDF Downloads 56
288 Development of PCL/Chitosan Core-Shell Electrospun Structures

Authors: Hilal T. Sasmazel, Seda Surucu

Abstract:

Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach uses living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) three-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on fabricating scaffolds that closely mimic the natural extracellular matrix (ECM) to create a tissue-specific, niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring and pain and facilitate angiogenesis. This study combines the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Among the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long attracted researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue; however, its low mechanical flexibility and limited biodegradability make it necessary to use this polymer in a composite structure. PCL, on the other hand, is a versatile polymer owing to its low melting point (60°C), ease of processing, degradability by non-enzymatic processes (hydrolysis), and good mechanical properties. Nevertheless, PCL also has several disadvantages, such as its hydrophobic structure, limited bio-interaction, and susceptibility to bacterial biodegradation. It therefore becomes crucial to combine these two polymers in a hybrid material in order to overcome the disadvantages of each and combine their advantages. The scaffolds were fabricated by electrospinning, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray photoelectron spectroscopy (XPS). Additionally, gas permeability tests, mechanical tests, thickness measurements, and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan, and PCL/chitosan core-shell). Using the ImageJ software (USA) on the SEM images, the average fiber diameters were calculated as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan, and 0.412±0.339 µm for PCL/chitosan core-shell structures. The average inter-fiber pore sizes of the PCL and chitosan structures were, respectively, 66.91% and 61.90% smaller than those of the PCL/chitosan core-shell structures. TEM images proved that homogeneous, continuous, bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability of the produced PCL/chitosan core-shell structure was 2315±3.4 g.m-2.day-1. In future work, cell-material interactions of the developed PCL/chitosan core-shell structures will be investigated with the L929 ATCC CCL-1 mouse fibroblast cell line; standard MTT assays and microscopic imaging will be used to evaluate the cell attachment, proliferation, and growth capacities of the developed materials.

Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold

Procedia PDF Downloads 464
287 Combustion Variability and Uniqueness in Cylinders of a Radial Aircraft Piston Engine

Authors: Michal Geca, Grzegorz Baranski, Ksenia Siadkowska

Abstract:

This work is part of a project which aims to develop innovative power and control systems for the high-power aircraft piston engine ASz62IR. The electronically controlled ignition system being developed will reduce emissions of toxic compounds as a result of lower fuel consumption, optimized combustion, and the engine's capability of efficient combustion of ecological fuels. The tested unit is an air-cooled four-stroke gasoline engine with 9 cylinders in a radial arrangement, mechanically charged by a radial compressor driven by the engine crankshaft. The total engine displacement is 29.87 dm3 and the compression ratio is 6.4:1. The maximum take-off power is 1000 HP at 2200 rpm, and the maximum fuel consumption is 280 kg/h. The engine powers aircraft such as the An-2, M-18 "Dromader", DHC-3 "Otter", DC-3 "Dakota", GAF-125 "Hawk" and Y5. One of the main problems of this engine is the imbalanced operation of its cylinders: non-uniformity in individual cylinders results in non-uniformity of their work. In a radial engine, the cylinder arrangement means that the mixture moves either with the direction of gravity (lower cylinders) or against it (upper cylinders). Preliminary tests confirmed the uneven operation of individual cylinders; the phenomenon is most intense at low speed and is visible in the cylinder pressure waveforms. Two studies were therefore conducted to determine the impact of this phenomenon on engine performance: simulation and real tests. A simplified simulation was conducted on an element of the intake system coated with a fuel film. The study shows that gravity affects the movement of the fuel film inside the radial engine intake channels: in both the lower and the upper inlet channels the film flows downwards, because gravity assists the movement of the film in the lower cylinder channels and opposes it in the upper cylinder channels. Real tests on the ASz62IR aircraft engine were conducted under transient conditions (rapid changes of the excess air ratio in each cylinder). The theoretical and actual masses of fuel reaching the cylinders were calculated, and on this basis the fuel evaporation factors "x" were determined. A simplified model of the fuel supply to the cylinder was therefore adopted; the model includes the fuel film time constant τ, the number of engine cycles γ over which non-evaporated fuel is transported along the intake pipe, and the time Δt between consecutive cycles. The results of the identification of the model parameters are presented as radar charts, which show the average decreases and increases in injection time and the average values for both directions of the transient. These studies showed that the position of a cylinder changes the formation of the fuel-air mixture and thus the combustion process. Based on the simulation and experimental results, it was possible to develop individual ignition control algorithms. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
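
Wall-film behaviour of the kind described above is often captured with a simple X-tau (Aquino-type) fuel-film model; the sketch below is a generic illustration of that class of model with made-up parameter values, not the parameters identified for the ASz62IR.

```python
# Toy X-tau fuel-film model: a fraction X of each injected fuel mass is
# deposited on the intake-port wall film, which evaporates with time
# constant tau; the remainder reaches the cylinder directly.
# All parameter values are illustrative, not the identified engine values.
import math

def delivered_fuel(m_injected, X=0.3, tau=0.08, dt=0.054):
    """Per-cycle fuel mass reaching the cylinder for a list of injected masses."""
    film = 0.0
    out = []
    for m_inj in m_injected:
        evaporated = film * (1.0 - math.exp(-dt / tau))  # film evaporation over one cycle
        film = film - evaporated + X * m_inj             # film mass balance
        out.append((1.0 - X) * m_inj + evaporated)       # direct fraction + evaporated film
    return out

# A step change in injection shows the delayed cylinder response caused by
# the film dynamics, qualitatively similar to the transient tests described.
masses = [10.0] * 5 + [14.0] * 10                        # mg per cycle, hypothetical
print([round(m, 2) for m in delivered_fuel(masses)])
```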

Keywords: radial engine, ignition system, non-uniformity, combustion process

Procedia PDF Downloads 341
286 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms

Authors: Joy Chadwick

Abstract:

There is no doubt that classrooms of today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in navigating the translation of theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences as connected to inclusive practices. Threshold concept theory suggests that within all disciplinary fields, there exists particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding and deep learning can occur. The threshold concept framework provided a lens to examine teacher candidate struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine course reflective journals and six follow up one-to-one semi structured interviews. The journals and interview transcripts were coded and themed using NVivo software. Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidate understanding as connected to teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they did not feel they had the knowledge, skill, or time to meet the needs of the students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that through their course work and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. Research findings have implications for teacher education programs and for understanding threshold concepts theory as connected to practice-based learning experiences.

Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge

Procedia PDF Downloads 53
285 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach

Authors: Dominik Kowitzke

Abstract:

Housing construction has direct and indirect environmental impacts, caused in particular by soil sealing and the grey energy embodied in construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions such as metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing stock is more sustainable than constructing new housing units. In this context, the phenomenon of so-called over-housing in empty nest households seems worth investigating for its potential to free living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs when no space adjustment takes place at the household lifecycle stage at which children move out from home, so that the space formerly created for the offspring is from then on under-utilized. Although in some cases housing space consumption might actually meet households' equilibrium preferences, space-wise adjustments to the living situation frequently do not take place due to transaction or information costs, habit formation, or government interventions that increase the cost of relocation, such as real estate transfer taxes or tenant protection laws that keep tenure rents below the market price. Moreover, many detached houses are not designed in a way that would allow freed-up space to be rented out in the long term. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty nest households and a comparison group of households that never had children. The average difference in living space is estimated with a linear regression model that regresses the response variable, living space, on a two-level categorical variable distinguishing the two groups of household types, plus further controls. This difference is taken to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, households that do move after the children have left home, despite the market frictions impairing relocation, tend to reduce their living space. In a next step, the total under-utilized space in empty nests is estimated only for areas in Germany with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that in a perfect market, with empty nest households consuming their equilibrium demand for housing space, dwelling construction in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalent units, related to the average construction of detached or multi-family houses. The study thus provides information on the amount of under-utilized space inside dwellings, which is missing from public data, and further estimates the external effect of over-housing in environmental terms.
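
A minimal sketch of the estimation idea, regressing living space on an empty-nest indicator plus controls, is given below; the variable names and the synthetic data are placeholders, not the survey variables used in the study.

```python
# Sketch: estimate the average living-space difference between empty-nest
# households and never-had-children households with an OLS dummy regression.
# Data below are synthetic placeholders, not the socio-economic survey data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
empty_nest = rng.integers(0, 2, n)          # 1 = empty nest, 0 = comparison group
income = rng.normal(3000, 800, n)           # control variable (hypothetical)
hh_size = rng.integers(1, 4, n)             # control variable (hypothetical)
space = 60 + 25 * empty_nest + 0.01 * income + 8 * hh_size + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), empty_nest, income, hh_size])
beta, *_ = np.linalg.lstsq(X, space, rcond=None)
print(f"estimated empty-nest living-space difference: {beta[1]:.1f} m^2")

# Extrapolation step (illustrative): total under-utilised space in the population
n_empty_nests_in_population = 1_000_000     # hypothetical count
print(f"total under-utilised space: {beta[1] * n_empty_nests_in_population:,.0f} m^2")
```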

Keywords: empty nests, environment, Germany, households, over housing

Procedia PDF Downloads 149
284 A Novel Upregulated circ_0032746 Sponging miR-4270 Promotes the Proliferation and Migration of Esophageal Squamous Cell Carcinoma

Authors: Sachin Mulmi Shrestha, Xin Fang, Hui Ye, Lihua Ren, Qinghua Ji, Ruihua Shi

Abstract:

Background: Esophageal squamous cell carcinoma (ESCC) is a tumor arising from esophageal epithelial cells and is one of the major disease subtypes in Asian countries, including China. Esophageal cancer ranks seventh in incidence according to GLOBOCAN 2020 data. The pathogenesis is still not well understood, as much of the molecular and genetic basis of esophageal carcinogenesis has yet to be clearly elucidated. Circular RNAs are RNA molecules formed by back-splicing, in which the 3′ and 5′ ends are covalently joined, rather than by canonical splicing, and recent data suggest that circular RNAs can sponge miRNAs and are enriched with functional miRNA binding sites. We therefore studied the mechanism of a circular RNA, its biological function, and its relationship with a microRNA in the carcinogenesis of ESCC. Methods: Four pairs of normal and esophageal cancer tissues were collected at Zhongda Hospital, affiliated with Southeast University, and high-throughput RNA sequencing was performed. The results revealed that circ_0032746 was upregulated, so we selected it for further study. The backsplice junction of the circRNA was validated by Sanger sequencing, and its stability was determined by an RNase R assay. The binding sites between the circRNA and the microRNA were predicted with the CircInteractome, miRanda, and RNAhybrid databases. The circRNA was silenced by siRNA and then by lentivirus, and the regulatory axis circ_0032746/miR-4270 was validated by shRNA, mimic, and inhibitor transfection. In vitro experiments were then performed to assess the role of circ_0032746 in proliferation (CCK-8 and colony formation assays), migration and invasion (Transwell assays), and apoptosis of ESCC. Results: The upregulation of circ_0032746 was validated by qPCR in 9 pairs of tissues and 5 cell lines, which showed high expression that was statistically significant (P<0.005). Circ_0032746 was silenced by shRNA, which produced significant knockdown of expression in the KYSE-30 and TE-1 cell lines compared with controls. Nuclear and cytoplasmic RNA fractionation showed the cytoplasmic localization of circ_0032746. The sponging of miR-4270 was validated by co-transfection of sh-circ_0032746 with the mimic or inhibitor: transfection with the mimic decreased the expression of circ_0032746, whereas the inhibitor reversed this effect. In vitro experiments showed that silencing circ_0032746 inhibited proliferation, migration, and invasion compared with the negative control group, and apoptosis was higher in the knockdown group than in the control group. Furthermore, 11 common microRNA target mRNAs were predicted by the TargetScan, miRTarBase, and miRanda databases, which may play a further role in the pathogenesis. Conclusion: Our results show that the novel circ_0032746 is upregulated in ESCC and contributes to its oncogenicity. Silencing circ_0032746 inhibits the proliferation and migration of ESCC while increasing the apoptosis of cancer cells. Hence, circ_0032746 acts as an oncogene in ESCC by sponging miR-4270 and could be a potential biomarker for the diagnosis of ESCC in the future.
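
qPCR fold changes of the kind reported above are commonly computed with the 2^-ΔΔCt method; the abstract does not state which quantification method was actually used, so the sketch below, with made-up Ct values and reference gene, is purely illustrative.

```python
# Illustrative 2^-delta-delta-Ct relative-expression calculation, a common way
# of reporting qPCR fold changes. Values are hypothetical, and the abstract
# does not state which quantification method was actually used.

def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    d_ct_tumor = ct_target_tumor - ct_ref_tumor      # normalise to the reference gene
    d_ct_normal = ct_target_normal - ct_ref_normal
    dd_ct = d_ct_tumor - d_ct_normal
    return 2 ** (-dd_ct)

# circRNA Ct vs. a reference gene (e.g. GAPDH) in a paired tissue sample (made up)
print(f"fold change: {fold_change(24.1, 18.0, 27.3, 18.2):.2f}")
```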

Keywords: circRNA, esophageal squamous cell carcinoma, microRNA, upregulated

Procedia PDF Downloads 88
283 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment

Authors: Asif Ali, Daud Salim Faruquie

Abstract:

With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Scientists have established their knowledge in four areas, namely environmental, economic, social, and cultural, popularly termed the four pillars of sustainability. The environmental and economic aspects of sustainability have been rigorously researched and practiced, and a large volume of strong evidence of effectiveness has been found for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be established, as researchers and practitioners are developing and experimenting with methods across the globe. The present research therefore aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology: it included all populations; popular sustainability practices including walkability/cycle tracks, social/recreational spaces, privacy, health and human services, and barrier-free built environments; comparators of 'before' and 'after', 'with' and 'without', and 'more' and 'less'; and outcomes including social well-being, cultural co-existence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The literature search covered major electronic databases, search websites, organizational resources, the Directory of Open Access Journals, and subscribed journals; grey literature, however, was not included. Inclusion criteria filtered studies on the basis of research designs such as total randomization, quasi-randomization, cluster randomization, observational or single studies, and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected. Data extraction, critical appraisal, and evidence synthesis were carried out using customized tabulation, a reference manager, and the CASP tool. Partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis described the impact of the targeted practices on health, behavioural, and social dimensions. Objectivity in the measurement of health outcomes facilitated quantitative synthesis of studies highlighting the impact of sustainability methods on physical activity, body mass index, perinatal outcomes, and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, wellbeing, habitability, and families' social processes. The synthesized evidence indicates only slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and fragmented implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated.
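
Where quantitative synthesis was possible, pooled effects of the kind mentioned above are typically obtained by inverse-variance weighting; the minimal fixed-effect sketch below uses placeholder study estimates, not the effect sizes of the studies included in the review.

```python
# Minimal fixed-effect meta-analysis sketch: pool study effect sizes by
# inverse-variance weighting. Effect estimates and standard errors below are
# placeholders, not the studies included in the review.
import math

effects = [0.20, 0.35, 0.10, 0.28]   # e.g. standardised mean differences (hypothetical)
ses     = [0.10, 0.15, 0.08, 0.12]   # corresponding standard errors (hypothetical)

weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f})")
```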

Keywords: built environment, cultural sustainability, social sustainability, sustainable architecture

Procedia PDF Downloads 385
282 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of motions involved in the performance. We use our experience and concepts to make a correct recognition of the actions. Although building the action concepts is a life-long process, which is repeated throughout life, we are very efficient in applying our learned concepts in analyzing motions and recognizing actions. Experiments on the subjects observing the actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, hierarchical action recognition architecture is proposed by using growing grid layers. The first-layer growing grid receives the pre-processed data of consecutive 3D postures of joint positions and applies some heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives action pattern vectors to create time-invariant vectors of key elicited activations. Time-invariant vectors are sent to second-layer growing grid for categorization. This grid creates the clusters representing the actions. Finally, one-layer neural network developed by a delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset. There are actions performed by using different parts of human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw. The growing grid architecture was trained by applying several random selections of generalization test data fed to the system during on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison analysis between the performance of growing grid architecture and self-organizing map (SOM) architecture in terms of accuracy and learning speed show that the growing grid architecture is superior to the SOM architecture in action recognition task. The SOM architecture completes learning the same dataset of actions in around 150 epochs for each training of the first-layer SOM while it takes 1200 epochs for each training of the second-layer SOM and it achieves the average recognition accuracy of 90% for generalization test data. In summary, using the growing grid network preserves the fundamental features of SOMs, such as topographic organization of neurons, lateral interactions, the abilities of unsupervised learning and representing high dimensional input space in the lower dimensional maps. The architecture also benefits from an automatic size setting mechanism resulting in higher flexibility and robustness. Moreover, by utilizing growing grids the system automatically obtains a prior knowledge of input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
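
To make the growth mechanism more concrete, the sketch below shows a heavily simplified growing-grid layer (no hierarchy, no ordered-vector layer, row insertion only); the data, learning rates, and growth schedule are placeholders and do not reproduce the architecture evaluated in the paper.

```python
# Simplified growing-grid sketch: a rectangular SOM-like map that starts at
# 2x2 and inserts a new row next to the neuron with the largest accumulated
# quantisation error. Data and parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 3))            # stand-in for pre-processed posture vectors

W = rng.normal(size=(2, 2, data.shape[1]))   # initial 2x2 grid of weight vectors
err = np.zeros(W.shape[:2])

def bmu(x):
    d = np.linalg.norm(W - x, axis=2)        # distance of x to every grid neuron
    return np.unravel_index(np.argmin(d), d.shape)

for epoch in range(20):
    for x in data:
        r, c = bmu(x)
        err[r, c] += np.linalg.norm(W[r, c] - x)
        # update the winner and its direct grid neighbours with fixed rates
        for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
            rr, cc = r + dr, c + dc
            if 0 <= rr < W.shape[0] and 0 <= cc < W.shape[1]:
                lr = 0.1 if (dr, dc) == (0, 0) else 0.02
                W[rr, cc] += lr * (x - W[rr, cc])
    # growth phase: insert a new row after the worst neuron's row,
    # interpolating its weights between the neighbouring rows
    if epoch % 5 == 4 and W.shape[0] < 10:
        r, c = np.unravel_index(np.argmax(err), err.shape)
        r2 = min(r + 1, W.shape[0] - 1)
        new_row = 0.5 * (W[r] + W[r2])
        W = np.insert(W, r + 1, new_row, axis=0)
        err = np.zeros(W.shape[:2])

print("final grid size:", W.shape[:2])
```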

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 138
281 Coming Closer to Communities of Practice through Situated Learning: The Case Study of Polish-English, English-Polish Undergraduate BA Level Language for Specific Purposes of Translation Class

Authors: Marta Lisowska

Abstract:

The growing trend of market specialization imposes upon translators the need for proficiency in the working knowledge of specialist discourse. The notion of specialization differs from a broad general category to a highly specialized narrow field. The specialised discourse is used in the channel of communication based upon distinctive features typical for communities of practice whose co-existence is codified and hermetically locked against outsiders. Consequently, any translator deprived of professional discourse competence and social skills is incapable of providing competent translation product from source language into target language. In this paper, we report on research that explores the pedagogical practices aiming to bridge the dichotomy between the professionals and the specialist translators, while accounting for the reality of the world of professional communities entered by undergraduates on two levels: the text-based generic, and the social one. Drawing from the functional social constructivist approach, seen here as situated learning, this paper reports on the case of English-Polish, Polish-English undergraduate BA Level LSP of law translation class run in line with the simulated classroom-based and the reality-based (apprenticeship) approach. This blended method serves the purpose of introducing the young trainees to the professional world. The research provides new insights into how the LSP translation undergraduates become legitimized through discursive and social participation and engagement. The undergraduates, situated peripherally at the outset, experience their own transformation towards becoming members of these professional groups. With subjective evaluation, the trainees take a stance on this dual mode class and development of their skills. Comparing and contrasting their own work done in line with two models of translation teaching: authentic and near-authentic, the undergraduates answer research questions devised by a questionnaire survey The responses take us closer to how students feel about their LSP translation competence development. The major findings show how the trainees perceive the benefits and hardships of their functional translation class. In terms of skills, they related to communication as the most enhanced one; they highly valued the fact of being ‘exposed’ to a variety of texts (cf. multi literalism), team work, learning how to schedule work, IT skills boost and the ability to learn how to work individually. Another finding indicates that students struggled most with specialized language, and co-working with other students. The short-term research shows the momentum when the undergraduate LSP translation trainees entered the path of transformation i.e. gained consciousness of ‘how it is’ to be a participant-translator of real-life communities of practice, gaining pragmatic dint of the social and linguistic skills understood here as discursive competence (text > genre > discourse > professional practice). The undergraduates need to be aware of the work they have to do and challenges they are to face before arriving at the expert level of professional translation competence.

Keywords: communities of practice in LSP translation teaching, learning LSP translation as situated experience, peripheral participation, professional discourse for LSP translation teaching, professional translation competence

Procedia PDF Downloads 81
280 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP

Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang

Abstract:

Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinders ecological modeling. ClimateAP, a software developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture. However, its adoption remains limited. This study aims to confirm the validity of biologically relevant variable data generated by ClimateAP during the normal climate period through comparison with the currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP in comparison with the commonly used gridded data from WorldClim1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using Adjusted R-Squared and Root Mean Squared Error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and Topographic Position Index. Univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for the climatic niche modeling of Cunninghamia lanceolata. A comparative analysis of the prediction accuracies of RF models constructed with distinct climate data sources was conducted to evaluate their relative effectiveness. Biologically relevant variables were obtained from three unpublished Chinese meteorological datasets. ClimateAPv3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases with 3,745 unique points. ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of monthly maximum, minimum, average temperature, and precipitation variances, respectively. It outperforms WorldClim in 37 biologically relevant variables with lower RMSE values. ClimateAP achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher Adjusted R-squared values across all landforms for precipitation. ClimateAP's temperature data yields lower Adjusted R-squared values than gridded data in high-elevation, rugged, and mountainous areas but achieves higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP is validated based on evaluations using observations from weather stations. The use of ClimateAP leads to an improvement in data quality, especially in non-mountainous regions. The results also suggest that using biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience compared to using gridded data.
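
The evaluation logic, regressing station observations on each prediction source and comparing Adjusted R-squared and RMSE, can be sketched as follows; the arrays are synthetic stand-ins for the station data, not the actual ClimateAP or WorldClim values.

```python
# Sketch of the per-variable evaluation: regress station observations on a
# predictor (e.g. ClimateAP or WorldClim values at the station locations) and
# compare Adjusted R-squared and RMSE. Arrays below are synthetic stand-ins.
import numpy as np

def evaluate(obs, pred, n_params=2):
    X = np.column_stack([np.ones_like(pred), pred])     # intercept + predictor
    beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((obs - fitted) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    n = len(obs)
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_params)    # penalise for model size
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return adj_r2, rmse

rng = np.random.default_rng(0)
obs = rng.normal(15, 8, 2366)                           # e.g. monthly mean temperature
pred_a = obs + rng.normal(0, 1.0, obs.size)             # "source A"-like predictions
pred_b = obs + rng.normal(0, 2.0, obs.size)             # "source B"-like predictions
for name, pred in [("source A", pred_a), ("source B", pred_b)]:
    adj_r2, rmse = evaluate(obs, pred)
    print(f"{name}: adjusted R^2 = {adj_r2:.3f}, RMSE = {rmse:.2f}")
```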

Keywords: climate data validation, data quality, Asia pacific climate, climatic niche modeling, random forest models, tree species

Procedia PDF Downloads 50
279 Low-carbon Footprint Diluents in Solvent Extraction for Lithium-ion Battery Recycling

Authors: Abdoulaye Maihatchi Ahamed, Zubin Arora, Benjamin Swobada, Jean-yves Lansot, Alexandre Chagnes

Abstract:

Lithium-ion batteries (LiBs) are the technology of choice for the development of electric vehicles, but many challenges remain, including the development of positive electrode materials exhibiting high cyclability, high energy density, and low environmental impact. For the latter, LiBs must be manufactured in a circular approach by developing appropriate strategies to reuse and recycle them. Presently, the recycling of LiBs is carried out by the pyrometallurgical route, but more and more processes implement, or will implement, the hydrometallurgical route or a combination of pyrometallurgical and hydrometallurgical operations. After the black mass is produced by mineral processing, the hydrometallurgical process consists in leaching the black mass in order to take up the metals contained in the cathodic material. These metals are then extracted selectively by liquid-liquid extraction, solid-liquid extraction, and/or precipitation stages. Liquid-liquid extraction combined with precipitation/crystallization steps is the most commonly implemented operation in LiB recycling processes to selectively extract copper, aluminum, cobalt, nickel, manganese, and lithium from the leaching solution and precipitate these metals as high-grade sulfate or carbonate salts. Liquid-liquid extraction consists in contacting an organic solvent with an aqueous feed solution containing several metals, including the targeted metal(s) to be extracted. The organic phase is immiscible with the aqueous phase and is composed of an extractant, which extracts the target metals, and a diluent, which is usually an aliphatic kerosene produced by the petroleum industry. Sometimes a phase modifier is added to the formulation of the extraction solvent to avoid third-phase formation. The extraction properties of the solvent do not depend only on the chemical structure of the extractant; they may also depend on the nature of the diluent. Indeed, interactions involving the diluent can influence, to a greater or lesser extent, the interactions between extractant molecules as well as the extractant-diluent interactions. Only a few studies in the literature have addressed the influence of the diluent on extraction properties, while many studies have focused on the effect of the extractant. Recently, new low-carbon-footprint aliphatic diluents were produced by catalytic dearomatisation and distillation of bio-based oil. This study investigates the influence of the nature of the diluent on the extraction properties of three extractants towards cobalt, nickel, manganese, copper, aluminum, and lithium: Cyanex®272 for nickel-cobalt separation, DEHPA for manganese extraction, and Acorga M5640 for copper extraction. The diluents used in the formulation of the extraction solvents are (i) low-odor aliphatic kerosenes produced by the petroleum industry (ELIXORE 180, ELIXORE 230, ELIXORE 205, and ISANE IP 175) and (ii) bio-sourced aliphatic diluents (DEV 2138, DEV 2139, DEV 1763, DEV 2160, DEV 2161 and DEV 2063). After discussing the effect of the diluents on the extraction properties, this contribution will address the development of a low-carbon-footprint process based on the use of the best bio-sourced diluent for the production of high-grade cobalt sulfate, nickel sulfate, manganese sulfate, and lithium carbonate, as well as copper metal.
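
The effect of a diluent is usually quantified through the metal distribution ratio and the separation factor between two metals; a minimal calculation with hypothetical phase concentrations (not measured values from this work) is sketched below.

```python
# Distribution ratio D = [M]_org / [M]_aq after extraction, and separation
# factor beta = D_Co / D_Ni, the usual figures of merit when comparing
# extraction solvents and diluents. Concentrations below are hypothetical.

def distribution_ratio(c_org, c_aq):
    return c_org / c_aq

D_Co = distribution_ratio(c_org=4.5, c_aq=0.5)    # cobalt, g/L in each phase (made up)
D_Ni = distribution_ratio(c_org=0.2, c_aq=4.8)    # nickel, g/L in each phase (made up)
beta_Co_Ni = D_Co / D_Ni
print(f"D(Co) = {D_Co:.2f}, D(Ni) = {D_Ni:.3f}, separation factor = {beta_Co_Ni:.0f}")
```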

Keywords: diluent, hydrometallurgy, lithium-ion battery, recycling

Procedia PDF Downloads 66
278 A Case Report: The Role of Gut Directed Hypnotherapy in Resolution of Irritable Bowel Syndrome in a Medication Refractory Pediatric Male Patient

Authors: Alok Bapatla, Pamela Lutting, Mariastella Serrano

Abstract:

Background: Irritable Bowel Syndrome (IBS) is a functional gastrointestinal disorder characterized by abdominal pain associated with altered bowel habits in the absence of an underlying organic cause. Although the exact etiology of IBS is not fully understood, one of the leading theories postulates a pathology within the brain-gut axis that leads to an overall increase in gastrointestinal sensitivity and pejorative changes in gastrointestinal motility. Research and clinical practice have shown that Gut-Directed Hypnotherapy (GDH) has a beneficial clinical role in improving mind-gut control and thereby comorbid conditions such as anxiety, abdominal pain, constipation, and diarrhea. Aims: This study presents a 17-year-old male with underlying anxiety and a one-year history of IBS, constipation-predominant subtype (IBS-C), who demonstrated impressive improvement of symptoms following GDH treatment after refractory trials of medications including bisacodyl, senna, docusate, magnesium citrate, lubiprostone, and linaclotide. Method: The patient was referred to a licensed clinical psychologist specializing in clinical hypnosis and cognitive-behavioral therapy (CBT), who implemented the standardized hypnosis protocol for IBS developed by Dr. Olafur S. Palsson, Psy.D., at the University of North Carolina at Chapel Hill. The hypnotherapy protocol consisted of seven weekly 45-minute sessions supplemented with a 20-minute audio recording to be listened to once daily. Outcome variables included the GAD-7, PHQ-9, and DCI-2, as well as self-ratings (ranging 0-10) for pain (intensity and frequency), emotional distress about IBS symptoms, and overall emotional distress. All variables were measured at intake, prior to administration of the hypnosis protocol, and at the conclusion of the hypnosis treatment. A retrospective IBS questionnaire (IBS Severity Scoring System) was also completed at the conclusion of the GDH treatment for pre- and post-test ratings of clinical symptoms. Results: The patient showed improvement in all outcome variables and self-ratings, including abdominal pain intensity, frequency of abdominal pain episodes, emotional distress relating to gut issues, depression, and anxiety. The IBS questionnaire showed a significant improvement from a severity score of 400 (defined as severe) prior to the GDH intervention to 55 (defined as complete resolution) four months after the last session. Questionnaire items that showed a significant score improvement included abdominal pain intensity, days of pain experienced per 10 days, satisfaction with bowel habits, and overall interference of IBS symptoms with life. Conclusion: This case supports the existing research literature showing that GDH has a significantly beneficial role in improving symptoms in patients with IBS. Emphasis is placed on the numerical results of the IBS questionnaire scoring, which reflect a patient who initially suffered from severe IBS with a failed response to multiple medications and who subsequently showed full and sustained resolution of symptoms.

Keywords: pediatrics, constipation, irritable bowel syndrome, hypnotherapy, gut-directed hypnosis

Procedia PDF Downloads 174
277 Navigating the Future: Evaluating the Market Potential and Drivers for High-Definition Mapping in the Autonomous Vehicle Era

Authors: Loha Hashimy, Isabella Castillo

Abstract:

In today's rapidly evolving technological landscape, the importance of precise navigation and mapping systems cannot be understated. As various sectors undergo transformative changes, the market potential for Advanced Mapping and Management Systems (AMMS) emerges as a critical focus area. The Galileo/GNSS-Based Autonomous Mobile Mapping System (GAMMS) project, specifically targeted toward high-definition mapping (HDM), endeavours to provide insights into this market within the broader context of the geomatics and navigation fields. With the growing integration of Autonomous Vehicles (AVs) into our transportation systems, the relevance and demand for sophisticated mapping solutions like HDM have become increasingly pertinent. The research employed a meticulous, lean, stepwise, and interconnected methodology to ensure a comprehensive assessment. Beginning with the identification of pivotal project results, the study progressed into a systematic market screening. This was complemented by an exhaustive desk research phase that delved into existing literature, data, and trends. To ensure the holistic validity of the findings, extensive consultations were conducted. Academia and industry experts provided invaluable insights through interviews, questionnaires, and surveys. This multi-faceted approach facilitated a layered analysis, juxtaposing secondary data with primary inputs, ensuring that the conclusions were both accurate and actionable. Our investigation unearthed a plethora of drivers steering the HD maps landscape. These ranged from technological leaps, nuanced market demands, and influential economic factors to overarching socio-political shifts. The meteoric rise of Autonomous Vehicles (AVs) and the shift towards app-based transportation solutions, such as Uber, stood out as significant market pull factors. A nuanced PESTEL analysis further enriched our understanding, shedding light on political, economic, social, technological, environmental, and legal facets influencing the HD maps market trajectory. Simultaneously, potential roadblocks were identified. Notable among these were barriers related to high initial costs, concerns around data quality, and the challenges posed by a fragmented and evolving regulatory landscape. The GAMMS project serves as a beacon, illuminating the vast opportunities that lie ahead for the HD mapping sector. It underscores the indispensable role of HDM in enhancing navigation, ensuring safety, and providing pinpoint, accurate location services. As our world becomes more interconnected and reliant on technology, HD maps emerge as a linchpin, bridging gaps and enabling seamless experiences. The research findings accentuate the imperative for stakeholders across industries to recognize and harness the potential of HD mapping, especially as we stand on the cusp of a transportation revolution heralded by Autonomous Vehicles and advanced geomatic solutions.

Keywords: high-definition mapping (HDM), autonomous vehicles, PESTEL analysis, market drivers

Procedia PDF Downloads 54
276 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling face unprecedented challenges due to pandemic pressure and also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven approaches can help to improve policies and to plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses the packing of services, using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (its class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The Chi-squared and Student's t significance tests are applied to data spanning the 39 years for which HSc data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that take, as the null hypothesis, that the target service's demands are statistically dependent on other demands; this linkage can be confirmed or rejected by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. Statistical tests confirm the ML couplings, making the predictions statistically meaningful and showing that a target service can be matched reliably to other services, while ML shows that these relationships can also be linear ones. Zero padding was used for missing years' records; it illustrated such relationships better, both for limited-year groups and over the entire span, offering long-term data visualizations, while the limited-year groups explained how well patient numbers can be related over short periods or change over time, as opposed to behaviors across more years. The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics as well as the Chi-squared and Student's t statistical tests. Co-plots and comparison tables for RF, CART, and LGR, together with p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and the statistical tests and their behavior at different learning ratios. The impact of k-NN, cross-correlation, and C-Means initial groupings is also studied over limited years and over the entire span. It was found that CART generally lagged behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding or by data irregularities or outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be well packed into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
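
The pairing of a significance test with an ML predictor described above can be sketched as follows; synthetic demand series, an arbitrary median-split discretisation, and a single candidate test stand in for the Scottish HSc data and the full set of tests used in the paper.

```python
# Sketch of the coupling idea: test whether a target service's demand class is
# statistically associated with another service's demand class (chi-squared),
# and if associated, predict the target class from candidate demands with
# logistic regression. Data are synthetic stand-ins for the HSc series.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
years = 39
candidate = rng.normal(size=(years, 3))                 # other services' demands
target = (candidate[:, 0] + 0.5 * candidate[:, 1]
          + rng.normal(scale=0.5, size=years))          # target service's demand

# discretise demands into classes (e.g. low/high ranges via a median split)
target_cls = (target > np.median(target)).astype(int)
cand_cls = (candidate[:, 0] > np.median(candidate[:, 0])).astype(int)

table = np.array([[np.sum((cand_cls == i) & (target_cls == j)) for j in (0, 1)]
                  for i in (0, 1)])
chi2, p, *_ = chi2_contingency(table)
print(f"chi-squared p-value for the association: {p:.4f}")

if p < 0.05:                                            # couple services only if associated
    model = LogisticRegression().fit(candidate, target_cls)
    auc = roc_auc_score(target_cls, model.predict_proba(candidate)[:, 1])
    print(f"in-sample AUC of the logistic predictor: {auc:.3f}")
```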

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 207
275 Inverse Problem Method for Microwave Intrabody Medical Imaging

Authors: J. Chamorro-Servent, S. Tassani, M. A. Gonzalez-Ballester, L. J. Roca, J. Romeu, O. Camara

Abstract:

Electromagnetic and microwave imaging (MWI) have been used in medical imaging in recent years, with breast cancer and stroke detection or monitoring being the most common applications. In those applications, the subject or zone to observe is surrounded by a number of antennas, and the Nyquist criterion can be satisfied. Additionally, the space between the antennas (transmitting and receiving the electromagnetic fields) and the zone to study can be prepared as a homogeneous scenario. However, this may differ in other cases, such as intracardiac catheters, stomach monitoring devices, pelvic organ systems, liver ablation monitoring devices, or uterine fibroid ablation systems. In this work, we analyzed different MWI algorithms to find the most suitable method for dealing with an intrabody scenario. Due to the space limitations usually encountered in those applications, the device would have a cylindrical configuration with a maximum of eight transmitter and eight receiver antennas. This, together with the positioning of the device inside a body tract, imposes additional constraints on the choice of a reconstruction method; for instance, it prevents the use of well-known algorithms such as filtered backpropagation for diffraction tomography (due to the unusual configuration, with probes enclosed by the imaging region). Furthermore, the difficulty of simulating a realistic non-homogeneous background inside the body (due to incomplete knowledge of the dielectric properties of the other tissues between the antennas’ position and the zone to observe) also prevents the use of Born and Rytov algorithms, given their limitations with a heterogeneous background. Instead, we decided to use a time-reversed algorithm (mostly used in geophysics) due to its characteristics of ignoring heterogeneities in the background medium and of focusing its generated field onto the scatterers. Therefore, a 2D time-reversed finite-difference time-domain method was developed based on the time-reversed approach for microwave breast cancer detection. Simultaneously, an in-silico testbed was developed to compare ground-truth dielectric properties with the corresponding microwave imaging reconstructions. Forward and inverse problems were computed varying: the frequency used, related to a small zone to observe (7, 7.5 and 8 GHz); a small polyp diameter (5, 7 and 10 mm); two polyp positions with respect to the closest antenna (aligned or misaligned); and the (transmitters-to-receivers) antenna combination used for the reconstruction (1-1, 8-1, 8-8 or 8-3). Results indicate that when using the existing time-reversed method for breast cancer with the different combinations of transmitters and receivers, false positives appeared due to the high degrees of freedom and the unusual configuration (and the possible violation of the Nyquist criterion). The false positives found in the 8-1 and 8-8 combinations were greatly reduced with the 1-1 and 8-3 combinations, the 8-3 configuration (three neighboring receivers at each time) being the most suitable. The 8-3 configuration creates a reduced region-of-interest problem, decreasing the ill-posedness of the inverse problem. To conclude, the proposed algorithm overcomes the main limitations of the described intrabody application, successfully detecting the angular position of targets inside the body tract.
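To make the time-reversal principle concrete, the toy sketch below is a simplified scalar 2D illustration under assumed parameters, not the authors' electromagnetic solver: a pulse emitted from a scatterer is recorded at a ring of eight receivers, the recordings are re-injected in reversed time order, and the accumulated field energy is inspected for the refocusing point. Grid size, receiver ring, source position, and the use of periodic boundaries are all simplifying assumptions.

```python
import numpy as np

n, steps, c, dx = 128, 200, 1.0, 1.0
dt = 0.5 * dx / c                               # within the 2D CFL limit (c*dt/dx <= 1/sqrt(2))
centre, radius = 64, 40
ring = [(centre + int(radius * np.cos(a)), centre + int(radius * np.sin(a)))
        for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]   # 8 receiver antennas
src = (72, 58)                                  # scatterer acting as a secondary source

def step(u, u_prev, injections):
    """One leapfrog update of the 2D scalar wave equation (periodic boundaries for brevity)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx ** 2
    u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
    for (i, j), val in injections:
        u_next[i, j] += val
    return u_next, u

# Forward pass: a Gaussian pulse leaves the scatterer and is recorded at the ring
u = np.zeros((n, n)); u_prev = np.zeros((n, n))
records = np.zeros((len(ring), steps))
for t in range(steps):
    u, u_prev = step(u, u_prev, [(src, np.exp(-((t - 40) / 10.0) ** 2))])
    for k, (i, j) in enumerate(ring):
        records[k, t] = u[i, j]

# Time-reversed pass: re-inject the recordings in reverse order from the receivers
u = np.zeros((n, n)); u_prev = np.zeros((n, n))
image = np.zeros((n, n))
for t in range(steps):
    u, u_prev = step(u, u_prev,
                     [((i, j), records[k, steps - 1 - t]) for k, (i, j) in enumerate(ring)])
    image += u ** 2                             # accumulate energy as a simple imaging condition

for i, j in ring:                               # ignore the injection points themselves
    image[i - 2:i + 3, j - 2:j + 3] = 0.0
print("brightest pixel:", np.unravel_index(np.argmax(image), image.shape), "true scatterer:", src)
```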

Keywords: FDTD, time-reversed, medical imaging, microwave imaging

Procedia PDF Downloads 102
274 Thai Cane Farmers' Responses to Sugar Policy Reforms: An Intentions Survey

Authors: Savita Tangwongkit, Chittur S Srinivasan, Philip J. Jones

Abstract:

Thailand has become the world’s fourth largest sugarcane producer and second largest sugar exporter. While there have been a number of drivers of this growth, the primary one has been wide-ranging government support measures. Recently, the Thai government has emphasized the need for policy reform as part of a broader industry restructuring to bring the sector up to date with current and future developments in the international sugar market. Because of the sector's historical dependence on government support, any such reform is likely to have a very significant impact on the fortunes of Thai cane farmers. This study explores the impact of three policy scenarios, representing a spectrum of policy approaches, on Thai cane producers. These reform scenarios were designed in consultation with policy makers and academics working in the cane sector. Scenario 1 captures the current ‘government proposal’ for policy reform. This scenario removes certain domestic production subsidies but seeks to maintain as much support as is permissible under current WTO rules. The second scenario, ‘protectionism’, maintains the current internal market producer supports but otherwise complies with international (WTO) commitments. Third, the ‘libertarian’ scenario removes all production support and market interventions, as well as trade and domestic consumption distortions. The most important driver of producer behaviour in all of the scenarios is the producer price of cane. The cane price is highest under the protectionism scenario, followed by the government proposal and libertarian scenarios, respectively. Likely producer responses to these three policy scenarios were determined by means of a large-scale survey of cane farmers. The sample was stratified by size group, and quotas were filled by size group and region. One scenario was presented to each of three sub-samples, each consisting of approximately 150 farmers. The total sample size was 462 farms. Data were collected by face-to-face interview between June and August 2019. There was a marked difference in farmer responses to the three scenarios. Farmers in the ‘protectionism’ scenario, which maintains the highest cane price, and those who farm larger cane areas are more likely to continue cane farming. The libertarian scenario is likely to result in the greatest losses in terms of cane production volume, broadly double those of the ‘protectionism’ scenario, primarily due to farmers quitting cane production altogether. Over half of the lost cane production volume comes from medium-sized farms, i.e., the largest and smallest producers are the most resilient. This result is likely due to the fact that the medium-size group is large enough to require hired labour but lacks the economies of scale of the largest farms. Across all size groups, the farms most heavily specialized in cane production, i.e., those devoting 26-50% of arable land to cane, are also the most vulnerable, with 70% of all farmers quitting cane production coming from this group. This investigation suggests that the cane price is the most significant determinant of farmer behaviour. It also suggests that where scenarios drive a significantly lower cane price, policy makers should target support towards mid-sized producers, with policies that encourage efficiency gains and diversification into alternative agricultural crops.

Keywords: farmer intentions, farm survey, policy reform, Thai cane production

Procedia PDF Downloads 92
273 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing-theory parameters to describe these transitions. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, re-evaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep the response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that cannot be finished in time, to prevent resource saturation. When the load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems: the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario. All methodologies will discard requests if the burst is rapid enough. This scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required again. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load with less business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
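As a rough sketch of how such a reactive loop can be organised (an illustration with assumed parameter values and names, not the paper's model), the following MAPE-K-style scaler samples the incoming request rate each period, plans an instance count from the per-instance capacity plus a headroom factor for load acceleration, and executes changes only outside a cooldown window.

```python
import math
from dataclasses import dataclass

@dataclass
class ScalerConfig:
    capacity_per_instance: float = 50.0   # requests/s one instance can handle (assumed)
    sampling_period: float = 5.0          # seconds between MAPE iterations (assumed)
    cooldown: float = 30.0                # seconds to wait after any scaling action (assumed)
    headroom: float = 1.2                 # over-provisioning factor for growth acceleration (assumed)

class ReactiveScaler:
    """Minimal MAPE-K loop: Monitor the rate, Analyze saturation, Plan a target, Execute it."""

    def __init__(self, cfg: ScalerConfig):
        self.cfg = cfg
        self.instances = 1
        self.last_action = float("-inf")  # Knowledge: time of the last scaling action

    def mape_step(self, now: float, incoming_rate: float) -> int:
        needed = incoming_rate * self.cfg.headroom / self.cfg.capacity_per_instance
        target = max(1, math.ceil(needed))
        # Only act when the target changes and the cooldown has expired
        if target != self.instances and now - self.last_action >= self.cfg.cooldown:
            self.instances = target
            self.last_action = now
        return self.instances

scaler = ReactiveScaler(ScalerConfig())
for k, rate in enumerate([40, 80, 160, 320, 300, 120, 60]):   # synthetic burst-then-drop load
    t = k * scaler.cfg.sampling_period
    print(f"t={t:>4.0f}s rate={rate:>3} req/s -> instances={scaler.mape_step(t, rate)}")
```

Under this burst pattern the sketch also shows the cost of the cooldown period: once the first scale-out happens, later, larger bursts are ignored until the cooldown expires, which is one of the trade-offs the modelled methodology has to balance.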

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 73
272 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators

Authors: N. Naz, A. D. Domenico, M. N. Huda

Abstract:

Soft Robotics is a rapidly growing multidisciplinary field in which robots are fabricated from highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to external environments during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the Soft Pneumatic Actuator, also known as the SPA, remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. However, at present, most SPA fabrication is still based on traditional molding and casting techniques, in which the mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming, involves intensive manual labour, and limits repeatability and design accuracy. Recent advancements in the direct 3D printing of different soft materials can significantly reduce these repetitive manual tasks, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research work is to design and analyse the Soft Pneumatic Actuator (SPA) utilizing both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic filament. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated using a lab oven. The bending behaviour is observed experimentally at different air-compressor pressures to ensure uniform bending without any failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used. The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration, and strain distribution of the different hyperelastic materials after pressurization. The FEM analysis is carried out using Ansys Workbench software with a Yeoh 2nd-order hyperelastic material model. The FEM model includes large deformations, contact between surfaces, and gravity influences. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure meshes are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary is applied on the baseplate, and a static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated FEM results are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed-air source with a pressure gauge. A performance-based comparison study is carried out between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper, which has been tested for handling delicate objects.
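For reference, the reduced-polynomial strain-energy form commonly meant by a 2nd-order Yeoh-type model is stated and evaluated below as general background, not as material taken from the paper; the material constants and the incompressible uniaxial loading case are placeholder assumptions for illustration.

```python
# Reduced-polynomial (Yeoh-type) 2nd-order strain-energy density:
#   W = C10*(I1 - 3) + C20*(I1 - 3)**2,  with I1 the first invariant of the
# left Cauchy-Green deformation tensor. Constants below are placeholders.
def yeoh2_energy(stretch: float, c10: float = 0.02, c20: float = 0.002) -> float:
    """Strain energy (MPa) for an incompressible uniaxial stretch (J = 1)."""
    i1 = stretch ** 2 + 2.0 / stretch          # lambda^2 + 2/lambda for uniaxial, incompressible
    return c10 * (i1 - 3.0) + c20 * (i1 - 3.0) ** 2

for lam in (1.0, 1.5, 2.0):
    print(f"stretch {lam}: W = {yeoh2_energy(lam):.5f} MPa")
```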

Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator

Procedia PDF Downloads 68
271 Forming-Free Resistive Switching Effect in ZnₓTiᵧHfzOᵢ Nanocomposite Thin Films for Neuromorphic Systems Manufacturing

Authors: Vladimir Smirnov, Roman Tominov, Vadim Avilov, Oleg Ageev

Abstract:

The creation of a new generation of micro- and nanoelectronic elements opens up broad possibilities for improving the parameters of electronic devices, as well as for developing neuromorphic computing systems. Interest in the latter is growing every year, which is explained by the need to solve problems related to the unstructured classification of data, the construction of self-adaptive systems, and pattern recognition. However, for its technical implementation, a number of conditions on the basic parameters of electronic memory must be fulfilled, such as non-volatility, multi-bit capability, high integration density, and low power consumption. Several types of memory are available in the electronics industry (MRAM, FeRAM, PRAM, ReRAM), among which non-volatile resistive memory (ReRAM) stands out due to its multi-bit property, which is necessary for manufacturing neuromorphic systems. ReRAM is based on the effect of resistive switching – a change in the resistance of an oxide film between a low-resistance state (LRS) and a high-resistance state (HRS) under an applied electric field. One method for the technical implementation of neuromorphic systems is cross-bar structures, which consist of ReRAM cells interconnected by crossed data buses. Such a structure imitates the architecture of the biological brain, which contains low-power computing elements (neurons) connected by special channels (synapses). The choice of the ReRAM oxide film material is an important task that determines the characteristics of the future neuromorphic system. An analysis of the literature showed that many metal oxides (TiO2, ZnO, NiO, ZrO2, HfO2) exhibit a resistive switching effect. It is worth noting that manufacturing nanocomposites based on these materials makes it possible to emphasize the advantages and mitigate the disadvantages of each material. Therefore, the ZnₓTiᵧHfzOᵢ nanocomposite was chosen as the basis for manufacturing the neuromorphic structures. It is also worth noting that the ZnₓTiᵧHfzOᵢ nanocomposite does not require electroforming, a process that degrades the parameters of the formed ReRAM elements. Currently, this material is not well studied; therefore, studying the resistive switching effect in forming-free ZnₓTiᵧHfzOᵢ nanocomposite is an important task and the goal of this work. A forming-free nanocomposite ZnₓTiᵧHfzOᵢ thin film was grown by pulsed laser deposition (Pioneer 180, Neocera Co., USA) on a SiO2/TiN (40 nm) substrate. Electrical measurements were carried out using a semiconductor characterization system (Keithley 4200-SCS, USA) with W probes. During the measurements, the TiN film was grounded. Analysis of the obtained current-voltage characteristics showed resistive switching from the HRS to the LRS at +1.87±0.12 V, and from the LRS to the HRS at -2.71±0.28 V. Endurance tests showed that the HRS was 283.21±32.12 kΩ and the LRS was 1.32±0.21 kΩ over 100 measurements. The HRS/LRS ratio was about 214.55 at a reading voltage of 0.6 V. The results can be useful for applying forming-free nanocomposite ZnₓTiᵧHfzOᵢ films in neuromorphic systems manufacturing. This work was supported by RFBR, research project № 19-29-03041 mk. The results were obtained using the equipment of the Research and Education Center «Nanotechnologies» of Southern Federal University.

Keywords: nanotechnology, nanocomposites, neuromorphic systems, RRAM, pulsed laser deposition, resistive switching effect

Procedia PDF Downloads 105
270 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity, and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or tuna species. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of stock status, which fisheries managers use to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity, and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver, and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in sex ratio was found in favour of females with fork length (LF) <100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar, from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF, using the vitellogenic-3 oocyte stage as the maturity threshold. Albacore spawn on average every 2.2 days within the spawning region during the spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight. Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww), with different proportions of phospholipids (PL), wax esters (WE), triacylglycerol (TAG), and sterol (ST). The highest TL was observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE, and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww); the liver acted as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females, without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. The TL results suggest that albacore could be viewed predominantly as a capital breeder, relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote the sustainability of the fishery.

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 239
269 Lentiviral-Based Novel Bicistronic Therapeutic Vaccine against Chronic Hepatitis B Induces Robust Immune Response

Authors: Mohamad F. Jamiluddin, Emeline Sarry, Ana Bejanariu, Cécile Bauche

Abstract:

Introduction: Over 360 million people are chronically infected with the hepatitis B virus (HBV), of whom 1 million die each year from HBV-associated liver cirrhosis or hepatocellular carcinoma. Current treatment options for chronic hepatitis B depend on interferon-α (IFNα) or nucleos(t)ide analogs, which control virus replication but rarely eliminate the virus. Treatment with PEG-IFNα leads to a sustained antiviral response in only one third of patients. After withdrawal of the drugs, a rebound of viremia is observed in the majority of patients. Furthermore, long-term treatment is associated with the appearance of drug-resistant HBV strains, which are often the cause of therapy failure. Among the new therapeutic avenues being developed, therapeutic vaccines aimed at inducing immune responses similar to those found in resolvers are of growing interest. The high prevalence of chronic hepatitis B necessitates the design of better vaccination strategies capable of eliciting a broad spectrum of cell-mediated immunity (CMI) and humoral immune responses that can control chronic hepatitis B. Induction of HBV-specific T cells and B cells by therapeutic vaccination may be an innovative strategy to overcome virus persistence. Lentiviral vectors developed and optimized by THERAVECTYS, due to their ability to transduce non-dividing cells, including dendritic cells, and to induce CMI responses, have demonstrated their effectiveness as vaccination tools. Method: To develop an HBV therapeutic vaccine that can induce a broad but specific immune response, we generated a recombinant lentiviral vector carrying an IRES (Internal Ribosome Entry Site)-containing bicistronic construct which allows the co-expression of two vaccine products, namely an HBV T-cell epitope vaccine and an HBV virus-like particle (VLP) vaccine. The HBV T-cell epitope vaccine consists of an immunodominant cluster of CD4 and CD8 epitopes, separated by spacers, derived from the HBV surface protein, HBV core, HBV X, and polymerase, while the HBV VLP vaccine is an HBV core protein-based chimeric VLP displaying surface-protein B-cell epitopes. In order to evaluate immunogenicity, mice were immunized with the lentiviral constructs by intramuscular injection. The T-cell and antibody immune responses to the two vaccine products were analyzed using an IFN-γ ELISpot assay and ELISA, respectively, to quantify the adaptive response to HBV antigens. Results: Following a single administration in mice, the lentiviral construct elicited robust antigen-specific IFN-γ responses to the encoded antigens. The HBV T-cell epitope vaccine demonstrated significantly higher T-cell immunogenicity than the HBV VLP vaccine. Importantly, we demonstrated by ELISA that antibodies were induced against both the HBV surface protein and the HBV core protein when mice were injected with the vaccine construct (p < 0.05). Conclusion: Our results highlight that THERAVECTYS lentiviral vectors may represent a powerful platform for immunization strategies against chronic hepatitis B. Our data suggest that the lentiviral vector-based bicistronic construct warrants further study, in combination with drugs or as a standalone antigen, as a therapeutic lentiviral-based HBV vaccine. The THERAVECTYS bicistronic HBV vaccine will be further evaluated in animal efficacy studies.

Keywords: chronic hepatitis B, lentiviral vectors, therapeutic vaccine, virus-like particle

Procedia PDF Downloads 313
268 Delivering User Context-Sensitive Service in M-Commerce: An Empirical Assessment of the Impact of Urgency on Mobile Service Design for Transactional Apps

Authors: Daniela Stephanie Kuenstle

Abstract:

Complex industries such as banking or insurance experience slow growth in mobile sales. While today’s mobile applications are sophisticated and enable location-based and personalized services, consumers prefer online or even face-to-face services to complete complex transactions. A possible reason for this reluctance is that the service provided within transactional mobile applications (apps) does not adequately correspond to users’ needs. Therefore, this paper examines the impact of the user context on mobile service (m-service) in m-commerce. Motivated by the potential which context-sensitive m-services hold for the future, the impact of temporal variations, as a dimension of user context, on m-service design is examined. In particular, the research question asks: Does consumer urgency function as a determinant of m-service composition in transactional apps by moderating the relation between m-service type and m-service success? Thus, the aim is to explore the moderating influence of urgency on m-service types, which include Technology Mediated Service and Technology Generated Service. While mobile applications generally comprise features of both service types, this thesis discusses whether unexpected urgency changes customer preferences for m-service types and how this consequently impacts overall m-service success, represented by purchase intention, loyalty intention, and service quality. An online experiment with a random sample of N=1311 participants was conducted. Participants were divided into four treatment groups varying in m-service type and urgency level. They were exposed to two different urgency scenarios (high/low) and two different app versions conveying either technology mediated or technology generated service. Subsequently, participants completed a questionnaire to measure the effectiveness of the manipulation as well as the dependent variables. The research model was tested for direct and moderating effects of m-service type and urgency on m-service success. Three two-way analyses of variance confirmed the significance of the main effects but demonstrated no significant moderation of urgency on m-service types. The analysis of the gathered data did not confirm a moderating effect of urgency between m-service type and service success. Yet, the findings propose an additive-effects model with the highest purchase and loyalty intention for Technology Generated Service under high urgency, while Technology Mediated Service under low urgency demonstrates the strongest effect on service quality. The results also indicate an antagonistic relation between service quality and purchase intention depending on the level of urgency. Although confirmation of the significance of this finding is required, it suggests that only service convenience, as one dimension of mobile service quality, delivers conditional value under high urgency. This suggests a curvilinear pattern of service quality in e-commerce. Overall, the paper illustrates the complex interplay of technology, user variables, and service design. With this, it contributes to a finer-grained understanding of the relation between m-service design and situation dependency. Moreover, the importance of delivering situational value with apps depending on user context is emphasized. Finally, the present study raises the need for continued research on the impact of situational variables on m-service design in order to develop more sophisticated m-services.
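The kind of analysis described above can be illustrated with a hedged sketch (not the study's actual code; the data below are simulated and the variable names are placeholders) of one two-way ANOVA testing the main effects of m-service type and urgency, and their interaction (the moderation term), on purchase intention.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 1311                                         # sample size matching the abstract
df = pd.DataFrame({
    "service_type": rng.choice(["mediated", "generated"], n),
    "urgency": rng.choice(["low", "high"], n),
})
# Simulated outcome with additive main effects and no built-in interaction
df["purchase_intention"] = (
    3.0
    + 0.4 * (df["service_type"] == "generated")
    + 0.3 * (df["urgency"] == "high")
    + rng.normal(0, 1, n)
)

model = ols("purchase_intention ~ C(service_type) * C(urgency)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))           # main-effect rows and the interaction row
```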

Keywords: mobile consumer behavior, mobile service design, mobile service success, self-service technology, situation dependency, user-context sensitivity

Procedia PDF Downloads 250
267 DSF Elements in High-Rise Timber Buildings

Authors: Miroslav Premrov, Andrej Štrukelj, Erika Kozem Šilih

Abstract:

The utilization of prefabricated timber-wall elements with double glazing, called double-skin façade (DSF) elements, represents an innovative structural approach in the context of new high-rise timber construction, combining sustainable solutions with improved energy efficiency and living quality. In addition to minimizing the energy needs of buildings, the design of modern buildings is also increasingly focused on optimal indoor comfort, in particular on sufficient natural light indoors. An optimally energy-designed building with an optimal layout of glazed areas around the building envelope represents great potential in modern timber construction. Usually, all these transparent façade elements are, for energy reasons, primarily asymmetrically oriented, and if they are considered non-resisting against horizontal load impacts, strong torsional effects can appear in the building. The problem of structural stability against strong horizontal load impacts in such modern timber buildings increases especially in the case of high-rise structures, where additional bracing elements have to be used. In such a case, special diagonal bracing systems or other bracing solutions with common timber wall elements have to be incorporated into the structure of the building to satisfy all the resisting requirements prescribed by the standards. However, such structural solutions are usually not environmentally friendly, do not contribute to improved living comfort, or are not accepted by architects at all. Consequently, there is a particular need to develop innovative load-bearing timber-glass wall elements which are at the same time environmentally friendly, can increase internal comfort in the building, and are also load-bearing. The newly developed load-bearing DSF elements can be a good answer to all these requirements. Timber-glass DSF wall elements consist of two transparent layers: a thermally insulated three-layered glass pane on the internal side and an additional single-layered glass pane on the external side of the wall. The two panes are separated by an air channel, which can be of any dimensions and can have a significant influence on the thermal insulation or acoustic response of such a wall element. Most previously published studies on DSF elements deal primarily with energy and LCA aspects and do not address structural problems. Previous studies based on experimental analysis and mathematical modelling have already presented the possible benefits of such load-bearing DSF elements, especially compared with previously developed load-bearing single-skin timber wall elements, but they have not yet been applied in any high-rise timber structure. Therefore, in the presented study, a specially selected 10-storey prefabricated timber building constructed in a cross-laminated timber (CLT) structural wall system is analyzed using the developed DSF elements, with the aim of increasing the lateral structural stability of the whole building. The results clearly highlight the importance of the load-bearing DSF elements, as their incorporation can have a significant impact on the overall behaviour of the structure through their influence on the stiffness properties. Taking these considerations into account is crucial to ensure compliance with seismic design codes and to improve the structural resilience of high-rise timber buildings.

Keywords: glass, high-rise buildings, numerical analysis, timber

Procedia PDF Downloads 24
266 Growth Patterns of Pyrite Crystals Studied by Electron Back Scatter Diffraction (EBSD)

Authors: Kirsten Techmer, Jan-Erik Rybak, Simon Rudolph

Abstract:

Naturally formed pyrites (FeS2) are frequent sulfides in sedimentary and metamorphic rocks. Growth textures of idiomorphic pyrite assemblages reflect the conditions during their formation in the geologic sequence; furthermore, local texture analyses of the growth patterns of pyrite assemblages by EBSD make it possible to resolve the growth conditions during the formation of pyrite at the micron scale. The spatial resolution of local texture measurements in the Scanning Electron Microscope used can reach the nanometre scale. Orientation contrasts resulting from domains of smaller misorientations within larger pyrite crystals can be resolved as well. The electron optical studies were carried out in a Field-Emission Scanning Electron Microscope (FEI Quanta 200) equipped with a CCD camera to study the orientation contrasts along the surfaces of pyrite. Idiomorphic cubic single crystals of pyrite, polycrystalline assemblages of pyrite, spherically grown pyrite spheres, as well as pyrite-bearing ammonites have been studied by EBSD in the Scanning Electron Microscope. Samples were chosen to show no or only minor secondary deformation and an idiomorphic 3D crystal habit, so that the local textures of pyrite result mainly from growth and only to a minor degree from deformation. The samples studied derive from Navajun (Spain), Chalchidiki (Greece), Thüringen (Germany), and Unterkliem (Austria). Chemical analyses by EDAX show pyrite with minor inhomogeneities, e.g., single crystals of galena and chalcopyrite along the grain boundaries of larger pyrite crystals. Intergrowth between marcasite and pyrite can be detected in one sample. Pyrite may form intense growth twinning lamellae on {011}. Twinning, e.g., contact twinning, is abundant within the crystals studied, and the individual twinning lamellae can be resolved by EBSD. The ammonites studied show a replacement of the shell by newly formed pyrite, resulting in an intense intergrowth of calcite and pyrite. EBSD measurements indicate a polycrystalline microfabric of both minerals, still reflecting primary surface structures of the ammonites, e.g., the septa. Discs of pyrite ("pyrite dollar") as well as pyrite framboids show growth patterns comprising a typical microfabric. EBSD studies reveal an equigranular matrix in the inner part of the pyrite discs and fibre growth with larger misorientations in the outer regions between the individual segments. This typical microfabric derives from a formation of pyrite crystals starting at a higher nucleation rate, followed by directional crystal growth. The EBSD studies show that the growth texture of pyrite in the samples studied reveals a correlation between the nucleation rate and the subsequent growth rate of the pyrites, thus leading to the characteristic crystal habits. Preferential directional growth at lower nucleation rates may lead to the formation of 3D framboids of pyrite. Crystallographic misorientations between the individual fibres are similar. In the ammonites studied, primary anisotropies of the substrates, e.g., ammonitic sutures, influence the nucleation, crystal growth, and habit of the newly formed pyrites along the surfaces.

Keywords: Electron Back Scatter Diffraction (EBSD), growth pattern, Fe-sulfides (pyrite), texture analyses

Procedia PDF Downloads 272
265 Using Low-Calorie Gas to Generate Heat and Electricity

Authors: Аndrey Marchenko, Oleg Linkov, Alexander Osetrov, Sergiy Kravchenko

Abstract:

Low-calorie gases include biogas, coal gas, coke oven gas, associated petroleum gas, sewage gases, etc. These gases are usually released into the atmosphere or burned in flares, causing substantial damage to the environment. However, with the right approach, low-calorie gas fuel can become a valuable source of energy. This determines the relevance of areas related to the development of low-calorific gas utilization technologies. As an example, this work considers one way of utilizing coal mine gas, because Ukraine ranks fourth in the world in terms of coal mine gas emission (4.7% of total global emissions, or 1.2 billion m³ per year). Experts estimate that coal mine gas is actively released in 70-80 percent of existing mines in Ukraine. The main component of coal mine gas is methane (25-60%). Methane has a 21 times greater impact on the greenhouse effect than carbon dioxide, so the disposal problem has become increasingly important in the context of the growing need to address problems of climate, ecology, and environmental protection. The uncontrolled release of these gases thus causes negative effects of both a local and a global nature. The efforts of the United Nations and the World Bank led to the adoption of the 'Zero Routine Flaring by 2030' program, dedicated to ending the burning of these gases in flares and to disposing of them while generating heat and electricity. This study proposes to use coal mine gas as a fuel for gas engines to generate heat and electricity. Analysis of the physical-chemical properties of low-calorie gas fuels allowed a suitable engine to be chosen and the influence of the fuel composition on its techno-economic indicators to be estimated. The most suitable engine for low-calorie gas is one with pre-combustion chamber jet ignition. Ukraine has accumulated extensive experience in the production and operation of gas engines of type GD100 (10GDN 207/2 * 254) with a capacity of 1100 kW fueled by natural gas. The use of pre-combustion chamber jet ignition and quality control in GD100-type engines introduces the concept of burning lean fuel mixtures, which in turn leads to a decrease in the concentration of harmful substances in the exhaust gases. The main problems of coal mine gas as a fuel for internal combustion engines (ICE) are its low calorific value, the presence of components that adversely affect combustion processes and the service life of the ICE, the instability of its composition, and weak ignition. In some cases, these problems can be solved by adapting the engine design to coal mine gas as a fuel (changing the compression ratio, increasing the fuel injection quantity, changing the ignition timing, increasing the spark plug energy, etc.). It is shown that the use of coal mine gas in engines with a prechamber has not led to significant changes in the indicated parameters (ηi = 0.43 - 0.45). However, it significantly increases the volumetric fuel consumption, which requires an increased fuel injection quantity to ensure constant nominal engine power. Thus, the utilization of low-calorie gas fuels in stationary gas engines based on the GD100 type will significantly reduce emissions of harmful substances into the atmosphere while generating cheap electricity and heat.
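To illustrate why the volumetric fuel consumption rises, a back-of-the-envelope example follows; the heating values, methane fraction, and fuel power figure are assumptions for illustration, not values from the paper. The required fuel volume flow for a fixed fuel energy input scales inversely with the volumetric lower heating value of the gas.

```python
# Rough illustration: volume flow needed for a constant fuel energy input.
# Heating values are typical literature figures used here as assumptions.
LHV_METHANE = 35.8          # MJ/m3, net (lower) heating value of pure methane
methane_fraction = 0.40     # e.g. coal mine gas with 40% CH4 (balance assumed inert)
lhv_mine_gas = methane_fraction * LHV_METHANE

fuel_power_mw = 2.5         # assumed fuel energy input for ~1100 kW output at eta_i ~ 0.44
for name, lhv in [("natural gas (approx. pure CH4)", LHV_METHANE),
                  ("coal mine gas (40% CH4)", lhv_mine_gas)]:
    flow = fuel_power_mw / lhv          # MJ/s divided by MJ/m3 -> m3/s
    print(f"{name}: LHV = {lhv:.1f} MJ/m3 -> {flow * 3600:.0f} m3/h")
```

Under these assumed numbers, the diluted mine gas requires roughly 2.5 times the volume flow of natural gas for the same fuel energy input, which is why the injection quantity must be increased to hold the nominal power.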

Keywords: gas engine, low-calorie gas, methane, pre-combustion chamber, utilization

Procedia PDF Downloads 245
264 Magnetic Solid-Phase Separation of Uranium from Aqueous Solution Using High Capacity Diethylenetriamine Tethered Magnetic Adsorbents

Authors: Amesh P, Suneesh A S, Venkatesan K A

Abstract:

Magnetic solid-phase extraction is a relatively new method among solid-phase extraction techniques for separating metal ions from aqueous solutions such as mine water and groundwater, contaminated wastes, etc. However, bare magnetic particles (Fe3O4) exhibit poor selectivity due to the absence of target-specific functional groups for sequestering the metal ions. The selectivity of these magnetic particles can be remarkably improved by covalently tethering task-specific ligands onto the magnetic surfaces. The magnetic particles offer a number of advantages, such as quick phase separation aided by an external magnetic field. In addition, the solid adsorbent can be prepared with particle sizes ranging from a few micrometers down to the nanometer scale, which again offers advantages such as enhanced extraction kinetics, higher extraction capacity, etc. Conventionally, magnetite (Fe3O4) particles are prepared by the hydrolysis and co-precipitation of ferrous and ferric salts in aqueous ammonia solution. Since the covalent linking of task-specific functionalities onto Fe3O4 is difficult, and since Fe3O4 is also susceptible to redox reactions in the presence of acid or alkali, it is necessary to modify the surface of Fe3O4 by silica coating. This silica coating is usually carried out by hydrolysis and condensation of tetraethyl orthosilicate over the surface of the magnetite to yield thin-layer silica-coated magnetite particles. Since the silica-coated magnetite particles are amenable to further surface modification, they can be reacted with task-specific functional groups to obtain functionalized magnetic particles. The surface area exhibited by such magnetic particles usually falls in the range of 50 to 150 m2.g-1, which offers advantages such as quick phase separation compared to other solid-phase extraction systems. In addition, magnetic (Fe3O4) particles covalently linked to a mesoporous silica matrix (MCM-41) and task-specific ligands offer further advantages in terms of extraction kinetics, high stability, longer reusability, and metal extraction capacity, due to the large surface area, ample porosity, and enhanced number of functional groups per unit area of these adsorbents. In view of this, the present paper deals with the synthesis of a uranium-specific diethylenetriamine (DETA) ligand anchored on silica-coated magnetite (Fe-DETA) as well as on magnetic mesoporous silica (MCM-Fe-DETA), and with studies on the extraction of uranium from aqueous solution spiked with uranium to mimic mine water or groundwater contaminated with uranium. The synthesized solid-phase adsorbents were characterized by FT-IR, Raman, TG-DTA, XRD, and SEM. The extraction behavior of uranium on the solid phase was studied under several conditions, including the effect of pH, the initial concentration of uranium, the rate of extraction and its variation with pH and initial uranium concentration, and the effect of interfering ions such as CO32-, Na+, Fe2+, Ni2+, and Cr3+. A maximum extraction capacity of 233 mg.g-1 was obtained for Fe-DETA, and a huge capacity of 1047 mg.g-1 was obtained for MCM-Fe-DETA. The mechanism of extraction, the speciation of uranium, the extraction studies, reusability, and the other results obtained in the present study suggest that Fe-DETA and MCM-Fe-DETA are potential candidates for the extraction of uranium from mine water and groundwater.
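Maximum capacities of this kind are often estimated by fitting an adsorption isotherm such as the Langmuir model; the sketch below is purely illustrative under that assumption, uses synthetic equilibrium data, and is not taken from the authors' analysis.

```python
# Illustrative only: fitting a Langmuir isotherm q = qmax*K*Ce/(1 + K*Ce) to
# synthetic equilibrium data to extract a maximum capacity qmax (mg/g).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, k):
    return qmax * k * ce / (1.0 + k * ce)

ce = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)    # mg/L at equilibrium (synthetic)
q = langmuir(ce, 233.0, 0.02) + np.random.default_rng(2).normal(0, 3, ce.size)

(qmax_fit, k_fit), _ = curve_fit(langmuir, ce, q, p0=(200.0, 0.01))
print(f"fitted qmax = {qmax_fit:.1f} mg/g, K_L = {k_fit:.4f} L/mg")
```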

Keywords: diethylenetriamine, magnetic mesoporous silica, magnetic solid-phase extraction, uranium extraction, wastewater treatment

Procedia PDF Downloads 141
263 Via ad Reducendam Intensitatem Energiae Industrialis in Provincia Sino ad Conservationem Energiae

Authors: John Doe

Abstract:

This paper presents the research project “Escape Through Culture”, which is co-funded by the European Union and national resources through the Operational Programme “Competitiveness, Entrepreneurship and Innovation” 2014-2020 and the Single RTDI State Aid Action "RESEARCH - CREATE - INNOVATE". The project is implemented by three partners: (1) the Computer Technology Institute and Press "Diophantus" (CTI), experienced in the design and implementation of serious games, natural language processing, and ICT in education; (2) the Laboratory of Environmental Communication and Audiovisual Documentation (LECAD), part of the University of Thessaly, Department of Architecture, which is experienced in the study of creative transformation and reframing of urban and environmental multimodal experiences through the use of AR and VR technologies; and (3) “Apoplou”, an IT company with experience in the implementation of interactive digital applications. The research project proposes the design of an innovative infrastructure of digital educational escape games for mobile devices and computers, using Virtual Reality and Augmented Reality to promote Greek cultural heritage in Greece and abroad. In particular, the project advocates the combination of Greek cultural heritage and literature, advances in digital technologies, and the implementation of innovative gamification practices. The cultural experience of the players will take place in three layers: (1) In space: the digital games produced will utilize the dual character of space as a cultural landscape (the real space-landscape, but also the space-landscape as presented with augmented reality and virtual reality technologies). (2) In literary texts: the selected texts of Greek writers will support the sense of place and the multi-sensory involvement of the user, through the context of space-time, language, and cultural characteristics. (3) In the philosophy of the "escape game" tool: whether played in a computer environment, indoors or outdoors, the spatial experience is one of the key components of escape games. The innovation of the project lies both at the junction of Augmented/Virtual Reality with the promotion of cultural points of interest and in the interactive, gamified treatment of literary texts. The digital escape game infrastructure will be highly interactive, integrating the projection of Greek landscape cultural elements and digital literary text analysis, supporting the creation of escape games, and establishing and highlighting new playful ways of experiencing iconic cultural places, such as Elefsina, Skiathos, etc. The literary texts’ content will relate to specific elements of the Greek cultural heritage depicted by prominent Greek writers and poets. The majority of the texts will originate from Greek educational content available in digital libraries and repositories developed and maintained by CTI. The escape games produced will be available for use during educational field trips, thematic tourism holidays, etc. In this paper, the methodology adopted for infrastructure development is presented. The research is based on theories of place, gamification, and game development, making use of corpus linguistics concepts and digital humanities practices for the compilation and analysis of literary texts.

Keywords: escape games, cultural landscapes, gamification, digital humanities, literature

Procedia PDF Downloads 209