Search results for: complement clause complement selection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2685

225 Augusto De Campos Translator: The Role of Translation in Brazilian Concrete Poetry Project

Authors: Juliana C. Salvadori, Jose Carlos Felix

Abstract:

This paper discusses the role literary translation has played in the Brazilian Concrete Poetry Movement – an aesthetic, critical and pedagogical project which conceived translation as poiesis, i.e., as work both creative and critical, in which the potency (dynamis) of the literary work is unfolded in the interpretive and critical act (energeia) that the translating practice demands. We argue that translation, for the concrete poets, is conceived within the framework provided by the reinterpretation – or deglutition – of Oswald de Andrade’s anthropophagy – a carefully selected feast from which the poets pick and model their Paideuma. As a case study, we approach and analyze two of Augusto de Campos’s long-term translation projects: the translation of Emily Dickinson’s and E. E. Cummings’s works for Brazilian readers. Augusto de Campos is a renowned poet, translator, critic and one of the founding members of the Brazilian Concrete Poetry movement. Since the 1950s he has produced a consistent body of translated poetry from English-language poets in which the translator has explored creative translation processes – transcreation, as the concrete poets have named it. Campos’s translation project regarding E. E. Cummings’s poetry spans forty years: it begins in 1956 with 10 poems and unfolds into four works – 20 poem(a)s, 40 poem(a)s, Poem(a)s, re-edited in 2011. His translations of Dickinson’s poetry are published in two works: O Anticrítico (1986), in which he translated 10 poems, and Emily Dickinson: Não sou Ninguém (2008), in which the poet-translator added 35 more translated poems. 
Both projects feature bilingual editions: contrary to common practice, Campos’s translations are meant to be read as translations: to fully enjoy the experience, target readers must be proficient readers of English and also acquainted with the poets in translation – Campos expects us to perform translation criticism, as Antoine Berman has proposed, by assessing the choices he, as both translator and poet, has made in order to privilege aesthetic information (verse lines, word games, etc.). For readers not proficient in English, his translations play a pedagogical role, educating and preparing them to read both the source poets’ works and concrete poetry itself – the detailed essays and prefaces in which the translator explains the selection of works translated and the strategies adopted illuminate his project as translator: for Cummings, it has led to the obliteration of the more traditional and lyrical/romantic examples of his poetry while highlighting the more experimental aspects and poems; for Dickinson, his project has highlighted the more hermetic traits of her poems. Finally, we analyze Campos’s contribution to the domestic canons of both poets in the Brazilian literary system.

Keywords: translation criticism, Augusto de Campos, E. E. Cummings, Emily Dickinson

Procedia PDF Downloads 295
224 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach

Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta

Abstract:

Santiago de Compostela became a stable object of literary representation during the period between approximately 1840 and 1915. This study offers a partial cartographic look at this process, suggesting that the process by which a cultural space like Compostela became an object of literary representation paralleled the first stages of its becoming a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often, the novels constitute ways of presenting a city to the outside, marking it for the gaze of others, as guidebooks do. This leads us to examine how the local is constructed and rendered communicable in other contexts. We should also acknowledge that a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student or a professor. In this, the guidebooks coincide with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms that these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used the program ArcGIS for the mapping, defining the databases starting from an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni’s team at the University of Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective, using elements found in the texts of our corpus (novels and tourist guides). 
Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (sketching itineraries and the selection of zones and indexicalization), like a foreigner’s visit guided by someone who knows the city or the description of one’s first entrance into the city’s premises. Last, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín’s 1915 novel La casa de la Troya) and the first attempt to create package tourist tours with Galicia as a destination, in a joint venture of Galician and British business owners, in the years immediately preceding the Great War. Literary cartography becomes a crucial instrument for digging deeply into the methods of cultural production of places. Through maps, the interaction between discursive forms seemingly so far removed from each other as novels and tourist guides becomes obvious and suggests the need to go deeper into a complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.

Keywords: compostela, literary geography, literary cartography, tourism

Procedia PDF Downloads 392
223 Features of Fossil Fuels Generation from Bazhenov Formation Source Rocks by Hydropyrolysis

Authors: Anton G. Kalmykov, Andrew Yu. Bychkov, Georgy A. Kalmykov

Abstract:

Nowadays, most oil reserves in Russia and all over the world are hard to recover, which is why oil companies are searching for new sources for hydrocarbon production. One such source might be high-carbon formations with unconventional reservoirs. The Bazhenov formation is a huge source rock formation located in West Siberia, which contains unconventional reservoirs in some areas. These reservoirs are formed by secondary processes that are difficult to predict: only one in five wells is drilled through an unconventional reservoir, while in the others the kerogen has low thermal maturity and low petroleum potential. Therefore, there is a demand for tertiary methods for in-situ cracking of kerogen and production of oil. Laboratory hydrous pyrolysis experiments on Bazhenov formation rocks were used to investigate features of the oil generation process. Experiments on Bazhenov rocks with different mineral compositions (silica concentration from 15 to 90 wt.%, clays 5-50 wt.%, carbonates 0-30 wt.%, kerogen 1-25 wt.%) and thermal maturities (from immature to late-oil-window kerogen) were performed in a retort under reservoir conditions. Rock samples of 50 g were placed in the retort, covered with water, and heated to temperatures varying from 250 to 400°C, with experiment durations from several hours to one week. After the experiments, the retort was cooled to room temperature; the generated hydrocarbons were extracted with hexane, then separated from the solvent and weighed. The molecular composition of this synthesized oil was then investigated via GC-MS chromatography. Characteristics of the rock samples after heating were measured via the Rock-Eval method. It was found that the amount of synthesized oil and its composition depend on the experimental conditions and the composition of the rocks. 
The highest amount of oil was produced at a temperature of 350°C after 12 hours of heating and was up to 12 wt.% of the initial organic matter content in the rocks. At higher temperatures and longer heating times, secondary cracking of the generated hydrocarbons occurs: the mass of produced oil decreases, and the composition contains more hydrocarbons that need to be recovered by catalytic processes. If the temperature is lower than 300°C, the amount of produced oil is too low for the process to be economically effective. It was also found that silica and clay minerals act as catalysts. Selecting the heating conditions makes it possible to produce synthesized oil with a specified composition. Kerogen investigations after heating have shown that thermal maturity increases, but the yield is only up to 35% of the maximum amount of synthetic oil; this yield is the result of gaseous hydrocarbon formation due to secondary cracking and the aromatization and coaling of kerogen. Future investigations aim to increase the yield of synthetic oil. The results are in good agreement with theoretical data on kerogen maturation during oil production. The trends identified could be applied to in-situ oil generation from shale rocks by thermal action.
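
The yield figures above reduce to simple arithmetic: synthetic-oil mass relative to the initial organic matter in the rock sample. A minimal sketch of that calculation, with purely illustrative numbers (the function name and sample values are assumptions, not data from the study):

```python
def oil_yield_wt_pct(extracted_oil_g: float, rock_g: float, toc_wt_pct: float) -> float:
    """Synthetic-oil yield as wt.% of the initial organic matter in the rock."""
    organic_matter_g = rock_g * toc_wt_pct / 100.0
    return 100.0 * extracted_oil_g / organic_matter_g

# Example: a 50 g rock sample with 10 wt.% organic matter yielding 0.6 g oil
yield_pct = oil_yield_wt_pct(extracted_oil_g=0.6, rock_g=50.0, toc_wt_pct=10.0)
print(f"{yield_pct:.1f} wt.% of initial organic matter")  # 12.0 wt.% of initial organic matter
```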

Keywords: Bazhenov formation, fossil fuels, hydropyrolysis, synthetic oil

Procedia PDF Downloads 114
222 A Comparative Study of the Tribological Behavior of Bilayer Coatings for Machine Protection

Authors: Cristina Diaz, Lucia Perez-Gandarillas, Gonzalo Garcia-Fuentes, Simone Visigalli, Roberto Canziani, Giuseppe Di Florio, Paolo Gronchi

Abstract:

During their lifetime, industrial machines are often subjected to extreme chemical, mechanical and thermal conditions. In some cases, the loss of efficiency comes from the degradation of the surface as a result of its exposure to abrasive environments that can cause wear. This is a common problem in industries as diverse as food, paper and concrete, among others. For this reason, careful material selection is highly important. In the machine design context, stainless steels such as AISI 304 and 316 are widely used. However, the severity of the external conditions can require additional protection for the steel, and coating solutions are sometimes demanded in order to extend the lifespan of these materials. Therefore, the development of effective coatings with high wear resistance is of utmost technological relevance. In this research, bilayer coatings made of titanium-tantalum, titanium-niobium, titanium-hafnium, and titanium-zirconium have been developed by PVD (physical vapor deposition) technology in a magnetron sputtering configuration. Their tribological behavior has been measured and evaluated under different environmental conditions. Two kinds of steel were used as substrates: AISI 304 and AISI 316. For comparison with these materials, a titanium alloy substrate was also employed. Regarding the characterization, wear rate and friction coefficient were evaluated with a tribo-tester in a pin-on-ball configuration, using different lubricants such as tomato sauce, wine, olive oil, wet compost, and a mix of sand and concrete with water and NaCl, to approximate real extreme conditions. In addition, topographical images of the wear tracks were obtained in order to gain more insight into the wear behavior, and scanning electron microscope (SEM) images were taken to evaluate the adhesion and quality of the coating. 
The characterization was completed with measurements of nanoindentation hardness and elastic modulus. Concerning the results, coating thicknesses varied from 100 nm (Ti-Zr layer) to 1.4 µm (Ti-Hf layer), and SEM images confirmed that the addition of the Ti layer improved the adhesion of the coatings. Moreover, the results indicate that these coatings increased the wear resistance in comparison with the original substrates under environments of different severity. Furthermore, nanoindentation results showed an improvement in elastic strain to failure and a high modulus of elasticity (approximately 200 GPa). In conclusion, Ti-Ta, Ti-Zr, Ti-Nb, and Ti-Hf are very promising and effective coatings in terms of tribological behavior, considerably improving the wear resistance and friction coefficient of commonly used machine materials.
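
The abstract reports wear rates from the tribo-tests without stating the formula; a common convention in tribology is the specific wear rate (worn volume per unit load and sliding distance). The sketch below uses that standard definition with hypothetical numbers, not the authors' measured data:

```python
def specific_wear_rate(volume_loss_mm3: float, load_n: float, sliding_distance_m: float) -> float:
    """Specific wear rate k = V / (F * s), in mm^3 per N*m (standard tribology convention)."""
    return volume_loss_mm3 / (load_n * sliding_distance_m)

# Illustrative: 0.02 mm^3 worn away under a 5 N load over 100 m of sliding
k = specific_wear_rate(volume_loss_mm3=0.02, load_n=5.0, sliding_distance_m=100.0)
print(f"k = {k:.1e} mm^3/(N*m)")  # k = 4.0e-05 mm^3/(N*m)
```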

Keywords: coating, stainless steel, tribology, wear

Procedia PDF Downloads 150
221 The Use of Brachytherapy in the Treatment of Liver Metastases: A Systematic Review

Authors: Mateusz Bilski, Jakub Klas, Emilia Kowalczyk, Sylwia Koziej, Katarzyna Kulszo, Ludmiła Grzybowska- Szatkowska

Abstract:

Background: Liver metastases are a common complication of primary solid tumors and significantly reduce patient survival. In the era of increasing diagnosis of oligometastatic disease and oligoprogression, methods of local treatment of metastases, i.e. metastasis-directed therapy (MDT), are becoming more important, and such treatment can be considered for liver metastases. To date, the mainstay of treatment for oligometastatic disease has been surgical resection, but not all patients qualify for the procedure. As an alternative to surgical resection, radiotherapy techniques have become available, including stereotactic body radiation therapy (SBRT) and high-dose interstitial brachytherapy (iBT). iBT is an invasive method that delivers very high doses of radiation from the inside of the tumor outward. This technique provides better tumor coverage than SBRT while having little impact on surrounding healthy tissue, and it eliminates some concerns involving respiratory motion. Methods: We conducted a systematic review of the scientific literature on the use of brachytherapy in the treatment of liver metastases from 2018 to 2023, using the PubMed and ResearchGate search engines according to PRISMA rules. Results: From 111 articles, 18 publications containing information on 729 patients with liver metastases were selected. iBT has been shown to provide high rates of tumor control. Among 14 patients with 54 unresectable RCC liver metastases, local tumor control after iBT was 92.6% during a median follow-up of 10.2 months, and PFS was 3.4 months. In an analysis of 167 patients treated with a single fractional dose of 15-25 Gy of brachytherapy, at 6- and 12-month follow-up, LRFS rates of 88.4-88.7% and 70.7-71.5%, PFS of 78.1% and 53.8%, and OS of 92.3-96.7% and 76.3-79.6%, respectively, were achieved. No serious complications were observed in any patients. 
Distant intrahepatic progression occurred later in patients with unresectable liver metastases after brachytherapy (PFS: 19.80 months) than in HCC patients (PFS: 13.50 months). A significant difference in LRFS between CRC patients (84.1% vs. 50.6%) and other histologies (92.4% vs. 92.4%) was noted, suggesting that a higher treatment dose is necessary for CRC patients. The average target dose for metastatic colorectal cancer was 40-60 Gy (compared to 100-250 Gy for HCC). To better assess sensitivity to therapy and predict side effects, it has been suggested that humoral mediators be evaluated. It was also shown that baseline levels of TNF-α, MCP-1 and VEGF, as well as NGF and CX3CL, correlated with both tumor volume and radiation-induced liver damage, one of the most serious complications of iBT, indicating their potential role as biomarkers of therapy outcome. Conclusions: The use of brachytherapy in the treatment of liver metastases of various cancers appears to be an interesting and relatively safe therapeutic alternative to surgery. An important challenge remains the selection of an appropriate brachytherapy method and radiation dose for the corresponding initial tumor type from which the metastasis originated.

Keywords: liver metastases, brachytherapy, CT-HDRBT, iBT

Procedia PDF Downloads 114
220 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) is a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost in random-access memory (RAM), the delay in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on the small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to a single unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain retains temporarily coherent pixels that are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, so the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on the stack of images from a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. 
The temporally averaged images are then processed by a dedicated interferometry procedure integrating advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring in a wide range of scientific and practical applications.
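
The unit-by-unit strategy of the first chain can be sketched as a simple windowing of the image stream, so that only one window's worth of images is held in RAM at a time. The window size and the unit bookkeeping below are illustrative placeholders, not the authors' implementation:

```python
from typing import Iterator

def iter_units(n_images: int, window: int) -> Iterator[range]:
    """Split an image stream into fixed-size units; each unit is processed
    independently, so RAM usage stays bounded by one window of images."""
    for start in range(0, n_images, window):
        yield range(start, min(start + window, n_images))

# Example: 10 acquisitions processed in units of 4 images each
units = [list(u) for u in iter_units(n_images=10, window=4)]
print(units)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```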

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 161
219 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites

Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek

Abstract:

Magnetorheological elastomers (MREs) are a unique type of material consisting of two components: a magnetic filler and an elastomeric matrix. Their properties can be tailored by applying an external magnetic field. The change in viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former strongly depends on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles have low stability against temperature and acidic environments, several methods to mitigate these drawbacks have been developed. In most cases, core-shell structures have been prepared as a suitable means of protecting the magnetic particles against thermal and chemical oxidation. However, if the shell material is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed: with the in situ polymerization technique it is very difficult to control the polymerization rate, and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effect is calculated as the elastic modulus upon magnetic field application divided by the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desirable. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this case, short polymer chains of different lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved. 
Since relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be only slightly affected. Furthermore, the cross-linking density will also be affected by the presence of the short polymer chains. From the application point of view, such MREs can be utilized for magneto-resistors, piezoresistors or pressure sensors, especially when a conducting shell is created on the magnetic particles. The selection of the pyrrole-based monomer is therefore crucial, as it allows a controllably thin layer of conducting polymer to be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell dispersed in an elastomeric matrix, can also find use in electromagnetic-wave shielding applications.
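
The relative MR effect defined above (field-on elastic modulus divided by off-state modulus) is simple arithmetic, sketched here with hypothetical modulus values rather than measurements from the study:

```python
def mr_effect(modulus_on_field: float, modulus_off_field: float) -> float:
    """Relative MR effect: elastic modulus under an applied magnetic field
    divided by the elastic modulus in the absence of the field."""
    return modulus_on_field / modulus_off_field

# A softer matrix (lower off-state stiffness) yields a higher relative effect
# for the same field-on modulus, which is why cross-link tuneability matters.
print(mr_effect(modulus_on_field=150.0, modulus_off_field=50.0))  # 3.0
```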

Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding

Procedia PDF Downloads 209
218 Interconnections of Circular Economy, Circularity, and Sustainability: A Systematic Review and Conceptual Framework

Authors: Anteneh Dagnachew Sewenet, Paola Pisano

Abstract:

The concepts of circular economy, circularity, and sustainability are interconnected and together promote a more sustainable future. However, previous studies have mainly focused on each concept individually, neglecting the relationships and gaps in the existing literature. This study integrates and links these concepts to expand the theoretical and practical methods available to scholars and professionals in pursuit of sustainability. The aim of this systematic literature review is to comprehensively analyze and summarize the interconnections between circular economy, circularity, and sustainability, and to develop a conceptual framework that can guide practitioners and serve as a basis for future research. The review employed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. A total of 78 articles from the Scopus and Web of Science databases were analyzed. The analysis involved summarizing and systematizing the conceptualizations of circularity and its relationship with the circular economy and long-term sustainability. The review provides a comprehensive overview of the interconnections between circular economy, circularity, and sustainability; key themes, theoretical frameworks, empirical findings, and conceptual gaps in the literature were identified. Through a rigorous analysis of scholarly articles, the study highlights the importance of integrating these concepts for a more sustainable future, expands the theoretical understanding of how they relate to each other, and provides a conceptual framework that can guide future research in this field. The findings emphasize the need for a holistic approach to achieving sustainability goals. The data collection for this review involved identifying relevant articles from the Scopus and Web of Science databases. 
The selection of articles was based on predefined inclusion and exclusion criteria, and the PRISMA protocol guided the systematic analysis of the selected articles, including summarizing and systematizing their content. The study addressed the question of how circularity is conceptualized and related to both the circular economy and long-term sustainability, aiming to identify the interconnections between these concepts and bridge the gap in the existing literature. The resulting conceptual framework can guide practitioners in implementing circular economy strategies and serve as a basis for future research. By integrating these concepts, scholars and professionals can enhance the theoretical and practical methods in pursuit of a more sustainable future. The findings emphasize the importance of taking a holistic approach to achieve sustainability goals and highlight conceptual gaps that can be addressed in future studies.

Keywords: circularity, circular economy, sustainability, innovation

Procedia PDF Downloads 106
217 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make the pharmacovigilance monitoring process tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI in the sponsor’s safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy’s law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of these criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing screening time to under 15 minutes, as opposed to the multiple hours consumed by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved extremely useful in detecting potential DILI cases while heightening the vigilance of the drug safety department. 
Additionally, the algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). It also carries the potential for universal application, owing to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby completely eliminating the manual screening process.
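
The two laboratory constructs the criteria are built on, the R-value (pattern of injury) and Hy's law, have standard published clinical definitions. A hedged sketch using those public definitions follows; this is not the sponsor's actual SQL/Spotfire logic, and the ALP condition is one common formulation of Hy's law:

```python
def r_value(alt: float, alt_uln: float, alp: float, alp_uln: float) -> float:
    """R = (ALT/ULN) / (ALP/ULN); >=5 suggests hepatocellular injury,
    <=2 cholestatic, and values in between a mixed pattern."""
    return (alt / alt_uln) / (alp / alp_uln)

def meets_hys_law(alt: float, alt_uln: float, bili: float, bili_uln: float,
                  alp: float, alp_uln: float) -> bool:
    """Hy's law: ALT >= 3x ULN plus total bilirubin >= 2x ULN,
    without marked ALP elevation (here taken as ALP < 2x ULN)."""
    return alt >= 3 * alt_uln and bili >= 2 * bili_uln and alp < 2 * alp_uln

# Illustrative case: ALT 300 U/L (ULN 40), ALP 120 U/L (ULN 120), bilirubin 3.0 (ULN 1.2)
r = r_value(alt=300, alt_uln=40, alp=120, alp_uln=120)
flagged = meets_hys_law(alt=300, alt_uln=40, bili=3.0, bili_uln=1.2, alp=120, alp_uln=120)
print(round(r, 1), flagged)  # 7.5 True
```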

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 152
216 Rural Entrepreneurship as a Response to Climate Change and Resource Conservation

Authors: Omar Romero-Hernandez, Federico Castillo, Armando Sanchez, Sergio Romero, Andrea Romero, Michael Mitchell

Abstract:

Environmental policies for resource conservation in rural areas include subsidies on services and social programs to cover living expenses. Governments expect that rural communities that benefit from social programs, such as payment for ecosystem services, have an incentive to conserve natural resources and preserve natural sinks for greenhouse gases. At the same time, global climate change has affected the lives of people worldwide. The capability to adapt to global warming depends on available resources and the standard of living, putting rural communities at a disadvantage. This paper explores whether rural entrepreneurship can represent a solution to resource conservation and global warming adaptation in rural communities. The research focuses on a sample of two coffee communities in Oaxaca, Mexico. Researchers used geospatial information contained in aerial photographs of the geographical areas of interest. Households were identified in the photos by their roofs and georeferenced by coordinates. From this household population, a random sample of roofs was selected and visited. A total of 112 surveys were completed, covering socio-demographics, perception of climate change, and adaptation activities. The population includes two groups of study: entrepreneurs and non-entrepreneurs. Data were sorted, filtered, and validated. The analysis includes descriptive statistics for exploratory purposes and a multi-regression analysis. Outcomes from the surveys indicate that coffee farmers who demonstrate entrepreneurial skills and hire employees are more eager to adapt to climate change, despite the extreme adverse socioeconomic conditions of the region. We show that farmers with entrepreneurial tendencies are more creative in using innovative farm practices such as the planting of shade trees, the use of live fencing instead of wire, and watershed protection techniques, among others. 
This result counters the notion that small farmers are at the mercy of climate change with no possibility of adapting to it. The study also points to roadblocks that farmers face when coping with climate change, among them a lack of extension services, access to credit, and reliable internet, all of which reduce access to vital information needed in today’s constantly changing world. The results indicate that, under some circumstances, funding and supporting entrepreneurship programs may provide more benefit than traditional social programs.

Keywords: entrepreneurship, global warming, rural communities, climate change adaptation

Procedia PDF Downloads 239
215 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Obtaining unbiased data on the status of oil and oil-product pipeline systems becomes especially important when laying and operating pipelines under severe natural and climatic conditions. Particular attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, as well as timely planning and implementation of compensating measures, are only possible if the operating conditions of pipeline systems are regularly monitored and changes in permafrost soil and hydrological conditions are accounted for. One of the main reasons for emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aerial visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. A unified geographic information system (GIS) environment is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient modelling of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences. 
The specifics of such systems include multi-dimensional simulation of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting possibilities of using the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography, in-line inspection data, and instrument measurements. The resulting 3D model serves as the basis of an information system that provides the means to store and process geotechnical observation data with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating the consequences thereof should they occur.

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 189
214 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DES) and ionic liquids (IL), as furfural extractants from aqueous media for sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene, the application’s conventional benchmark. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity; for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene’s 81.2%, while also possessing the desired properties. The selected solvents were then characterized thoroughly in terms of their physical, thermal, and critical properties and their cross-contamination solubilities, and were extensively tested under various operating conditions. They exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%), with a remarkable equilibrium time of only 2 minutes, and, most notably, demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%). 
Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by low absolute average relative deviations: 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including high efficiency in both extraction and regeneration, stability, and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components such as thymol and decanoic acid. This exceptional efficacy of the newly developed neoteric solvents signifies a major advancement, providing a green and sustainable alternative for furfural production from biowaste.
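The model-comparison metric used above, the absolute average relative deviation (AARD), is straightforward to compute. A minimal Python sketch, using hypothetical efficiency values in place of the study's measured and predicted data:

```python
import numpy as np

def aard(y_true, y_pred):
    """Absolute average relative deviation (%), the accuracy metric
    used to compare the MNLR and ANN models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

# hypothetical extraction-efficiency data (%), for illustration only
measured  = [94.1, 92.5, 90.0, 95.2]
predicted = [93.8, 93.0, 89.5, 95.0]
print(round(aard(measured, predicted), 2))  # → 0.41
```

A lower AARD means the model's predictions sit closer, in relative terms, to the experimental values, which is why the ANN's sub-0.5% figures indicate a clear improvement over MNLR.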

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 70
213 A Q-Methodology Approach for the Evaluation of Land Administration Mergers

Authors: Tsitsi Nyukurayi Muparari, Walter Timo De Vries, Jaap Zevenbergen

Abstract:

Land administration accommodates diversity in terms of both spatial data handling activities and the expertise involved, which supposedly aims to satisfy the unpredictable demands for land data and the diverse demands of customers arising from the land. However, it is known that strategic restructuring decisions are in most cases repelled in favour of complex structures that strive to accommodate professional diversity and diverse roles in the field of land administration. Yet despite this widely accepted knowledge, there is scant theoretical knowledge concerning the psychological methodologies that can extract the deeper perceptions of diverse spatial experts in order to explain the invisible controlling arm behind the polarised reception of ideas of change. This paper evaluates Q methodology in the context of a cadastre and land registry merger (under one agency), using the Swedish cadastral system as a case study. Precisely, the aim of this paper is to evaluate the effectiveness of Q methodology for modelling the diverse psychological perceptions of spatial professionals involved in the widely contested decision to merge the cadastre and land registry components of land administration. The empirical approach prescribed by Q methodology starts with concourse development, followed by the design of the statements and the Q-sort instrument, selection of the participants, the Q-sorting exercise, factor extraction with PQMethod, and finally narrative development by the logic of abduction. The paper uses 36 statements developed from a dominant competing values theory that stands out for its reliability and validity, purposively selects 19 participants for the Q-sorting exercise, proceeds with factor extraction using the varimax and judgemental rotations provided by PQMethod, and effects the narrative construction using the logic of abduction. 
The diverse perceptions of cadastral professionals regarding the merger of the land registry and cadastre components in Sweden’s mapping agency (Lantmäteriet) show that the focus is inclined towards perfecting the relationship between legal expertise and technical spatial expertise. Much emphasis is placed on tradition, loyalty, and communication attributes, which concern the organisation’s internal environment, rather than on innovation and market attributes, which reveal customer behaviour and the needs arising from changing humankind-land relations. It can be concluded that Q methodology offers effective tools that pursue a psychological approach to the evaluation and gradation of strategic change decisions by extracting the local perceptions of spatial experts.
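The by-person factor extraction with varimax rotation used in Q methodology can be sketched in Python. A minimal illustration, assuming scikit-learn is available and using random numbers in place of real Q-sorts (36 statements and 19 participants, matching the study's design; the study itself used PQMethod, not this code):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_statements, n_participants = 36, 19  # sizes taken from the study

# hypothetical Q-sort grid: each column is one participant's ranking
# of the 36 statements (standard-normal stand-ins for rank values)
qsorts = rng.standard_normal((n_statements, n_participants))

# by-person factor analysis: participants are the variables, so their
# varimax-rotated loadings group them into shared viewpoints
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(qsorts)
loadings = fa.components_.T   # shape: (participants, factors)
print(loadings.shape)         # → (19, 2)
```

Participants loading strongly on the same factor share a viewpoint; the narrative for each factor is then built from the statements that distinguish it.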

Keywords: cadastre, factor extraction, land administration merger, land registry, q-methodology, rotation

Procedia PDF Downloads 194
212 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. 
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as the filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
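The deconvolution idea behind models of this family can be illustrated by building an approximate inverse of an LES filter iteratively (van Cittert iteration). A minimal 1D Python sketch with a spectral Gaussian filter; the filter definition, grid, and iteration count here are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def gaussian_filter_1d(u, delta, dx):
    # spectral Gaussian filter G(k) = exp(-k^2 delta^2 / 24),
    # a form commonly used in LES (assumed here for illustration)
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-(k * delta) ** 2 / 24)))

def van_cittert(u_bar, delta, dx, n_iter=5):
    # approximate inverse filtering: u*_{n+1} = u*_n + (u_bar - G u*_n)
    u_star = u_bar.copy()
    for _ in range(n_iter):
        u_star = u_star + (u_bar - gaussian_filter_1d(u_star, delta, dx))
    return u_star

# demo: recover a resolved field from its filtered version
N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
dx = L / N
u = np.sin(x) + 0.5 * np.sin(4 * x)
u_bar = gaussian_filter_1d(u, delta=4 * dx, dx=dx)
u_rec = van_cittert(u_bar, delta=4 * dx, dx=dx)
print(np.max(np.abs(u_rec - u)) < np.max(np.abs(u_bar - u)))  # → True
```

Each iteration multiplies the residual of every Fourier mode by (1 - G), so for an invertible filter with 0 < G ≤ 1 the deconvolved field converges toward the unfiltered one; the SFS stress is then reconstructed from the deconvolved velocities.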

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 75
211 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B. Fritschi

Abstract:

Soybeans and their derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than on yield alone. Seed composition depends widely on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth due to their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are still in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and different seed compositions were measured from the samples as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector regression (SVR), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results, while the algorithm that best predicted each of the six seed composition traits differed. 
GRU worked well for oil (R² = 0.53) and protein (R² = 0.36), whereas SVR and PLSR showed the best results for sucrose (R² = 0.74) and ash (R² = 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be more important than texture features. It is suggested to use many vegetation indices for machine learning training and to select the best ones with feature selection methods. Overall, the study demonstrates the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be taken when designing the plot size in the experiments to avoid mixed-pixel issues.

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 137
210 Creation and Evaluation of an Academic Blog of Tools for the Self-Correction of Written Production in English

Authors: Imelda Katherine Brady, Iria Da Cunha Fanego

Abstract:

Today's university students are considered digital natives, and the use of Information Technologies (ITs) forms a large part of their study and learning. In the context of language studies, applications that help with revisions of grammar or vocabulary are particularly useful, especially if they are open access. Studies show the effectiveness of this type of application in the learning of English as a foreign language and that using IT can help learners become more autonomous in foreign language acquisition, given that these applications can enhance awareness of the learning process; this makes learners less dependent on the teacher for corrective feedback. We also propose that exploiting these technologies enhances the work of the language instructor wishing to incorporate IT into his or her practice. In this context, the aim of this paper is to present the creation of a repository of tools that support the writing and correction of texts in English, and the assessment of their usefulness by university students enrolled in the English Studies degree. The project seeks to encourage the development of autonomous learning through the acquisition of skills linked to the self-correction of written work in English. To this end, our methodology follows five phases. First, a selection is made of the main open-access online applications available for the correction of written texts in English: AutoCrit, Hemingway, Grammarly, LanguageTool, OutWrite, PaperRater, ProWritingAid, Reverso, Slick Write, Spell Check Plus and Virtual Writing Tutor. Secondly, the functionalities of each of these tools (spelling, grammar, style correction, etc.) are analyzed. Thirdly, explanatory materials (texts and video tutorials) are prepared on each tool. Fourth, these materials are uploaded to a repository at our university in the form of an institutional blog, which is made available to students and the general public. 
Finally, a survey was designed to collect students’ feedback. The survey aimed to analyse the usefulness of the blog and the quality of the explanatory materials, as well as the degree of usefulness students assigned to each of the tools offered. In this paper, we present the results of the analysis of data collected from 33 students in the first semester of the 2021-22 academic year. One result we highlight is that students rated this resource very highly, in addition to offering valuable information on the perceived usefulness of the applications provided for their review. Our work, carried out within the framework of a teaching innovation project funded by our university, emphasizes that teachers need to design methodological strategies that help their students improve the quality of their written production in English and, by extension, their linguistic competence.

Keywords: academic blog, open access tools, online self-correction, written production in English, university learning

Procedia PDF Downloads 101
209 Investigating Learners’ Online Learning Experiences in a Blended-Learning School Environment

Authors: Abraham Ampong

Abstract:

BACKGROUND AND SIGNIFICANCE OF THE STUDY: The development of information technology and its influence are today inevitable in the world of education. The development of information and communication technology (ICT) has had an impact on the use of teaching aids such as computers and the Internet, for example in e-learning. E-learning is a learning process attained through electronic means. But learning is not merely technology, because learning is essentially about the process of interaction between teacher, student, and study resources. The main purposes of the study are to investigate learners’ online learning experiences in a blended learning approach, evaluate how learners’ experience of an online learning environment affects the blended learning approach, and examine the future of online learning in a blended learning environment. Blended learning pedagogies have been recognized as a path to improving teachers’ instructional strategies for teaching with technology. Blended learning is perceived to have many advantages for teachers and students, including any-time learning, anywhere access, self-paced learning, inquiry-led learning, and collaborative learning; this helps institutions develop desired instructional skills such as critical thinking in the process of learning. Blended learning as an approach has gained momentum because of its widespread integration into educational organizations. METHODOLOGY: Based on the research objectives and questions, the study will make use of the qualitative research approach. The rationale behind the selection of this approach is that participants are able to make sense of their situations, and their construction of knowledge and understanding can be appreciated, because qualitative methods focus on how people understand and interpret their experiences. A case study research design is adopted to explore the situation under investigation. 
The target population for the study will consist of selected students from selected universities. A simple random sampling technique will be used to select the target population. The data collection instrument adopted for this study will be an interview guide, i.e., a set of questions that an interviewer asks when interviewing respondents. Responses from the in-depth interviews will be transcribed and analyzed thematically. Ethical issues to be catered for in this study include the right to privacy, voluntary participation, no harm to participants, and confidentiality. INDICATORS OF THE MAJOR FINDINGS: Anticipated findings include that online learning encourages timely feedback from teachers and that online learning tools can be used without issues. Most of the communication with the teacher can be done through emails and text messages. Sampled respondents are also expected to prefer online learning because there are few or no distractions, and learners have access to technology for other activities that support their learning. There are, again, enough and enhanced learning materials available online. CONCLUSION: Unlike previous research focusing on the strengths and weaknesses of blended learning, the present study addresses the respective roles of its two modalities, as well as their interdependencies.

Keywords: online learning, blended learning, technologies, teaching methods

Procedia PDF Downloads 86
208 Different Types of Bismuth Selenide Nanostructures for Targeted Applications: Synthesis and Properties

Authors: Jana Andzane, Gunta Kunakova, Margarita Baitimirova, Mikelis Marnauza, Floriana Lombardi, Donats Erts

Abstract:

Bismuth selenide (Bi₂Se₃) is a narrow-band-gap semiconductor with pronounced thermoelectric (TE) and topological insulator (TI) properties. Its unique TI properties offer exciting possibilities for fundamental research, such as observing the exciton condensate and Majorana fermions, as well as practical applications in spintronics and quantum information. In turn, the TE properties of this material can be applied to a wide range of thermoelectric applications, as well as to broadband photodetectors and near-infrared sensors. Nanostructuring this material improves the TI properties by suppressing the bulk conductivity, and enhances the TE properties through increased phonon scattering at nanoscale grains and interfaces. Regarding the TE properties, the crystallographic growth direction, as well as the orientation of the nanostructures relative to the growth substrate, plays a significant role in improving the TE performance of the nanostructured material. For instance, Bi₂Se₃ layers consisting of randomly oriented nanostructures, or of a combination of these with planar nanostructures, show TE properties significantly enhanced in comparison with bulk material and purely planar Bi₂Se₃ nanostructures. In this work, a catalyst-free vapour-solid deposition technique was applied for the controlled growth of different types of Bi₂Se₃ nanostructures and continuous nanostructured layers for targeted applications. For example, separated Bi₂Se₃ nanoplates, nanobelts, and nanowires can be used for investigations of TI properties, while layers consisting of merged planar and/or randomly oriented Bi₂Se₃ nanostructures are useful for applications in heat-to-power conversion devices and infrared detectors. The vapour-solid deposition was carried out using a quartz tube furnace (MTI Corp) equipped with an inert gas supply and a pressure/temperature control system. 
Bi₂Se₃ nanostructures and nanostructured layers of the desired type were obtained by adjusting the synthesis parameters (process temperature, deposition time, pressure, carrier gas flow) and selecting the deposition substrate (glass, quartz, mica, indium tin oxide, graphene, and carbon nanotubes). The morphology, structure, and composition of the obtained Bi₂Se₃ nanostructures and nanostructured layers were inspected using SEM, AFM, EDX, and HRTEM techniques, as well as a home-built experimental setup for thermoelectric measurements. It was found that introducing a temporary carrier gas flow into the process tube during synthesis, together with the choice of deposition substrate, significantly influences the nanostructure formation mechanism. The electrical, thermoelectric, and topological insulator properties of the different types of deposited Bi₂Se₃ nanostructures and nanostructured coatings are characterized as a function of thickness and discussed.

Keywords: bismuth selenide, nanostructures, topological insulator, vapour-solid deposition

Procedia PDF Downloads 231
207 Qualitative Characterization of Proteins in Common and Quality Protein Maize Corn by Mass Spectrometry

Authors: Benito Minjarez, Jesse Haramati, Yury Rodriguez-Yanez, Florencio Recendiz-Hurtado, Juan-Pedro Luna-Arias, Salvador Mena-Munguia

Abstract:

During the last decades, the world has experienced rapid industrialization and an expanding economy, favoring a demographic boom. As a consequence, countries around the world have focused on developing new strategies for the production of different farm products in order to meet future demands, seeking to improve the major food products for both humans and livestock. Corn, after wheat and rice, is the third most important crop globally and is the primary food source for both humans and livestock in many regions around the globe. In addition, maize (Zea mays) is an important source of protein, accounting for up to 60% of the daily human protein supply. Generally, many cereal grains have proteins with relatively low nutritional value compared with proteins from meat. In the case of corn, much of the protein is found in the endosperm (75 to 85%) and is deficient in two essential amino acids, lysine and tryptophan. This deficiency results in an imbalance of amino acids and low protein content; normal maize varieties have less than half of the recommended amino acids for human nutrition. In addition, studies have shown that this deficiency is associated with symptoms of growth impairment, anemia, hypoproteinemia, and fatty liver. Because most of the presently available maize varieties do not contain the quality and quantity of protein necessary for a balanced diet, different countries have focused on research into quality protein maize (QPM). Researchers have characterized QPM, noting that these varieties may contain 70 to 100% more residues of lysine and tryptophan, the amino acids essential for animal and human nutrition, than common corn. Several countries in Africa and Latin America, as well as China, have incorporated QPM into their agricultural development plans. 
Large parts of these countries have chosen a specific QPM variety based on their local needs and climate. Reviews have described maize breeding methods and have revealed a lack of studies on the genetic and proteomic diversity of proteins in QPM varieties and on their genetic relationships with normal maize varieties. Therefore, molecular marker identification using tools such as mass spectrometry may accelerate the selection of plants that carry the desired proteins with high lysine and tryptophan concentrations. To date, QPM lines have played a very important role in alleviating malnutrition, and better characterization of these lines would provide a valuable nutritional enhancement for use in the resource-poor regions of the world. Thus, the objective of this study was to identify proteins in QPM in comparison with a common maize line as a control.

Keywords: corn, mass spectrometry, QPM, tryptophan

Procedia PDF Downloads 288
206 Combating Corruption to Enhance Learner Academic Achievement: A Qualitative Study of Zimbabwean Public Secondary Schools

Authors: Onesmus Nyaude

Abstract:

The aim of the study was to investigate participants’ views on how corruption can be combated to enhance learner academic achievement. The study was undertaken in three selected public secondary institutions in Zimbabwe. It explores the various views of educators, parents, and learners on the role played by corruption in perpetuating the apparent disparities in learner academic achievement across educational institutions, and further interrogates the nexus between the prevalence of corruption in schools and its influence on the academic achievement of learners. Corruption is considered a form of social injustice; in Zimbabwe, the general consensus is that it is perceived to be so rife that it is overtaking the traditional factors contributing to the poor academic achievement of learners. Coupled with this have been gross abuses of power and malpractices arising from the concealment of essential and official transactions in the conduct of business. With robust anti-corruption mechanisms in place, teaching and learning resources poured into schools would be put to good use, preventing the unlawful diversion and misappropriation of these resources, which has long been the norm. This study is of significance to curriculum planners, teachers, parents, and learners. It was informed by the interpretive paradigm; thus, qualitative research approaches were used. Both probability and non-probability sampling techniques were adopted in site and participant selection, yielding a representative sample of 150 participants. The study found that the majority of the participants perceived corruption as a social problem and a human rights threat affecting the quality of teaching and learning in the education sector. 
It was established that the prevalence of corruption within institutions results from the perpetual weakening of ethical values and of other variables linked to the upholding of ‘Ubuntu’ among the general citizenry. It was further established that greed and weak systems are the major causes of rampant corruption within institutions of learning, manifesting through abuse of power, bribery, misappropriation, and embezzlement of material and financial resources. There is therefore a great need to collectively address the problem of corruption in educational institutions and in society at large. The study additionally concludes that successfully combating corruption will promote the moral development of students as well as safeguard their human rights entitlements. The study recommends the adoption of principles of good corporate governance within educational institutions in order to curb corruption, and further recommends the intensification of interventionist strategies, the strengthening of systems in educational institutions, and regular audits to overcome the problems associated with rampant corruption.

Keywords: academic achievement, combating, corruption, good corporate governance, qualitative study

Procedia PDF Downloads 243
205 Modeling of Geotechnical Data Using GIS and Matlab for Eastern Ahmedabad City, Gujarat

Authors: Rahul Patel, S. P. Dave, M. V. Shah

Abstract:

Ahmedabad is a rapidly growing city in western India that is experiencing significant urbanization and industrialization. With projections indicating that it will become a metropolitan city in the near future, various construction activities are taking place, making soil testing a crucial requirement before construction can commence; construction companies and contractors therefore need to conduct soil testing periodically. This study focuses on the process of creating a spatial database that is digitally formatted and integrates geotechnical data with a geographic information system (GIS). Building a comprehensive geotechnical geo-database involves three essential steps. Firstly, borehole data is collected from reputable sources. Secondly, the accuracy and redundancy of the data are verified. Finally, the geotechnical information is standardized and organized for integration into the database. Once the geo-database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topo to Raster interpolation process in GIS, estimated values are assigned to all locations based on the sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, φ-values, and bearing capacity (t/m²). Various interpolation techniques were cross-validated to ensure information accuracy. The GIS maps generated by this study enable the calculation of SPT N-values, φ-values, and bearing capacities for different footing widths at various depths. This approach highlights the potential of GIS in providing an efficient solution to complex problems that would otherwise be tedious to address through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, the system serves as a decision support tool for geotechnical engineers. 
The information generated by this study can be utilized by engineers to make informed decisions during construction activities. For instance, they can use the data to optimize foundation designs and improve site selection. In conclusion, the rapid growth experienced by Ahmedabad requires extensive construction activities, necessitating soil testing. This study focused on the process of creating a comprehensive geotechnical database integrated with GIS. The database was developed by collecting borehole data from reputable sources, verifying its accuracy and redundancy, and organizing the information for integration. The GIS map generated by this study is an efficient solution that offers greater accuracy and generates valuable information that can be used as input for correlation analysis. It also serves as a decision support tool for geotechnical engineers, allowing them to make informed decisions during construction activities.
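The contouring step above assigns estimated values to unsampled locations from the borehole data. The study uses the Topographic to Raster tool in ArcGIS; the underlying idea can be illustrated with a simpler inverse-distance-weighting (IDW) sketch in Python, in which the borehole coordinates and SPT N-values are hypothetical:

```python
import math

def idw_estimate(boreholes, x, y, power=2.0):
    """Estimate a geotechnical value (e.g. an SPT N-value) at (x, y) by
    inverse-distance weighting of sampled borehole values."""
    num = den = 0.0
    for bx, by, value in boreholes:
        d = math.hypot(x - bx, y - by)
        if d == 0.0:
            return value  # exactly at a sampled borehole
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical borehole samples: (easting, northing, SPT N-value)
samples = [(0.0, 0.0, 12.0), (100.0, 0.0, 20.0), (0.0, 100.0, 16.0)]
print(round(idw_estimate(samples, 50.0, 50.0), 2))
```

Dedicated tools such as Topo to Raster add drainage-aware constraints on top of this basic interpolation idea, which is why the study cross-validates several techniques before accepting the contours.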

Keywords: arcGIS, borehole data, geographic information system (GIS), geo-database, interpolation, SPT N-value, soil classification, φ-value, bearing capacity

Procedia PDF Downloads 68
204 Urban Park Characteristics Defining Avian Community Structure

Authors: Deepti Kumari, Upamanyu Hore

Abstract:

Cities are an example of a human-modified environment containing only fragments of urban green space, which are widely considered important for urban biodiversity. The study aims to address avifaunal diversity in urban parks in relation to park size and urbanization intensity, and to identify the key factors affecting species composition and structure, since birds are a good indicator of a healthy ecosystem and are sensitive to changes in the environment. A 50 m line-transect method was used to survey birds in 39 urban parks in Delhi, India. Habitat variables, including vegetation measures (percentage of non-native trees, percentage of native trees, top canopy cover, sub-canopy cover, diameter at breast height, ground vegetation cover, shrub height), were measured using the quadrat method along the transect, and disturbance variables (distance from water, distance from road, distance from settlement, park area, visitor rate, and urbanization intensity) were measured using ArcGIS and Google Earth. We analyzed the species data for diversity and richness and explored the relation of species diversity and richness to habitat variables using a multi-model inference approach. Diversity and richness differed significantly with park size and urbanization intensity: medium-sized parks supported greater diversity, whereas large parks had greater richness, and both diversity and richness declined with increasing urbanization intensity. The results of CCA revealed that species composition in urban parks was positively associated with tree diameter at breast height and distance from settlement. Under the model selection approach, disturbance variables, especially distance from road, urbanization intensity, and visitor rate, were the best predictors of bird species richness in urban parks. 
In comparison, multiple regression analysis between habitat variables and bird diversity suggested that the native tree species in a park may explain the diversity pattern of birds in urban parks. Feeding guilds such as insectivores, omnivores, carnivores, granivores, and frugivores showed a significant relation with vegetation variables, while carnivorous and scavenging bird species responded mainly to disturbance variables. The study highlights the importance of park size and urbanization intensity in urban areas. It also indicates that distance from settlement, distance from road, urbanization intensity, visitor rate, diameter at breast height, and native tree species can be important determinants of bird richness and diversity in urban parks. The study also concludes that the response of feeding guilds to vegetation and disturbance in urban parks varies. We therefore recommend that park size and the surrounding urban matrix be considered when designing and planning urban areas so as to increase bird diversity and richness.
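The model selection approach mentioned above typically ranks candidate models by an information criterion such as AIC. A minimal sketch of the idea, using hypothetical richness and predictor values and a closed-form least-squares fit (the real analysis would compare multi-predictor models), might look like:

```python
import math

def fit_simple_ols(x, y):
    """Least-squares fit of y = a + b*x; returns (a, b, residual sum of squares)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, rss

def aic(rss, n, k):
    """Gaussian-likelihood AIC with k estimated parameters (lower is better)."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical data: bird richness against two candidate predictors
richness  = [10, 14, 18, 22, 25, 30]
dist_road = [50, 120, 200, 300, 420, 500]  # strong predictor in this toy data
visitors  = [80, 75, 90, 60, 85, 70]       # weak predictor in this toy data

models = {}
for name, pred in [("distance_from_road", dist_road), ("visitor_rate", visitors)]:
    _, _, rss = fit_simple_ols(pred, richness)
    models[name] = aic(rss, len(richness), k=3)  # intercept, slope, error variance

best = min(models, key=models.get)
print(best)
```

The model with the lowest AIC is retained; in the study this comparison is what singles out distance from road, urbanization intensity, and visitors as the best predictors of richness.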

Keywords: diversity, feeding guild, urban park, urbanization intensity

Procedia PDF Downloads 121
203 Exploring the Social Health and Well-Being Factors of Hydraulic Fracturing

Authors: S. Grinnell

Abstract:

A PhD research project exploring the social health and well-being impacts associated with hydraulic fracturing, with the aim of producing best practice support guidance for those anticipating dealing with planning applications or submitting Environmental Impact Assessments (EIAs). Amid a possible global energy crisis, founded upon a number of factors including unstable political situations, world population growth, and people living longer, it is perhaps inevitable that hydraulic fracturing (commonly referred to as ‘fracking’) will become a major player within the global long-term energy and sustainability agenda. As there is currently no best practice guidance for governing bodies, the Best Practice Support Document will be targeted at a number of audiences, including consultants undertaking EIAs, planning officers, those commissioning EIAs, industry, and interested public stakeholders. It will offer robust, evidence-based criteria and recommendations that provide a clear narrative and a consistent, shared approach to the language used, together with an understanding of the issues identified. It is proposed that the Best Practice Support Document will also support the mitigation of the health impacts identified. The document will support the newly amended Environmental Impact Assessment Directive (2011/92/EU), to be transposed into UK law by 2017; a significant amendment introduced focuses on a ‘higher level of protection to the environment and health.’ Methodology: A qualitative research methods approach is being taken with this research, with a number of key stages. A literature review has been undertaken and critically reviewed and analysed. This was followed by a descriptive content analysis of a selection of international and national policies, programmes and strategies, along with published Environmental Impact Assessments and associated planning guidance. 
In terms of data collection, a number of stakeholders were interviewed, as well as a number of focus groups of local community groups potentially affected by fracking, drawn from across the UK. A thematic analysis of all the data collected and of the literature review will be undertaken using NVivo. The Best Practice Support Document will be developed based on the outcomes of the analysis and will be tested and piloted in the professional field before a live launch. Concluding statement: Whilst fracking is not a new concept, technology is now driving a new push behind the use of this engineering to supply fuels. A number of countries have pledged moratoria on fracking until the impacts on health have been investigated further, whilst other countries, including Poland and the UK, are pushing to support its use. If this should be the case, it will be important that the public’s concerns, perceptions, fears and objections regarding the wider social health and well-being impacts are considered alongside the more traditional biomedical health impacts.

Keywords: fracking, hydraulic fracturing, socio-economic health, well-being

Procedia PDF Downloads 243
202 Ecological and Historical Components of the Cultural Code of the City of Florence as Part of the Edutainment Project Velonotte International

Authors: Natalia Zhabo, Sergey Nikitin, Marina Avdonina, Mariya Nikitina

Abstract:

This paper analyses one of the events of the international educational and entertainment project Velonotte: an evening bicycle tour with children around Florence. The aim of the project is to develop methods and techniques for increasing the sensitivity of the cycling participants and radio listeners to the treasures of the national heritage, in this case to the historical layers of the city and the ecology of the Renaissance epoch. The block of educational tasks is considered, and the issues of preserving the identity of the city are discussed. Methods: The Florentine event was prepared over more than a year. First of all, the creative team selected events from the city’s history that seemed important for revealing the specific character and spirit of the city, from antiquity to the present day, drawing also on internet forums reflecting broad public opinion. A seven-kilometre route was then developed and proposed to the authorities and organizations of the city. Speakers were selected according to several criteria: they should be authors of books, well-known scholars, or connoisseurs of a particular sphere (toponymy, the history of urban gardens, art history), able and willing to talk with participants directly at the stopping points, so that dialogues and performances could be organized with their participation. Music was chosen for each part of the itinerary to prepare the audience emotionally. Colouring cards with images of the main content of each stop were created for children. A website was created to inform the participants and, afterwards, to host the photos, videos and audio files of the speakers’ talks. Results: Held in April 2017, the event was dedicated to the 640th anniversary of the birth of Filippo Brunelleschi, the Florentine architect, and to the 190th anniversary of the publication of Stendhal’s guide to Florence. It was supported by the City of Florence and the Florence Bike Festival. 
Florence was explored in order to transmit traditional elements of culture, some unfairly forgotten, from antiquity through Brunelleschi and Michelangelo to Tchaikovsky and David Bowie, with lectures by university professors. Memorable art boards were installed in public spaces. Elements of the cultural code are deeply internalized in the minds of the townspeople; the perception of the city in everyday life and human communication is comparable to such fundamental concepts of the townspeople’s self-awareness as mental comfort and the level of happiness. The format of a fun and playful ride with ICT support offers new opportunities for enriching each citizen’s cultural code of the city with new components, associations and connotations.

Keywords: edutainment, cultural code, cycling, sensitization, Florence

Procedia PDF Downloads 219
201 Geospatial Technologies in Support of Civic Engagement and Cultural Heritage: Lessons Learned from Three Participatory Planning Workshops for Involving Local Communities in the Development of Sustainable Tourism Practices in Latiano, Brindisi

Authors: Mark Opmeer

Abstract:

The fruitful relationship between cultural heritage and digital technology is evident. Due to the development of user-friendly software, an increasing number of heritage scholars use ICT for their research activities. As a result, the implementation of information technology for heritage planning has become a research objective in itself. During the last decades, we have witnessed a growing debate and literature about the importance of computer technologies for the fields of cultural heritage and ecotourism. Indeed, implementing digital technology in support of these domains can be very fruitful for one’s research practice. However, due to the rapid development of new software, scholars may find it challenging to use these innovations in an appropriate way. As such, this contribution seeks to explore the interplay between geospatial technologies (geo-ICT), civic engagement, and cultural heritage and tourism. In this article, we discuss our findings on the use of geo-ICT in support of civic participation, cultural heritage and sustainable tourism development in the southern Italian district of Brindisi. In the city of Latiano, three workshops were organized that involved local members of the community in identifying and discussing points of interest (POIs) that represent the cultural significance and identity of the area. During the first workshop, a so-called mappa della comunità was created on a touch table with collaborative mapping software, which allowed the participants to highlight potential destinations for tourist purposes. Furthermore, two heritage-based itineraries along a selection of the identified POIs were created to make the region attractive for recreationists and tourists. These heritage-based itineraries reflect the community’s ideas about the cultural identity of the region. 
Both trails were subsequently implemented in a dedicated mobile application (app) and evaluated with the members of the community during the second workshop, using a mixed-method approach. In the final workshop, the findings of the collaboration, the heritage trails and the app were evaluated with all participants. Based on our conclusions, we argue that geospatial technologies have significant potential for involving local communities in heritage planning and tourism development. The participants of the workshops found it highly engaging to share their ideas and knowledge using the digital map on the touch table. Secondly, the use of a mobile application as an instrument to test the heritage-based itineraries in the field was broadly considered fun and beneficial for enhancing community awareness of, and participation in, local heritage. The app furthermore stimulated the community’s awareness of the added value of geospatial technologies for sustainable tourism development in the area. We conclude this article with a number of recommendations intended to provide best practice for organizing heritage workshops with similar objectives.

Keywords: civic engagement, geospatial technologies, tourism development, cultural heritage

Procedia PDF Downloads 287
200 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all the needy industrial road sections within a given planning period. A rational and systematic priority scheme therefore needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions are required: is it more important to repair a section in poor functional condition (e.g. one giving an uncomfortable ride) or one in poor structural condition, i.e. a section in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of each section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions available for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques: a linear programming model provides an efficient method for determining an optimal decision from among a large number of possible decisions, where the optimum decision is one that meets a specified management objective subject to various constraints and restrictions. Here, the objective is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function of the distress model, realistic data must be fitted into the formulation. 
Each type of repair is quantified per stretch, taking 1000 m as one stretch; the section considered in this study is 3750 m long. These quantities are put into an objective function for maximizing the number of repairs in a stretch in relation to quantity. The distresses observed in this section are potholes, surface cracks, rutting and ravelling, and the distress data are measured manually by recording each distress level on every 1000 m stretch. The maintenance and rehabilitation measures currently followed are based on subjective judgments; hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits to the road network in terms of vehicle operating cost, so that the road network infrastructure yields the best results expected from the available funds. In this paper, the objective function of the distress model is determined by linear programming, and a deterioration model that accounts for overloading is discussed.
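The abstract does not give the linear-programming formulation itself; a minimal illustrative model of the same idea — maximize the number of repairs carried out across the four distress types subject to a budget constraint, with hypothetical unit costs and distress counts — can be sketched as follows (solved here by brute-force enumeration rather than a dedicated LP/IP solver, which is workable only because the instance is tiny):

```python
from itertools import product

# Hypothetical unit repair costs (currency units per repair) and the
# measured number of each distress in the section under study.
costs  = {"pothole": 500, "surface_crack": 200, "rutting": 800, "ravelling": 300}
counts = {"pothole": 6, "surface_crack": 10, "rutting": 4, "ravelling": 8}
budget = 5000

names = list(costs)
best_plan, best_total = None, -1
# Enumerate every feasible repair plan (quantities per distress type)
# and keep the one that fixes the most distresses within budget.
for plan in product(*(range(counts[n] + 1) for n in names)):
    cost = sum(q * costs[n] for q, n in zip(plan, names))
    if cost <= budget and sum(plan) > best_total:
        best_total, best_plan = sum(plan), dict(zip(names, plan))

print(best_total, best_plan)
```

A real formulation would additionally weight repairs by functional versus structural importance, as the priority discussion above requires, rather than treating every repair equally.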

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 207
199 Effectiveness, Safety, and Tolerability Profile of Stribild® in HIV-1-infected Patients in the Clinical Setting

Authors: Heiko Jessen, Laura Tanus, Slobodan Ruzicic

Abstract:

Objectives: The efficacy of Stribild®, an integrase strand transfer inhibitor (INSTI)-based single-tablet regimen (STR), has been evaluated in randomized clinical trials, where it has demonstrated a durable ability to achieve sustained suppression of HIV-1 RNA levels. However, differences in monitoring frequency, existing selection bias, and the profile of patients enrolled in the trials may all result in divergent efficacy of this regimen in routine clinical settings. The aim of this study was to assess the virologic outcomes and the safety and tolerability profile of Stribild® in a routine clinical setting. Methods: This was a retrospective monocentric analysis of HIV-1-infected patients who started on, or were switched to, Stribild®. Virological failure (VF) was defined as confirmed HIV-RNA >50 copies/ml. The minimum time of follow-up was 24 weeks. The percentage of patients remaining free of therapeutic failure was estimated using the time-to-loss-of-virologic-response (TLOVR) algorithm, by intent-to-treat analysis. Results: We analyzed the data of 197 patients (56 ART-naïve and 141 treatment-experienced patients) who fulfilled the inclusion criteria. The majority (95.9%) of patients were male. The median time since HIV infection at baseline was 2 months in treatment-naïve and 70 months in treatment-experienced patients. The median time [IQR] under ART in treatment-experienced patients was 37 months. Among the treatment-experienced patients, 27.0% had already been treated with a regimen consisting of two NRTIs and one INSTI, and 18.4% of them had experienced VF. The median time [IQR] of virological suppression prior to therapy with Stribild® in the treatment-experienced patients was 10 months [0-27]. At the end of follow-up (median 33 months), 87.3% (95% CI, 83.5-91.2) of treatment-naïve and 80.3% (95% CI, 75.8-84.8) of treatment-experienced patients remained free of therapeutic failure. 
Considering only treatment-experienced patients with a baseline VL <50 copies/ml, 83.0% (95% CI, 78.5-87.5) remained free of therapeutic failure. A total of 17 patients stopped treatment with Stribild®: 5.4% (3/56) of the treatment-naïve and 9.9% (14/141) of the treatment-experienced patients. Stribild® was discontinued in 2 patients (1.0%) because of VF, in 4 (2.0%) because of loss to follow-up, and in 2 (1.0%) because of drug-drug interactions. Adverse events were the reason for switching away from Stribild® in 7 (3.6%) patients, and a further 2 (1.0%) patients decided to switch for personal reasons. The most frequently observed adverse events were gastrointestinal side effects (20.0%), headache (8%), rash (7%) and dizziness (6%). In two patients we observed the emergence of novel resistance mutations in the integrase gene: N155H evolved in one patient and resulted in VF, while in another patient S119R evolved either during or shortly after the switch from therapy with Stribild®. In one further patient with VF, two novel mutations in the RT gene (V106I/M and M184V) were observed when compared with a historical genotypic test result, although it is not clear whether they evolved during or before the switch to Stribild®. Conclusions: The effectiveness of Stribild® in treatment-naïve patients was consistent with data obtained in clinical trials. The safety and tolerability profile, together with the observed resistance development, supports the clinical efficacy of Stribild® in a daily-practice setting.
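The TLOVR algorithm used above has a number of protocol-specific details (intent-to-treat handling of discontinuations, confirmation windows); its core rule — a patient remains a responder until the first confirmed rebound, i.e. two consecutive viral-load measurements above the 50 copies/ml threshold — can be sketched in simplified form with hypothetical follow-up series:

```python
def tlovr_failure_visit(viral_loads, threshold=50):
    """Return the index of the first confirmed virologic failure (two
    consecutive values above threshold), or None if the patient remains
    a responder throughout. A simplified sketch of the TLOVR rebound rule."""
    for i in range(len(viral_loads) - 1):
        if viral_loads[i] > threshold and viral_loads[i + 1] > threshold:
            return i
    return None

# Hypothetical follow-up series (HIV-1 RNA copies/ml per visit)
responder = [40, 20, 35, 20, 49]     # stays suppressed: no failure
blip      = [20, 400, 30, 20, 45]    # single unconfirmed blip: no failure
failure   = [30, 20, 600, 1200, 50]  # confirmed rebound at visit index 2

print(tlovr_failure_visit(responder), tlovr_failure_visit(blip), tlovr_failure_visit(failure))
```

In the full algorithm, treatment discontinuation for any reason also counts as loss of response under intent-to-treat, which is why the abstract reports "free of therapeutic failure" rather than virologic suppression alone.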

Keywords: ART, HIV, integrase inhibitor, stribild

Procedia PDF Downloads 285
198 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases, including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies that matched a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance. A parallel manual selection of all initially searched studies was conducted for validation. The final inclusion of studies was based on full-text evaluation, quality assessment, and meta-regression models using random effects. Sensitivity analysis and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the cluster associated with significant improvement exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster. 
Furthermore, BMI, weight gain, and disease duration were identified as factors that influenced cluster assignment, with patients having lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring the comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes.
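The patient-level clustering described above — grouping responders by biomarker changes — can be sketched with a minimal K-means implementation; the per-patient response vectors below (change in HbA1c and in fasting blood sugar after supplementation) are hypothetical, and a production analysis would use a library implementation with multiple restarts:

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Minimal K-means: returns a cluster label for each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centre by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centers[c])))
                  for pt in points]
        # Update step: move each centre to the mean of its assigned points.
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

# Hypothetical per-patient responses: (change in HbA1c %, change in FBS mg/dl)
responses = [(-1.2, -25), (-1.0, -30), (-0.9, -22),  # marked improvement
             (-0.1, -2), (0.0, 3), (0.1, -1)]        # minimal change
print(kmeans(responses, k=2))
```

With two well-separated response patterns, the algorithm recovers exactly the two clusters the meta-analysis describes: an improvement cluster and a minimal-change cluster. (Features on different scales, as here, should normally be standardized first.)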

Keywords: HbA1C, T2DM, SBP, FBS

Procedia PDF Downloads 11
197 Microalgae Technology for Nutraceuticals

Authors: Weixing Tan

Abstract:

Production of nutraceuticals from microalgae—a virtually untapped natural phyto-based source of which there are 200,000 to 1,000,000 species—offers a sustainable and healthy alternative to conventionally sourced nutraceuticals for the market. Microalgae can be grown organically using only natural sunlight, water and nutrients at an extremely fast rate, e.g. 10-100 times more efficiently than crops or trees. However, the commercial success of microalgae products at scale remains limited, largely due to the lack of economically viable technologies. There are two major microalgae production systems or technologies currently available: 1) the open system, as represented by open pond technology, and 2) the closed system, such as photobioreactors (PBR). Each carries its own unique features and challenges. Although an open system requires a lower initial capital investment relative to a PBR, it entails many unavoidable drawbacks: for example, much lower productivity, difficulty in contamination control/cleaning, inconsistent product quality, difficulty in automation, restriction in location selection, and unsuitability for cold areas—all directly linked to the system's openness and flat, in-ground design. On the other hand, a PBR system has characteristics almost entirely opposite to the open system, such as higher initial capital investment, better productivity, better contamination and environmental control, wider suitability in different climates, ease of automation, higher and more consistent product quality, higher energy demand (particularly if using artificial lights), and variable operational expenses if not automated. Although closed systems like PBRs are not yet highly competitive in the current nutraceutical supply market, technological advances can be made, in particular to PBR technology, to narrow the gap significantly. 
One example is a readily scalable P2P Microalgae PBR Technology at Grande Prairie Regional College, Canada, developed over 11 years considering return on investment (ROI) for key production processes. The P2P PBR system is approaching economic viability at a pre-commercial stage due to five ROI-integrated major components. They include: (1) optimum use of free sunlight through attenuation (patented); (2) simple, economical, and chemical-free harvesting (patent ready to file); (3) optimum pH- and nutrient-balanced culture medium (published), (4) reliable water and nutrient recycling system (trade secret); and (5) low-cost automated system design (trade secret). These innovations have allowed P2P Microalgae Technology to increase daily yield to 106 g/m2/day of Chlorella vulgaris, which contains 50% proteins and 2-3% omega-3. Based on the current market prices and scale-up factors, this P2P PBR system presents as a promising microalgae technology for market competitive nutraceutical supply.

Keywords: microalgae technology, nutraceuticals, open pond, photobioreactor PBR, return on investment ROI, technological advances

Procedia PDF Downloads 157
196 Bank Failures: A Question of Leadership

Authors: Alison L. Miles

Abstract:

Almost all major financial institutions in the world suffered losses due to the financial crisis of 2007, but the extent varied widely. The causes of the crash of 2007 are well documented and predominantly focus on the role and complexity of the financial markets. The dominant theme of the literature suggests the causes of the crash were a combination of globalization, financial sector innovation, moribund regulation and short-termism. While these arguments are undoubtedly true, they do not tell the whole story. A key weakness in the current analysis is the lack of consideration of those leading the banks before and during times of crisis. The purpose of this study is to examine the possible link between the leadership styles and characteristics of the CEO, CFO and chairman and the financial institutions that failed or needed recapitalization. As such, it contributes to the literature and debate on international financial crises and systemic risk, and also to the debate on risk management and regulatory reform in the banking sector. In order first to test the proposition (P1) that there are prevalent leadership characteristics or traits in financial institutions, an initial study was conducted using a sample of the 65 largest global banks and financial institutions according to The Banker Top 1000 Banks 2014. Secondary data from publicly available and official documents, annual reports, treasury and parliamentary reports, together with a selection of press articles and analyst meeting transcripts, were collected longitudinally for the period 1998 to 2013. A computer-aided keyword search was used to identify the leadership styles and characteristics of the chairman, CEO and CFO. The results were then compared with the leadership models to form a picture of leadership in the sector during the research period. 
As this produced separate results that needed combining, the SPSS data editor was used to aggregate the results across the studies using the variables ‘leadership style’ and ‘company financial performance’, together with company size. In order to test the proposition (P2) that there was a prevalent leadership style in the banks that failed, and the proposition (P3) that it differed from the style in those that did not, further quantitative analysis was carried out on the leadership styles of the chairman, CEO and CFO of banks that needed recapitalization, were taken over, or required government bail-out assistance during 2007-8. These included Lehman Brothers, Merrill Lynch, Royal Bank of Scotland, HBOS, Barclays, Northern Rock, Fortis and Allied Irish. The findings show that although regulatory reform has been a key mechanism for controlling behaviour in the banking sector, the leadership characteristics of those running the board are a key factor. They add weight to the argument that if each crisis is met with the same pattern of popular fury at the financier, increased regulation, and then a return to business as usual, the cycle of failure will always be repeated; viewed through a different lens, however, new paradigms can be formed and future crashes avoided.
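The abstract does not specify how the computer-aided keyword search coded documents into leadership styles; a minimal sketch of the general idea — counting style-indicative terms in a document and assigning the dominant style — with entirely hypothetical keyword lists and example text, might be:

```python
import re
from collections import Counter

# Hypothetical keyword lists mapping indicative terms to leadership styles;
# a real coding scheme would be derived from the leadership literature.
STYLE_KEYWORDS = {
    "transformational": {"vision", "inspire", "innovation", "change"},
    "transactional": {"target", "reward", "control", "compliance"},
}

def classify_style(text):
    """Count style-indicative keywords in a document and return the style
    with the highest count, or None if no keyword occurs."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for style, keywords in STYLE_KEYWORDS.items():
        counts[style] = sum(1 for w in words if w in keywords)
    style, n = counts.most_common(1)[0]
    return style if n > 0 else None

report = ("The chairman set out a bold vision to inspire staff and "
          "drive innovation, rather than rule by reward and control.")
print(classify_style(report))
```

Per-document labels produced this way are what would then be aggregated against financial performance and company size in the SPSS step described above.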

Keywords: banking, financial crisis, leadership, risk

Procedia PDF Downloads 318