Search results for: energy problem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14577

357 Research on Land Use Pattern and Employment-Housing Space of Coastal Industrial Town Based on the Investigation of Liaoning Province, China

Authors: Fei Chen, Wei Lu, Jun Cai

Abstract:

During the Twelfth Five-Year Plan period, China promulgated industrial policies promoting the relocation of energy-intensive industries to coastal areas in order to utilize marine shipping resources. Consequently, some major state-owned steel and gas enterprises have relocated, resulting in large-scale coastal area development. However, some land may have been over-exploited by seamless coastline projects. To balance employment and housing, new industrial coastal towns were constructed to support this industrial-led development. In this paper, we adopt a case-study approach to closely examine the development of several new industrial coastal towns in Liaoning Province, situated in the Bohai Bay area, which is currently experiencing rapid economic growth. Our investigations reveal a common pattern of long-distance commuting and a large number of vacant residences. More specifically, large plant relocations created daily commutes of hundreds of kilometers, and enterprises had to provide housing subsidies and education incentives to motivate employees to relocate to coastal areas. Nonetheless, many employees still refuse to relocate because of concerns about job stability, the diverse needs of family members, and access to convenient services. These employees average four hours of commuting daily, and some who live farther away have to reside in temporary industrial housing units, subject to long-term family separation. As a result, only a small portion of employees purchase new coastal residences, and mostly for investment and retirement purposes, leading to massive vacancy and a ghost-town phenomenon. In contrast to the low demand, coastal areas tend to develop large amounts of housing prior to industrial relocation, which may be directly related to local government finances: some local governments have sold residential land to developers to generate revenue to support the subsequent industrial development. Given the strong preference for ocean views, residential developers tend to select coastline land to construct new residential towns, which further reduces major industrial enterprises' access to marine resources. This violates the original intent of developing industrial coastal towns and drastically limits the availability of marine resources. Lastly, we analyze the co-existence of over-exploited residential areas and massive vacancies with reference to the demand and supply of land, as well as the demand for residential housing units in light of the choice criteria of enterprise employees.

Keywords: coastal industry town, commuter traffic, employment-housing space, outer suburb industrial area

Procedia PDF Downloads 197
356 Catchment Nutrient Balancing Approach to Improve River Water Quality: A Case Study at the River Petteril, Cumbria, United Kingdom

Authors: Nalika S. Rajapaksha, James Airton, Amina Aboobakar, Nick Chappell, Andy Dyer

Abstract:

Nutrient pollution and its impact on water quality is a key concern in England. Many water quality issues originate from multiple sources of pollution spread across the catchment. River water quality in England has improved since the 1990s, and wastewater effluent discharges into rivers now contain less phosphorus than in the past. However, excess phosphorus is still recognised as the prevailing issue for rivers failing Water Framework Directive (WFD) good ecological status. To achieve WFD phosphorus objectives, Wastewater Treatment Works (WwTW) permit limits are becoming increasingly stringent. Nevertheless, in some rural catchments, the apportionment of phosphorus pollution can be greater from agricultural runoff and other sources such as septic tanks. Therefore, the challenge of meeting the requirements of watercourses to deliver WFD objectives often goes beyond water company activities, providing significant opportunities to co-deliver activities in wider catchments to reduce nutrient load at source. The aim of this study was to apply United Utilities' Catchment Systems Thinking (CaST) strategy and pilot an innovative permitting approach, Catchment Nutrient Balancing (CNB), in a rural catchment in Cumbria (the River Petteril) in collaboration with the regulator and others to achieve WFD objectives and multiple benefits. The study area is mainly agricultural land, predominantly livestock farms. The local ecology is impacted by significant nutrient inputs, which require intervention to meet WFD obligations. There is a range of phosphorus inputs into the river, including discharges from wastewater assets but also, significantly, agricultural contributions. Solely focusing on the WwTW discharges would not have resolved the problem; hence, to address the issue effectively, a CNB trial was initiated at a small WwTW, targeting the removal of a total of 150 kg of phosphorus load, of which 13 kg were to be reduced through catchment interventions. Various catchment interventions were implemented across selected farms in the upstream part of the catchment, and an innovative Polonite reactive filter medium was implemented at the WwTW as an alternative to traditional phosphorus treatment methods. During the three years of this trial, the impact of the interventions in the catchment and at the treatment works was monitored. In 2020 and 2022, the trial achieved 69% and 63% reductions, respectively, in the phosphorus level in the catchment against the initial reduction target of 9%. Phosphorus treatment at the WwTW had a significant impact on the overall load reduction; the wider catchment impact, however, was seven times greater than the initial target once wider catchment interventions were also established. While it is unlikely that all of the phosphorus load reduction was delivered exclusively by the interventions implemented through this project, the trial evidenced the enhanced benefits that can be achieved with an integrated approach that engages all sources of pollution within the catchment, rather than focusing on a one-size-fits-all solution. Primarily, the CNB approach and the act of collaboratively engaging others, particularly the agriculture sector, is likely to yield improved farm and land management performance and better compliance, which can lead to improved river quality as well as wider benefits.
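A back-of-the-envelope check helps relate the figures quoted above. Reading the 9% target as the catchment-intervention share of the total phosphorus removal target (13 kg out of 150 kg) is our interpretation of the abstract; the sketch below only reproduces that arithmetic.

```python
# Illustrative arithmetic only: relates the 13 kg / 150 kg intervention split to the
# quoted 9% target and the reported 69% / 63% catchment reductions.
target_total_kg = 150.0      # total phosphorus load targeted for removal
target_catchment_kg = 13.0   # share to come from catchment interventions

catchment_share = target_catchment_kg / target_total_kg
print(f"Catchment intervention target: {catchment_share:.1%}")  # ~8.7%, i.e. roughly the quoted 9%

# Reported catchment-level reductions, expressed as multiples of that target
# (broadly consistent with the "seven times greater" figure in the abstract).
for year, achieved in [(2020, 0.69), (2022, 0.63)]:
    print(f"{year}: achieved {achieved:.0%}, about {achieved / catchment_share:.1f}x the initial target")
```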

Keywords: agriculture, catchment nutrient balancing, phosphorus pollution, water quality, wastewater

Procedia PDF Downloads 44
355 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint

Authors: Juliane Spaak

Abstract:

A life-cycle assessment was performed on seven retrofit projects in New Zealand using LCAQuick V3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building's life. In addition, the buildings were evaluated for environmental impacts over their design life and for resilience using FEMA P-58 and the PACT software. With the increasing interest in zero-carbon targets, significant changes in the building and construction sector are required. Emissions from buildings arise from both embodied carbon and operations. Given the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building's embodied carbon. New Zealand's building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity of moving away from constructing code-minimum structures that are designed for life safety but are frequently 'disposable' after a moderate earthquake event, especially in relation to a structure's ability to minimize damage. This means weaker buildings sit as 'carbon liabilities', with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building's structure is vastly more sustainable than the highest-quality 'green' new builds, which are inherently more carbon-intensive. The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society's desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset's revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and 'sustainable design' has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset's expected lifespan, be that earthquakes, storms, damage, bushfires, fires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.

Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient

Procedia PDF Downloads 53
354 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design

Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez

Abstract:

Coffee is the second most consumed commodity worldwide, yet it also generates colossal amounts of waste. Proper management of coffee waste has been proposed by converting it into products with higher added value, in order to achieve a sustainable economic and ecological footprint and protect the environment. On this basis, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste stream of the coffee industry. The fact that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture, and, especially, have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and to ensure more environmentally friendly production. From this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction with n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated using a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. Validation of the model by analysis of variance (ANOVA) showed a good fit for the results obtained at a 95% confidence interval, and the plot of predicted vs. experimental values confirmed the satisfactory correlation of the model results. Moreover, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for extracting oil as a more economical, greener, and more efficient approach compared to the Soxhlet method.
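For readers unfamiliar with a 2³ CCRD, the following minimal sketch shows how such a design (factorial, axial, and centre points in coded units) can be assembled and a full quadratic response-surface model fitted. The alpha value is the standard rotatable choice; the yield values are invented placeholders, not the study's data, and the study itself used STATISTICA rather than Python.

```python
# Minimal sketch of fitting a second-order response-surface model to a
# three-factor CCRD (temperature, sonication time, solvent-to-solid ratio).
import numpy as np

alpha = 1.682  # rotatable axial distance for a 2^3 design
factorial = np.array([[x1, x2, x3] for x1 in (-1, 1)
                                   for x2 in (-1, 1)
                                   for x3 in (-1, 1)], dtype=float)
axial = np.array([[s * alpha if i == j else 0.0 for j in range(3)]
                  for i in range(3) for s in (-1, 1)])
centre = np.zeros((3, 3))                      # centre-point replicates
X = np.vstack([factorial, axial, centre])      # 17 coded design points

# Hypothetical oil yields (%) at each design point -- placeholders only
y = np.array([7.1, 7.8, 7.5, 8.4, 7.9, 8.8, 8.3, 9.2,
              7.0, 8.9, 7.4, 8.6, 7.7, 9.0,
              9.1, 9.2, 9.0])

def quadratic_terms(x):
    """Full second-order model: intercept, linear, interaction and squared terms."""
    x1, x2, x3 = x
    return [1.0, x1, x2, x3, x1 * x2, x1 * x3, x2 * x3, x1 ** 2, x2 ** 2, x3 ** 2]

A = np.array([quadratic_terms(row) for row in X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("Fitted response-surface coefficients:", np.round(coeffs, 3))
```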

Keywords: coffee waste, optimization, oil yield, statistical planning

Procedia PDF Downloads 96
353 Valorisation of Food Waste Residue into Sustainable Bioproducts

Authors: Krishmali N. Ekanayake, Brendan J. Holland, Colin J. Barrow, Rick Wood

Abstract:

Globally, more than one-third of all food produced is lost or wasted, equating to 1.3 billion tonnes per year. Around 31.2 million tonnes of food waste are generated across the production, supply, and consumption chain in Australia. Generally, food waste management adopts environmentally friendly and more sustainable approaches such as composting, anaerobic digestion, and energy-recovery technologies. However, unavoidable and non-recyclable food waste ends up in landfill or incineration, which involve many undesirable environmental impacts and challenges. A biorefinery approach contributes to a waste-minimising circular economy by converting food and other organic biomass waste into valuable outputs, including feeds, nutrition, fertilisers, and biomaterials. As a solution, Green Eco Technologies has developed a food waste treatment process using the WasteMaster system. The system uses charged oxygen and moderate temperatures to convert food waste, without bacteria, additives, or water, into a virtually odour-free, much reduced quantity of reusable residual material. In the context of a biorefinery, the WasteMaster dries and mills food waste into a form suitable for storage or downstream extraction/separation/concentration to create products. The focus of this study is to determine the nutritional composition of WasteMaster-processed residue in order to potentially develop aquafeed ingredients. The global aquafeed industry is projected to become a high-value market, with strong demand for aquafeed products; therefore, food waste can be utilised for aquaculture feed development while reducing landfill. This framework would lessen the requirement for raw crop cultivation for aquafeed development and reduce the aquaculture footprint. In the present study, the nutritional elements of the processed residue were consistent with the input food waste type, showing that the WasteMaster does not affect the expected nutritional distribution. The macronutrient retention values of protein, lipid, and nitrogen-free extract (NFE) were >85%, >80%, and >95%, respectively. Sensitive food components, including omega-3 and omega-6 fatty acids, amino acids, and phenolic compounds, were found intact in each residue material. Preliminary analysis suggests price comparability with current aquafeed ingredient costs, supporting economic feasibility. The results suggest a high potential for aquafeed development, with the residue providing 5 to 10% of the ingredients to replace or partially substitute other, less sustainable ingredients in a biorefinery setting. Our aim is to improve the sustainability of aquaculture and reduce the environmental impacts of food waste.

Keywords: biorefinery, food waste residue, input, WasteMaster

Procedia PDF Downloads 36
352 Deconstructing Reintegration Services for Survivors of Human Trafficking: A Feminist Analysis of Australian and Thai Government and Non-Government Responses

Authors: Jessica J. Gillies

Abstract:

Awareness of the tragedy that is human trafficking has increased exponentially over the past two decades. The four pillars widely recognised as global solutions to the problem are prevention, prosecution, protection, and partnership between government and non-government organisations. While 'sex-trafficking' initially received major attention, this focus has shifted to other industries that conceal broader experiences of exploitation. However, within the regions of focus for this study, namely Australia and Thailand, trafficking for the purpose of sexual exploitation remains the commonly uncovered narrative of criminal justice investigations. In these regions, anti-trafficking action is characterised by government-led prevention and prosecution efforts, whereas protection and reintegration practices have received criticism. Typically, non-government organisations straddle the critical chasm between policy and practice; therefore, they are perfectly positioned to contribute valuable experiential knowledge toward understanding how both sectors can support survivors in the post-trafficking experience. The aim of this research is to inform improved partnerships throughout government and non-government post-trafficking services by illuminating gaps in protection and reintegration initiatives. This research will explore government and non-government responses to human trafficking in Thailand and Australia, in order to understand how meaning is constructed in this context and how the construction of meaning affects survivors in the post-trafficking experience. A qualitative, three-stage methodology was adopted for this study. The initial stage of enquiry consisted of a discursive analysis, in order to deconstruct the broader discourses surrounding human trafficking. The data included empirical papers, grey literature such as publicly available government and non-government reports, and anti-trafficking policy documents. The second and third stages of enquiry will attempt to further explore the findings of the discourse analysis and will focus more specifically on protection and reintegration in Australia and Thailand. Stages two and three will incorporate process observations in government and non-government survivor support services, and semi-structured interviews with employees and volunteers within these settings. Two key findings emerged from the discursive analysis. The first exposed conflicting feminist arguments embedded throughout anti-trafficking discourse. Informed by conflicting feminist discourses on sex work, a discursive relationship has been constructed between sex-industry policy and anti-trafficking policy. In response to this finding, data emerging from the process observations and semi-structured interviews will be interpreted using a feminist theoretical framework. The second finding builds on the construction identified in the first. The discursive construction of sex-trafficking appears to have influenced perceptions of the legitimacy of survivors, and therefore the support they receive in the post-trafficking experience. For example, women who willingly migrate for employment in the sex industry, and on arrival are faced with exploitative conditions, are not perceived to be deserving of the same support as a woman who is not coerced but rather physically forced into such circumstances, yet both meet the criteria for a victim of human trafficking.
The forthcoming study is intended to contribute toward building knowledge and understanding of the implications of the construction of legitimacy, and to contextualise this with reference to government-led protection and reintegration support services for survivors in the post-trafficking experience.

Keywords: Australia, government, human trafficking, non-government, reintegration, Thailand

Procedia PDF Downloads 89
351 The Photovoltaic Panel at End of Life: Experimental Study of Metals Release

Authors: M. Tammaro, S. Manzo, J. Rimauro, A. Salluzzo, S. Schiavo

Abstract:

Solar photovoltaic (PV) modules are considered to have a negligible environmental impact compared to fossil energy. Nevertheless, their waste management and the corresponding potential environmental hazard also need to be considered. The case of the photovoltaic panel is unique because the time lag from manufacturing to decommissioning as waste is usually 25-30 years. The environmental hazard associated with the end of life of PV panels has largely been related to their metal content. The principal concern regards the presence of heavy metals such as Cd in thin film (TF) modules or Pb and Cr in crystalline silicon (c-Si) panels. At the end of life of PV panels, these dangerous substances could be released into the environment if special requirements for their disposal are not adopted. Nevertheless, only a few experimental studies of metal emissions from crystalline silicon/thin film panels and the corresponding environmental effects are present in the literature. As part of a study funded by the Italian national consortium for waste collection and recycling (COBAT), the present work aimed to analyze experimentally the potential release into the environment of hazardous elements, particularly metals, from PV waste. In this paper, for the first time, eighteen releasable metals from a large number of photovoltaic panels, both c-Si and TF, manufactured in the last 30 years were investigated, together with the environmental effects assessed by a battery of ecotoxicological tests. Leaching tests were conducted on crushed samples of the PV modules. The tests were conducted according to the Italian and European standard procedures for hazard assessment of granular waste and sludge. The sample material was shaken for 24 hours in HDPE bottles with a VELP Rotax 6.8 overhead mixer at indoor temperature, using pure water (18 MΩ resistivity) as the leaching solution. The liquid-to-solid ratio was 10 (L/S = 10, i.e. 10 liters of water per kg of solid). The ecotoxicological tests were performed in the subsequent 24 hours. A battery of toxicity tests with bacteria (Vibrio fischeri), algae (Pseudokirchneriella subcapitata), and crustacea (Daphnia magna) was carried out on PV panel leachates obtained as previously described and immediately stored in the dark at 4°C until testing (within the next 24 hours). To understand the actual pollution load, a comparison with the current European and Italian benchmark limits was performed. The trend of leachable metal amounts from panels in relation to manufacturing year was then highlighted in order to assess the environmental sustainability of PV technology over time. The experimental results were very heterogeneous and show that photovoltaic panels could represent an environmental hazard: the amounts of some hazardous metals (Pb, Cr, Cd, Ni), for both c-Si and TF, exceeded the legal limits and are a clear indication of the potential environmental risk of photovoltaic panels as a waste without proper management.
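The L/S = 10 ratio stated above links a measured leachate concentration directly to the mass released per kilogram of crushed panel. The short sketch below only illustrates that conversion; the concentrations and the limit are invented placeholders, not the study's measurements or the actual Italian/European thresholds.

```python
# Illustrative back-calculation for an L/S = 10 leaching test:
# released mass per kg of solid = leachate concentration (mg/L) x 10 L/kg.
LIQUID_TO_SOLID = 10.0  # litres of leachant per kg of crushed PV material

def released_mg_per_kg(leachate_mg_per_l: float, ls_ratio: float = LIQUID_TO_SOLID) -> float:
    """Convert a leachate concentration into the mass released per kg of solid."""
    return leachate_mg_per_l * ls_ratio

hypothetical_leachates = {"Pb": 0.12, "Cr": 0.08, "Cd": 0.015, "Ni": 0.05}  # mg/L, invented values
hypothetical_limit_mg_per_kg = 0.5                                          # invented threshold

for metal, conc in hypothetical_leachates.items():
    release = released_mg_per_kg(conc)
    flag = "exceeds" if release > hypothetical_limit_mg_per_kg else "below"
    print(f"{metal}: {release:.2f} mg/kg ({flag} the placeholder limit)")
```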

Keywords: photovoltaic panel, environment, ecotoxicity, metals emission

Procedia PDF Downloads 248
350 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take account of the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with the attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
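As a rough illustration of the kind of DNABERT-style workflow described above, the sketch below loads a bidirectional encoder with a sequence-classification head and scores a DNA sequence tokenized into overlapping k-mers. The checkpoint name, the 6-mer tokenization, and the two expression labels are assumptions for illustration, not the authors' exact configuration, and the classification head would still need fine-tuning on labelled circuit data before its outputs mean anything.

```python
# Minimal sketch of scoring a DNA sequence with a DNABERT-style encoder plus
# an (untrained) classification head, using the Hugging Face transformers API.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "zhihan1996/DNA_bert_6"  # assumed public 6-mer DNABERT checkpoint

def kmerize(seq: str, k: int = 6) -> str:
    """Split a DNA sequence into overlapping k-mers, the input format DNABERT expects."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

# Toy input: classify a circuit sequence as high/low expression (labels illustrative)
sequence = "ATGCGTACCTGAAGGCTAACGTTAGC"
inputs = tokenizer(kmerize(sequence), return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted class index:", logits.argmax(dim=-1).item())
```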

Keywords: transformers, generative ai, gene expression design, classification

Procedia PDF Downloads 38
349 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence

Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti

Abstract:

In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system of all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is also becoming relevant for local territories. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out the multiple functions to which it is called, the DMO can leverage a collective intelligence that comes from the ability to pool the information, explicit and tacit knowledge, and relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, many of them at low cost, that need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy. The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadata cataloguing of relevant sources (reconnaissance of official sources, administrative archives, and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to their consistency with information needs, the freshness of information, and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities, and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The resulting framework consists of six thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that capture a specific information need, represented by a set of questions to be answered through the analysis of available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources. Application to the case study led to the creation of a decision support system that allows: (i) integration of data from heterogeneous sources, including through the execution of automated web crawling procedures for the ingestion of social and web information; (ii) reading and interpretation of data and metadata through guided navigation paths in the form of digital storytelling; and (iii) implementation of complex analysis capabilities through the use of data mining algorithms, such as for the prediction of tourist flows.
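To make phase (4) concrete, the sketch below shows one possible ingestion step: downloading an open-data series of tourist arrivals and turning it into a simple indicator for the "flows" thematic area. The URL, the CSV layout, and the seasonality indicator are hypothetical placeholders, not the project's actual sources or indicator definitions.

```python
# Illustrative data-acquisition and indicator-computation step (phase 4 of the framework).
from io import StringIO

import pandas as pd
import requests

SOURCE_URL = "https://example.org/open-data/arrivals.csv"  # hypothetical open-data endpoint

def fetch_monthly_arrivals(url: str) -> pd.DataFrame:
    """Download a CSV of monthly tourist arrivals and parse it into a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.read_csv(StringIO(response.text), parse_dates=["month"])

def seasonality_indicator(df: pd.DataFrame) -> float:
    """Simple 'flows' indicator: ratio of peak-month arrivals to the monthly average."""
    monthly = df.groupby(df["month"].dt.month)["arrivals"].mean()
    return float(monthly.max() / monthly.mean())

if __name__ == "__main__":
    arrivals = fetch_monthly_arrivals(SOURCE_URL)
    print(f"Seasonality indicator: {seasonality_indicator(arrivals):.2f}")
```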

Keywords: collective intelligence, data framework, destination management, smart tourism

Procedia PDF Downloads 99
348 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data therefore aims to take advantage of the full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
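To give a sense of the kind of CNN and composite loss described above, the sketch below maps a two-channel hybrid/dual-polarimetric input to a multi-channel pseudo fully polarimetric output and combines a pixel-wise term with a term on the total backscattered power. The channel counts, network depth, and loss weights are illustrative assumptions, not the authors' architecture or cost function.

```python
# Minimal PyTorch sketch of a dual/hybrid-pol -> full-pol reconstruction CNN
# trained with a composite (multi-term) loss.
import torch
import torch.nn as nn

class PolReconstructionCNN(nn.Module):
    def __init__(self, in_channels: int = 2, out_channels: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def composite_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Pixel-wise reconstruction term plus a term on the total backscattered power (span)."""
    pixel_term = nn.functional.l1_loss(pred, target)
    span_term = nn.functional.l1_loss(pred.sum(dim=1), target.sum(dim=1))
    return pixel_term + alpha * span_term

# Toy training step on random tensors standing in for image patches
model = PolReconstructionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(4, 2, 64, 64)   # batch of dual-pol input patches
y = torch.randn(4, 6, 64, 64)   # corresponding fully polarimetric targets
loss = composite_loss(model(x), y)
loss.backward()
optimizer.step()
print("Training loss:", float(loss))
```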

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 46
347 Biological Significance of Long Intergenic Noncoding RNA LINC00273 in Lung Cancer Cell Metastasis

Authors: Ipsita Biswas, Arnab Sarkar, Ashikur Rahaman, Gopeswar Mukherjee, Subhrangsu Chatterjee, Shamee Bhattacharjee, Deba Prasad Mandal

Abstract:

One of the major reasons for the high mortality rate of lung cancer is the substantial delay in disease detection, often at late metastatic stages. It is of utmost importance to understand the detailed molecular signaling and to identify molecular markers that can be used for the early diagnosis of cancer. Several studies have explored the emerging roles of long noncoding RNAs (lncRNAs) in various cancers, including lung cancer. The long non-coding RNA LINC00273 was recently discovered to promote cancer cell migration and invasion, and its positive correlation with the pathological stages of metastasis may make it a potential target for inhibiting cancer cell metastasis. Comparing the real-time expression of LINC00273 in various human clinical cancer tissue samples with normal tissue samples revealed significantly higher expression in cancer tissues. This long intergenic noncoding RNA was found to be highly expressed in human liver tumor-initiating cells, the human gastric adenocarcinoma AGS cell line, and the human non-small cell lung cancer A549 cell line. siRNA- and shRNA-induced knockdown of LINC00273, both in vitro and in vivo in nude mice, significantly reduced AGS and A549 cancer cell migration and invasion. LINC00273 knockdown also reduced TGF-β-induced SNAIL, SLUG, VIMENTIN, and ZEB1 expression and metastasis in A549 cells. Many reports have suggested a role for microRNAs of the miR-200 family in reversing epithelial to mesenchymal transition (EMT) by inhibiting ZEB transcription factors. In this study, hsa-miR-200a-3p was predicted via the IntaRNA tool (Freiburg RNA tools) to be a potential target of LINC00273, with a binding free energy of −8.793 kcal/mol, and this interaction was verified by RNA pulldown, real-time PCR, and luciferase assay, confirming hsa-miR-200a-3p as a target of LINC00273. Mechanistically, LINC00273 accelerated TGF-β-induced EMT by sponging hsa-miR-200a-3p, which in turn liberated ZEB1 and promoted prometastatic functions in A549 cells in vitro, as verified by real-time PCR and western blotting. Similar expression patterns of these EMT regulatory pathway molecules, viz. LINC00273, hsa-miR-200a-3p, ZEB1, and TGF-β, were also detected in various clinical samples such as breast cancer, oral cancer, and lung cancer tissues. Overall, this LINC00273-mediated EMT regulatory signaling can serve as a potential therapeutic target for the prevention of lung cancer metastasis.

Keywords: epithelial to mesenchymal transition, long noncoding RNA, microRNA, non-small-cell lung carcinoma

Procedia PDF Downloads 134
346 The Effect of TiO₂ Nanoparticles on Zebrafish Embryos

Authors: Elena Maria Scalisi

Abstract:

Currently, photodegradation by nanoparticles (NPs) is a common solution for wastewater treatment. Nanoparticles are efficient at removing organic and inorganic pollutants and heavy metals from wastewater and at killing microorganisms in an environmentally friendly way. In this context, the major representative of photocatalytic technology for industrial wastewater treatment is TiO₂ nanoparticles (TiO₂-NPs). TiO₂-NPs have a strong catalytic activity that depends on their physicochemical properties. Thanks to their small size (between 1-100 nm), nanoparticles occupy less volume while their surface area increases. The increase in the surface-to-volume ratio results in an increase in particle surface energy, which improves their reactivity potential. However, these unique properties represent risks to ecosystems and organisms when TiO₂-NPs are unintentionally released into the environment and absorbed by living organisms. Several studies confirm that there is a high level of interest concerning the safety of TiO₂-NPs in the aquatic environment; furthermore, ecotoxicological tools are useful to correctly evaluate their toxicity. In the current study, we aimed to characterize the potential toxic effects of TiO₂-NP suspensions on zebrafish during embryo-larval stages, evaluating parameters such as survival rate, malformations, hatching, overall larval length, and heartbeat, as well as biochemical biomarkers that reflect the acute toxicity and sublethal effects of TiO₂-NPs. Zebrafish embryos were exposed to titanium dioxide nanoparticles (TiO₂-NPs at 1 mg/L, 2 mg/L, and 4 mg/L) from fertilization to the free-swimming stage (144 hpf). The toxicological endpoints were recorded every day; moreover, immunohistochemical analysis was performed at the end of the exposure. In particular, we evaluated the expression of the following biomarkers: Heat Shock Protein 70 (HSP70), Poly ADP-Ribose Polymerase-1 (PARP-1), and Metallothioneins (MTs). Our results showed that hatchability, survival, and malformation rates were not affected by TiO₂-NPs at these exposure levels. However, TiO₂-NPs caused an increase in heartbeat and a reduction in body length; at the same time, TiO₂-NPs induced the production of ROS and the expression of the oxidative stress biomarkers HSP70 and PARP-1. High positivity for PARP-1 was observed at all concentrations tested. As regards MTs, positivity for this biomarker was found in the whole body of the embryo, with the exception of the end of the tail. Metallothioneins (MTs) are biomarkers widely used in environmental monitoring programs for aquatic creatures. In light of our results, i.e., no deaths until the end of the experiment (144 hpf), no malformations, and the expression of the biomarkers mentioned, it is evident that zebrafish larvae, with their natural detoxification pathways, are able to resist the presence of toxic substances and can tolerate these metal concentrations. However, an excessive oxidative state can compromise cell function; therefore, the uncontrolled release of nanoparticles into the environment is a serious issue and must be constantly monitored.

Keywords: nanoparticles, embryo zebrafish, HSP70, PARP-1

Procedia PDF Downloads 117
345 The Effect of Vibration Amplitude on Tissue Temperature and Lesion Size When Using a Vibrating Cardiac Catheter

Authors: Kaihong Yu, Tetsui Yamashita, Shigeaki Shingyochi, Kazuo Matsumoto, Makoto Ohta

Abstract:

During cardiac ablation, high power delivery for deeper lesion formation is limited by electrode-tissue interface overheating, which can cause serious complications such as thrombus. To prevent this overheating, temperature control and open irrigation are often used. In temperature control, the radiofrequency generator is adjusted to deliver the maximum output power that maintains the electrode temperature at a target value (commonly 55°C or 60°C); the electrode-tissue interface temperature is then also limited. The electrode temperature is the result of heating from the contacted tissue and cooling from the surrounding blood. Because the cooling from blood is decreased under conditions of low blood flow, the generator needs to decrease the output power. Thus, temperature control cannot deliver high power under conditions of low blood flow. In open irrigation, saline at room temperature is flushed through holes arranged in the electrode. The electrode-tissue interface is cooled by this sufficient environmental cooling, and high power delivery is also possible under conditions of low blood flow. However, the large amount of saline infused (approximately 1500 ml) during irrigation can cause other serious complications. When open irrigation cannot be used under conditions of low blood flow, a new overheating prevention method may be required. The authors have proposed a new electrode cooling method based on making the catheter vibrate. Previous work has shown that the vibration can produce a cooling effect on the electrode, which may result from the vibration increasing the flow velocity around the catheter. The previous work has also shown that increasing the vibration frequency increases the cooling produced by vibration. However, the effect of the vibration amplitude is still unknown. Thus, the present study investigated the effect of vibration amplitude on tissue temperature and lesion size. An agar phantom model was used as a tissue-equivalent material for measuring tissue temperature. Thermocouples were inserted into the agar to measure the internal temperature. Porcine myocardium was used for lesion size measurement. A normal ablation catheter was set perpendicular to the tissue (agar or porcine myocardium) with 10 gf contact force in 37°C saline without flow. Vibration amplitudes of ±0.5, ±0.75, and ±1.0 mm with a constant frequency (31 or 63 Hz) were used. A temperature control protocol (45°C for the agar phantom, 60°C for the porcine myocardium) was used for the radiofrequency applications. Larger amplitudes produced larger lesion sizes, and higher tissue temperatures in the agar phantom were also observed with larger amplitudes. At the same frequency, a larger amplitude gives a higher vibrating speed, and the higher vibrating speed increases the flow velocity around the electrode more, which leads to a larger decrease in electrode temperature. To maintain the electrode at the target temperature, the ablator has to increase the output power. With higher output power over the same duration, the released energy also increases. Consequently, the tissue temperature increases, leading to larger lesion sizes.
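The link between amplitude and vibrating speed can be made explicit with a small calculation. Assuming simple sinusoidal motion x(t) = A sin(2πft) (our assumption; the abstract does not state the waveform), the peak speed is v_peak = 2πfA, so at a fixed frequency the peak speed scales linearly with amplitude.

```python
# Peak vibrating speed for the tested amplitude/frequency combinations,
# assuming sinusoidal motion x(t) = A*sin(2*pi*f*t), hence v_peak = 2*pi*f*A.
import math

frequencies_hz = [31, 63]
amplitudes_mm = [0.5, 0.75, 1.0]

for f in frequencies_hz:
    for a in amplitudes_mm:
        v_peak = 2 * math.pi * f * (a / 1000.0)  # amplitude converted from mm to m
        print(f"f = {f:2d} Hz, A = ±{a:.2f} mm -> peak speed ≈ {v_peak * 1000:.0f} mm/s")
```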

Keywords: cardiac ablation, electrode cooling, lesion size, tissue temperature

Procedia PDF Downloads 353
344 Rethinking Urban Voids: An Investigation beneath the Kathipara Flyover, Chennai into a Transit Hub by Adaptive Utilization of Space

Authors: V. Jayanthi

Abstract:

Urbanization and the pace of urbanization have increased tremendously in the last few decades. More towns are now being converted into cities. The urbanization trend is seen all over the world but is becoming most dominant in Asia. Today, the scale of urbanization in India is so large that Indian cities are among the fastest-growing in the world, including Bangalore, Hyderabad, Pune, Chennai, Delhi, and Mumbai. Urbanization remains the single predominant factor that is continuously linked to the destruction of urban green spaces. With reference to Chennai as a case study, a city suffering from rapid deterioration of its green spaces, this paper sought to fill this gap by exploring key factors, aside from urbanization, that are responsible for the destruction of green spaces. The paper relied on triangulated data collection techniques such as interviews, focus group discussions, personal observation, and retrieval of archival data. It was observed that, apart from urbanization, the problem of ownership of green space lands, the low priority given to green spaces, poor maintenance, weak enforcement of development controls, wastage of underpass spaces, and uncooperative attitudes of the general public play a critical role in the destruction of urban green spaces. The paper therefore narrows down to the point that, for a city to have proper, sustainable urban green space, broader city development plans are essential. Though rapid urbanization is an indicator of positive development, it is also accompanied by a host of challenges. Chennai lost a lot of greenery as the city urbanized rapidly, which led to a steep fall in vegetation cover. Environmental deterioration will be the big price we pay if Chennai continues to grow at the expense of greenery. Soaring skyscrapers, multi-storied complexes, gated communities, and villas frame the iconic skyline of today's Chennai, which reveals how we overlook the importance of our green cover, needed to balance our urban and lung spaces. Chennai, with a clumped landscape at the center of the city, is predicted to convert 36% of its total area into urban areas by 2026. One major issue is that a city designed and planned in isolation creates underused, neglected spaces throughout the city. These urban voids are dead, underused, unused spaces in cities, formed due to inefficient decision-making, poor land management, and poor coordination. Urban voids have huge potential to create a stronger urban fabric when exploited as public gathering spaces, pocket parks, or plazas, or simply to enhance the public realm, rather than being left to the dumping of debris and encroachment. Flyovers need to justify their existence by being more than just traffic and transport solutions. The vast, unused space below the Kathipara flyover is a case in point. This flyover connects three major routes: Tambaram, Koyambedu, and Adyar. This research focuses on the concept of urban voids: how the voids under flyovers can be used in the placemaking process, and how these neglected spaces beneath flyovers can become part of the urban realm through urban design and landscaping.

Keywords: landscape design, flyovers, public spaces, reclaiming lost spaces, urban voids

Procedia PDF Downloads 231
343 Tailorability of Poly(Aspartic Acid)/BSA Complex by Self-Assembling in Aqueous Solutions

Authors: Loredana E. Nita, Aurica P. Chiriac, Elena Stoleru, Alina Diaconu, Tudorachi Nita

Abstract:

Self-assembly processes are an attractive method to form new and complex structures between macromolecular compounds for specific applications. In this context, intramolecular and intermolecular bonds play a key role during the self-assembly processes used in the preparation of carrier systems for bioactive substances. Polyelectrolyte complexes (PECs) are formed through electrostatic interactions, and though these are significantly weaker than covalent linkages, the complexes are sufficiently stable owing to the association processes. The relative ease of PEC formation makes them a versatile tool for the preparation of various materials, with properties that can be tuned by adjusting several parameters, such as the chemical composition and structure of the polyelectrolytes, the pH and ionic strength of the solutions, the temperature, and post-treatment procedures. For example, protein-polyelectrolyte complexes (PPCs) play an important role in various chemical and biological processes, such as protein separation, enzyme stabilization, and polymeric drug delivery systems. The present investigation is focused on the evaluation of PPC formation between a synthetic polypeptide (poly(aspartic acid), PAS) and a natural protein (bovine serum albumin, BSA). The PPCs obtained from PAS and BSA in different ratios were investigated by corroborating various characterization techniques, such as spectroscopy, microscopy, thermogravimetric analysis, and DLS and zeta potential determination, with measurements performed under static and/or dynamic conditions. The static contact angle of the sample films was also determined in order to evaluate the changes in the surface free energy of the prepared PPCs in relation to the complexes' composition. The evolution of the hydrodynamic diameter and zeta potential of the PPC, recorded in situ, confirms conformational changes of both partners, with a 1/1 ratio between protein and polyelectrolyte being beneficial for the preparation of a stable PPC. The study also evidenced the dependence of PPC formation on the preparation temperature: at low temperatures, the PPC forms with a compact structure and small dimensions, with a hydrodynamic diameter close to that of BSA. The thermal behavior of the prepared PPCs is in agreement with the composition of the complexes. The contact angle determinations indicate an increase in the cohesion of the PPC films, which is higher than that of BSA films. The new PPC films also show higher hydrophobicity, denoting good adhesion of red blood cells onto the surface of the PAS/BSA interpenetrated systems. The SEM investigation also evidenced the specific internal structure of the PPCs, consisting of phases with different sizes and shapes depending on the interpolymer mixture composition.

Keywords: polyelectrolyte – protein complex, bovine serum albumin, poly(aspartic acid), self-assembly

Procedia PDF Downloads 221
342 Thermo-Economic Evaluation of Sustainable Biogas Upgrading via Solid-Oxide Electrolysis

Authors: Ligang Wang, Theodoros Damartzis, Stefan Diethelm, Jan Van Herle, François Marechal

Abstract:

Biogas production from anaerobic digestion of organic sludge from wastewater treatment, as well as various urban and agricultural organic wastes, is of great significance for achieving a sustainable society. Two upgrading approaches for cleaned biogas can be considered: (1) direct H₂ injection for catalytic CO₂ methanation and (2) CO₂ separation from biogas. The first approach usually employs electrolysis technologies to generate hydrogen and increases the biogas production rate, while the second usually applies commercially available, highly selective membrane technologies to efficiently extract CO₂ from the biogas, with the CO₂ then sent for compression and storage for further use. A straightforward way of utilizing the captured CO₂ is on-site catalytic CO₂ methanation. From the perspective of system complexity, the second approach may be questioned, since it introduces an additional expensive membrane component for producing the same amount of methane. However, given that the sustainability of the produced biogas should be retained after biogas upgrading, renewable electricity should be supplied to drive the electrolyzer. Therefore, considering the intermittent nature and seasonal variation of renewable electricity supply, the second approach offers high operational flexibility. This indicates that the two approaches should be compared based on the availability and scale of the local renewable power supply and not only on the technical systems themselves. Solid-oxide electrolysis generally offers high overall system efficiency and, more importantly, can achieve simultaneous electrolysis of CO₂ and H₂O (namely, co-electrolysis), which may bring significant benefits for the case of CO₂ separation from the produced biogas. When taking co-electrolysis into account, two additional upgrading approaches can be proposed: (1) direct steam injection into the biogas, with the mixture going through the SOE, and (2) CO₂ separation from biogas, with the CO₂ used later for co-electrolysis. The case study of integrating SOE into a wastewater treatment plant is investigated with wind power as the renewable power source. The dynamic production of biogas is provided on an hourly basis with the corresponding oxygen and heating requirements. All four approaches mentioned above are investigated and compared thermo-economically: (a) steam electrolysis with grid power, as the base case for steam electrolysis, (b) CO₂ separation and co-electrolysis with grid power, as the base case for co-electrolysis, (c) steam electrolysis and CO₂ separation (and storage) with wind power, and (d) co-electrolysis and CO₂ separation (and storage) with wind power. The influence of the scale of the wind power supply is investigated by a sensitivity analysis. The results provide a general understanding of the economic competitiveness of SOE for sustainable biogas upgrading, thus assisting decision making for biogas production sites. The research leading to the presented work is funded by the European Union's Horizon 2020 programme under grant agreement n° 699892 (ECo, topic H2020-JTI-FCH-2015-1) and by SCCER BIOSWEET.
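The hydrogen demand of upgrading route (1) follows directly from the Sabatier reaction, CO₂ + 4 H₂ → CH₄ + 2 H₂O: four volumes of hydrogen are needed per volume of CO₂ converted. The sketch below only illustrates that stoichiometric sizing; the biogas flow rate and CO₂ fraction are assumed round numbers, not figures from the wastewater treatment plant case study.

```python
# Rough stoichiometric sizing for direct H2 injection with catalytic CO2 methanation
# (Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O; volume ratios equal molar ratios
# for ideal gases).
biogas_flow_nm3_per_h = 100.0   # assumed cleaned-biogas flow (Nm^3/h)
co2_fraction = 0.40             # assumed CO2 content of the biogas

co2_flow = biogas_flow_nm3_per_h * co2_fraction   # Nm^3/h of CO2 to convert
h2_demand = 4.0 * co2_flow                        # 4 volumes of H2 per volume of CO2
ch4_added = co2_flow                              # 1 volume of CH4 per volume of CO2 at full conversion

print(f"CO2 to convert:   {co2_flow:.0f} Nm^3/h")
print(f"H2 required:      {h2_demand:.0f} Nm^3/h")
print(f"Extra CH4 output: {ch4_added:.0f} Nm^3/h "
      f"(methane flow rises from {biogas_flow_nm3_per_h - co2_flow:.0f} "
      f"to {biogas_flow_nm3_per_h:.0f} Nm^3/h at full conversion)")
```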

Keywords: biogas upgrading, solid-oxide electrolyzer, co-electrolysis, CO₂ utilization, energy storage

Procedia PDF Downloads 134
341 Absenteeism in Polytechnical University Studies: Quantification and Identification of the Causes at Universitat Politècnica de Catalunya

Authors: E. Mas de les Valls, M. Castells-Sanabra, R. Capdevila, N. Pla, Rosa M. Fernandez-Canti, V. de Medina, A. Mujal, C. Barahona, E. Velo, M. Vigo, M. A. Santos, T. Soto

Abstract:

Absenteeism in universities, including polytechnical universities, is influenced by a variety of factors. Some factors overlap with those causing absenteeism in schools, while others are specific to the university and work-related environments. Indeed, these factors may stem from various sources, including students, educators, the institution itself, or even the alignment of degree curricula with professional requirements. In Spain, there has been an increase in absenteeism in polytechnical university studies, especially after the Covid crisis, posing a significant challenge for institutions to address. This study focuses on the Universitat Politècnica de Catalunya · BarcelonaTech (UPC) and aims to quantify the current level of absenteeism and identify its main causes. The study is part of the teaching innovation project ASAP-UPC, which aims to minimize absenteeism through the redesign of teaching methodologies. By understanding the factors contributing to absenteeism, the study seeks to inform the subsequent phases of the ASAP-UPC project, which involve implementing methodologies to minimize absenteeism and evaluating their effectiveness. The study utilizes surveys conducted among students and polytechnical companies. Students' perspectives are gathered through both online surveys and in-person interviews. The surveys inquire about students' interest in attending classes, skill development throughout their UPC experience, and their perception of the skills required for a career in a polytechnical field. Additionally, polytechnical companies are surveyed regarding the skills they seek in prospective employees. The collected data is then analyzed to identify patterns and trends. This analysis involves organizing and categorizing the data, identifying common themes, and drawing conclusions based on the findings. This mixed-method approach has revealed that higher levels of absenteeism are observed in large student groups at both the Bachelor's and Master's degree levels. However, the main causes of absenteeism differ between these two levels. At the Bachelor's level, many students express dissatisfaction with in-person classes, perceiving them as overly theoretical and lacking a balance between theory, experimental practice, and problem-solving components. They also find a lack of relevance to professional needs. Consequently, they resort to using online materials developed during the Covid crisis and attending private academies for exam preparation instead. On the other hand, at the Master's level, absenteeism primarily arises from schedule incompatibility between university and professional work. There is a discrepancy between the skills highly valued by companies and the skills emphasized during the studies, aligning partially with students' perceptions. These findings are of theoretical importance as they shed light on areas that can be improved to offer a more beneficial educational experience to students at UPC. The study also has potential applicability to other polytechnic universities, allowing them to adapt the surveys and apply the findings to their specific contexts. By addressing the identified causes of absenteeism, universities can enhance the educational experience and better prepare students for successful careers in polytechnical fields.

Keywords: absenteeism, polytechnical studies, professional skills, university challenges

Procedia PDF Downloads 46
340 A Simulation Study of Direct Injection Compressed Natural Gas Spark Ignition Engine Performance Utilizing Turbulent Jet Ignition with Controlled Air Charge

Authors: Siyamak Ziyaei, Siti Khalijah Mazlan, Petros Lappas

Abstract:

Compressed Natural Gas (CNG) mainly consists of methane (CH₄) and has a low carbon-to-hydrogen ratio relative to other hydrocarbons. As a result, it has the potential to reduce CO₂ emissions by more than 20% relative to conventional fuels like diesel or gasoline. Although Natural Gas (NG) has environmental advantages compared to other hydrocarbon fuels, whether gaseous or liquid, its main component, CH₄, burns at a slower rate than conventional fuels. A higher pressure and a leaner cylinder environment further accentuate the slow-burn characteristic of CH₄. Lean combustion and high compression ratios are well-known methods for increasing the efficiency of internal combustion engines. In order to achieve successful CNG lean combustion in Spark Ignition (SI) engines, a strong ignition system is essential to avoid engine misfires, especially in ultra-lean conditions. Turbulent Jet Ignition (TJI) is an ignition system that employs a pre-combustion chamber to ignite the lean fuel mixture in the main combustion chamber using a fraction of the total fuel per cycle. TJI enables ultra-lean combustion by providing distributed ignition sites through orifices. The fast burn rate provided by TJI makes the ordinary SI engine comparable to other combustion systems such as Homogeneous Charge Compression Ignition (HCCI) or Controlled Auto-Ignition (CAI) in terms of thermal efficiency, through increased levels of dilution without the need for sophisticated control systems. Due to the physical geometry of TJIs, which contain small orifices that connect the pre-chamber to the main chamber, scavenging is one of the main factors that reduce TJI performance. Specifically, providing the right mixture of fuel and air has been identified as a key challenge. The reason for this is the insufficient amount of air that is pushed into the pre-chamber during each compression stroke. There is also the problem that combustion residual gases such as CO₂, CO and NOx from the previous combustion cycle dilute the pre-chamber fuel-air mixture, preventing rapid combustion in the pre-chamber. An air-controlled active TJI is presented in this paper in order to address these issues. By supplying air to the pre-chamber at a sufficient pressure, residual gases are exhausted and the air-fuel ratio is controlled within the pre-chamber, thereby improving the quality of combustion. This paper investigates the 3D-simulated combustion characteristics of a Direct Injected CNG (DI-CNG) fuelled SI engine with a pre-chamber equipped with an air channel, using AVL FIRE software. Experiments and simulations were performed at the Worldwide Mapping Point (WWMP) of 1500 Revolutions Per Minute (RPM) and 3.3 bar Indicated Mean Effective Pressure (IMEP), using only conventional spark plugs as the baseline. After validating the simulation data, baseline engine conditions were set for all simulation scenarios at λ = 1. Following that, pre-chambers with and without an auxiliary fuel supply were simulated. In the simulated DI-CNG SI engine, active TJI was observed to perform better than passive TJI and the conventional spark plug. In conclusion, the active pre-chamber with an air channel demonstrated an improved thermal efficiency (ηth) over the other counterparts and conventional spark ignition systems.
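The thermal efficiency figure of merit mentioned above can be related to the stated operating point (1500 RPM, 3.3 bar IMEP) through the indicated work per cycle. The following is a minimal sketch of that relation only, not the authors' AVL FIRE workflow; the displacement volume, fuel mass per cycle, and lower heating value are illustrative assumptions, and only the IMEP value comes from the abstract.

```python
# Hedged sketch: indicated thermal efficiency from IMEP and fuelling.
# Only the 3.3 bar IMEP operating point is taken from the abstract;
# displacement, fuel mass, and LHV below are illustrative placeholders.

IMEP = 3.3e5          # indicated mean effective pressure [Pa] (from abstract)
V_d = 0.5e-3          # displacement volume per cylinder [m^3] (assumed, 0.5 L)
m_fuel = 1.2e-5       # CNG mass injected per cycle [kg] (assumed)
LHV_CH4 = 50.0e6      # lower heating value of methane [J/kg] (typical value)

W_indicated = IMEP * V_d          # indicated work per cycle [J]
Q_fuel = m_fuel * LHV_CH4         # fuel energy supplied per cycle [J]
eta_th = W_indicated / Q_fuel     # indicated thermal efficiency [-]

print(f"Indicated work per cycle: {W_indicated:.1f} J")
print(f"Indicated thermal efficiency: {eta_th:.1%}")
```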

Keywords: turbulent jet ignition, active air control turbulent jet ignition, pre-chamber ignition system, active and passive pre-chamber, thermal efficiency, methane combustion, internal combustion engine combustion emissions

Procedia PDF Downloads 69
339 Regional Rates of Sand Supply to the New South Wales Coast: Southeastern Australia

Authors: Marta Ribo, Ian D. Goodwin, Thomas Mortlock, Phil O’Brien

Abstract:

Coastal behavior is best investigated using a sediment budget approach, based on the identification of sediment sources and sinks. Grain size distribution over the New South Wales (NSW) continental shelf has been widely characterized since the 1970s. Coarser sediment has generally accumulated on the outer shelf and/or nearshore zones, with the latter related to the presence of nearshore reefs and bedrock. The central part of the NSW shelf is characterized by the presence of fine sediments distributed parallel to the coastline. This study presents new grain size distribution maps along the NSW continental shelf, built using all available NSW and Commonwealth Government holdings. All available seabed bathymetric data from prior projects, single and multibeam sonar, and aerial LiDAR surveys were integrated into a single bathymetric surface for the NSW continental shelf. Grain size information was extracted from the sediment sample data collected in more than 30 studies. The information extracted from the sediment collections varied between reports. Thus, given the inconsistency of the grain size data, a common grain size classification was defined here using the phi scale. The new sediment distribution maps produced, together with new detailed seabed bathymetric data, enabled us to revise the delineation of sediment compartments to more accurately reflect the true nature of sediment movement on the inner shelf and nearshore. Accordingly, nine primary mega coastal compartments were delineated along the NSW coast and shelf. The sediment compartments are bounded by prominent nearshore headlands and reefs, and by major river and estuarine inlets that act as sediment sources and/or sinks. The new sediment grain size distribution was used as an input to the morphological modelling to quantify the sediment transport patterns (and indicative rates of transport) used to investigate sand supply rates and processes from the lower shoreface to the NSW coast. The rate of sand supply to the NSW coast from deep water is a major uncertainty in projecting future coastal response to sea-level rise. Offshore transport of sand is generally expected as beaches respond to rising sea levels, but an onshore supply from the lower shoreface has the potential to offset some of the impacts of sea-level rise, such as coastline recession. Sediment exchange between the lower shoreface and sub-aerial beach has been modelled across the south, central, mid-north and far-north coasts of NSW. Our modelling approach assumes that high-energy storm events are the primary agents of sand transport in deep water, while non-storm conditions are responsible for re-distributing sand within the beach and surf zone.
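The common grain size classification referred to above uses the phi scale, i.e. the negative base-2 logarithm of the grain diameter in millimetres (Krumbein scale). The sketch below illustrates that conversion only; the sample diameters and the class cut-offs (standard Wentworth boundaries) are illustrative and are not taken from the study's own data.

```python
import math

def diameter_to_phi(d_mm: float) -> float:
    """Krumbein phi scale: phi = -log2(d / d0), with d0 = 1 mm."""
    return -math.log2(d_mm)

# Hypothetical grain diameters classified with standard Wentworth boundaries
# (gravel < -1 phi, sand -1..4 phi, silt/clay > 4 phi); not the study's cut-offs.
samples_mm = [2.5, 0.8, 0.25, 0.05]
for d in samples_mm:
    phi = diameter_to_phi(d)
    if phi < -1:
        label = "gravel"
    elif phi < 4:
        label = "sand"
    else:
        label = "silt/clay"
    print(f"d = {d:5.2f} mm -> phi = {phi:5.2f} ({label})")
```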

Keywords: New South Wales coast, off-shore transport, sand supply, sediment distribution maps

Procedia PDF Downloads 210
338 Eco-Nanofiltration Membranes: Nanofiltration Membrane Technology Utilization-Based Fiber Pineapple Leaves Waste as Solutions for Industrial Rubber Liquid Waste Processing and Fertilizer Crisis in Indonesia

Authors: Andi Setiawan, Annisa Ulfah Pristya

Abstract:

The Indonesian rubber plantation area has reached 2.9 million hectares, with productivity reaching 1.38 million. High rubber productivity is directly proportional to the amount of waste produced by the rubber processing industry. The rubber industry therefore has a negative environmental impact in the form of pollution caused by waste that has not been treated optimally. Rubber industrial wastewater contains high levels of nitrogen compounds (nitrate and ammonia) and phosphate compounds, which cause water pollution and odor problems due to the high ammonia content. On the other hand, demand for NPK fertilizers in Indonesia continues to increase from year to year and requires ammonia and phosphate as raw materials. Domestic demand amounts to 400,000 tons of ammonia per year, and Indonesia imports 200,000 tons of ammonia per year valued at IDR 4.2 trillion. Likewise, because of a lack of domestic phosphoric acid, about 225 thousand tons per year must be imported from Jordan, Morocco, South Africa, the Philippines, and India. At present, rubber wastewater treatment is generally carried out in holding tanks, where the waste is settled and filtered and the remainder is released into the environment. However, this method is inefficient and incurs high energy costs because many stages are required before clean water can be discharged into the river. On the other hand, Indonesia has great pineapple potential, as the fruit can be harvested throughout the year across the country. In 2010, pineapple production in Indonesia reached 1,406,445 tons, or about 9.36 percent of the total fruit production in Indonesia. Increased productivity is directly proportional to the amount of pineapple leaf waste, which is generated continuously and is usually just dumped on the ground or disposed of with other waste at the final disposal site. Eco-Nanofiltration Membranes based on fiber from pineapple leaf waste offer a way to solve these environmental problems efficiently. Nanofiltration is a pressure-driven process in which the transport of each molecule can occur by either convection or diffusion. Nanofiltration membranes can separate at the nanometer scale, so that residues of economic value, richer in N and P, can be recovered from the processed waste as raw materials for NPK fertilizer production, helping to overcome the fertilizer crisis in Indonesia. The raw material used to manufacture the Eco-Nanofiltration Membrane is cellulose from pineapple leaf fiber, which is processed into cellulose acetate; the membrane is biodegradable and only requires replacement every 6 months. The expected output is a green eco-technology: nanofiltration membranes that not only treat rubber industry waste effectively, efficiently, and in an environmentally friendly way, but also lower the cost of waste treatment compared to conventional methods.

Keywords: biodegradable, cellulose diacetate, fertilizers, pineapple, rubber

Procedia PDF Downloads 424
337 How Can Food Retailing Benefit from Neuromarketing Research: The Influence of Traditional and Innovative Tools of In-Store Communication on Consumer Reactions

Authors: Jakub Berčík, Elena Horská, Ľudmila Nagyová

Abstract:

Nowadays, the point of sale remains one of the few channels of communication which is not yet oversaturated and has great potential for the future. The fact that purchasing decisions are significantly affected by emotions, while up to 75 % of them are made at the point of sale, only demonstrates its importance. The share of impulsive purchases is about 60-75 %, depending on the particular product category. Nevertheless, habits above all predetermine the content of the shopping cart, and hence the role of in-store communication is to disrupt the routine and compel the customer to try something new. This is the reason why it is essential to know how to work with this relatively young branch of marketing communication as efficiently as possible. A new global trend in this discipline is evaluating the effectiveness of particular tools of in-store communication. To increase efficiency, it is necessary to become familiar with the factors affecting the customer both consciously and unconsciously, and that is a task for neuromarketing and sensory marketing. It is generally known that customers remember negative experiences much longer and more intensely than positive ones; therefore, it is essential for marketers to avoid such negative experiences. The final effect of POP (Point of Purchase) or POS (Point of Sale) tools depends not only on their quality and design, but also on their location at the point of sale, which contributes to the overall positive atmosphere in the store. Therefore, in-store advertising is increasingly the center of attention, and companies are willing to spend even a third of their marketing communication budget on it. The paper presents comprehensive, interdisciplinary research on the impact of traditional as well as innovative tools of in-store communication on the attention and emotional state (valence and arousal) of consumers on the food market. The research integrates measurements with an eye camera (eye tracker) and an electroencephalograph (EEG) in real grocery stores as well as in laboratory conditions, with the purpose of recognizing attention and emotional response among respondents under the influence of selected tools of in-store communication. The object of the research includes traditional (e.g. wobblers, stoppers, floor graphics) and innovative (e.g. displays, wobblers with LED elements, interactive floor graphics) tools of in-store communication in the fresh unpackaged food segment. By using a mobile 16-channel electroencephalograph (EEG equipment) from the company EPOC, a mobile eye camera (eye tracker) from the company Tobii and a stationary eye camera (eye tracker) from the company Gazepoint, we observe the attention and emotional state (valence and arousal) to reveal true consumer preferences towards traditional and new, unusual communication tools at the point of sale of the selected foodstuffs. The paper concludes by suggesting possibilities for a rational, effective and energy-efficient combination of in-store communication tools, by which the retailer can accomplish not only a captivating and attractive presentation of displayed goods, but ultimately also an increase in the retail sales of the store.
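The abstract does not specify how valence is derived from the EEG signal; one widely used proxy in consumer EEG research is frontal alpha asymmetry. The sketch below is purely illustrative of that generic proxy, not the authors' processing pipeline, and the channel names and band-power values are hypothetical.

```python
import numpy as np

# Illustrative proxy only: frontal alpha asymmetry (higher values are often
# interpreted as more positive valence). Not the authors' metric; channel
# names (F3/F4) and alpha-band power values are hypothetical.

def alpha_asymmetry(power_left_f3: float, power_right_f4: float) -> float:
    """ln(right frontal alpha power) - ln(left frontal alpha power)."""
    return np.log(power_right_f4) - np.log(power_left_f3)

# Hypothetical alpha-band (8-13 Hz) power values [uV^2] for two POP conditions
conditions = {"wobbler_with_LED": (4.2, 5.1), "plain_wobbler": (4.8, 4.3)}
for name, (f3, f4) in conditions.items():
    print(f"{name}: asymmetry index = {alpha_asymmetry(f3, f4):+.3f}")
```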

Keywords: electroencephalograph (EEG), emotion, eye tracker, in-store communication

Procedia PDF Downloads 371
336 An Exploratory Case Study of Pre-Service Teachers' Learning to Teach Mathematics to Culturally Diverse Students through a Community-Based After-School Field Experience

Authors: Eugenia Vomvoridi-Ivanovic

Abstract:

It is broadly assumed that participation in field experiences will help pre-service teachers (PSTs) bridge theory to practice. However, this is often not the case, since PSTs who are placed in classrooms with large numbers of students from diverse linguistic, cultural, racial, and ethnic backgrounds (culturally diverse students (CDS)) usually observe ineffective mathematics teaching practices that are in contrast to those discussed in their teacher preparation program. Over the past decades, the educational research community has paid increasing attention to investigating out-of-school learning contexts and how participation in such contexts can contribute to the achievement of underrepresented groups in Science, Technology, Engineering, and Mathematics (STEM) education and their expanded participation in STEM fields. In addition, several research studies have shown that students display different kinds of mathematical behaviors and discourse practices in out-of-school contexts than they do in the typical mathematics classroom, since they draw from a variety of linguistic and cultural resources to negotiate meanings and participate in joint problem solving. However, almost no attention has been given to exploring these contexts as field experiences for pre-service mathematics teachers. The purpose of this study was to explore how participation in a community-based after-school field experience promotes understanding of the content pedagogy concepts introduced in elementary mathematics methods courses, particularly as they apply to teaching mathematics to CDS. This study draws upon a situated, socio-cultural theory of teacher learning that centers on the concept of learning as situated social practice, which includes discourse, social interaction, and participation structures. Consistent with exploratory case study methodology, qualitative methods were employed to investigate how a cohort of twelve participating pre-service teachers' approaches to pedagogy and their conversations around teaching and learning mathematics to CDS evolved through their participation in the after-school field experience, and how they connected the content discussed in their mathematics methods course with their interactions with the CDS in the after-school program. Data were collected over a period of one academic year from the following sources: (a) audio recordings of the PSTs' interactions with the students during the after-school sessions, (b) PSTs' after-school field-notes, (c) audio-recordings of weekly methods course meetings, and (d) other document data (e.g., PST and student generated artifacts, PSTs' written course assignments). The findings of this study reveal that the PSTs benefitted greatly from their participation in the after-school field experience. Specifically, after-school participation promoted a deeper understanding of the content pedagogy concepts introduced in the mathematics methods course and fostered a greater appreciation for how students learn mathematics with understanding. Further, even though many of the PSTs' assumptions about the mathematical abilities of CDS were challenged and PSTs began to view CDSs' cultural and linguistic backgrounds as resources (rather than obstacles) for learning, some PSTs still held negative stereotypes about CDS and about teaching and learning mathematics to CDS in particular. Insights gained through this study contribute to a better understanding of how informal mathematics learning contexts may provide a valuable context for pre-service teachers' learning to teach mathematics to CDS.

Keywords: after-school mathematics program, pre-service mathematical education of teachers, qualitative methods, situated socio-cultural theory, teaching culturally diverse students

Procedia PDF Downloads 111
335 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to fully polarimetric data would therefore enable full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
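The abstract describes training the CNN with a loss that combines several weighted terms tied to characteristic polarimetric features, but does not give the architecture or the terms themselves. The sketch below only illustrates the general pattern of a small CNN trained with a weighted multi-term loss; the layer sizes, the two example terms (per-channel reconstruction and total-power consistency), and the weights are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

# Hedged sketch, not the authors' network: map two hybrid-pol channels to
# pseudo fully polarimetric channels and train with a weighted multi-term loss.

class AugmentationCNN(nn.Module):
    def __init__(self, in_ch: int = 2, out_ch: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, w_pixel=1.0, w_span=0.1):
    """Example combination of terms: per-channel reconstruction error plus a
    term on the total backscattered power (span); terms/weights are assumed."""
    pixel_term = nn.functional.l1_loss(pred, target)
    span_term = nn.functional.l1_loss(pred.sum(dim=1), target.sum(dim=1))
    return w_pixel * pixel_term + w_span * span_term

# Toy usage with random tensors standing in for image patches
model = AugmentationCNN()
x = torch.randn(8, 2, 64, 64)      # hybrid-pol input patches
y = torch.randn(8, 4, 64, 64)      # fully polarimetric reference patches
loss = composite_loss(model(x), y)
loss.backward()
```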

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 62
334 Integration of a Protective Film to Enhance the Longevity and Performance of Miniaturized Ion Sensors

Authors: Antonio Ruiz Gonzalez, Kwang-Leong Choy

Abstract:

The measurement of electrolytes has a high value in the clinical routine. Ions are present in all body fluids at variable concentrations and are involved in multiple pathologies such as heart failure and chronic kidney disease. In the case of dissolved potassium, although a high concentration in the blood (hyperkalemia) is relatively uncommon in the general population, it is one of the most frequent acute electrolyte abnormalities. In recent years, the integration of thin film technologies in this field has allowed the development of highly sensitive biosensors with ultra-low limits of detection for the assessment of metals in liquid samples. However, despite the current efforts in the miniaturization of sensitive devices and their integration into portable systems, only a limited number of commercially successful examples can be found. This fact can be attributed to the high cost involved in their production and the sustained degradation of the electrodes over time, which causes a signal drift in the measurements. Thus, there is an unmet need for the development of low-cost and robust sensors for the real-time monitoring of analyte concentrations in patients to allow the early detection and diagnosis of diseases. This paper reports a thin film ion-selective sensor for the evaluation of potassium ions in aqueous samples. As an alternative to conventional fabrication methods, aerosol assisted chemical vapor deposition (AACVD) was applied due to its cost-effectiveness and fine control over the film deposition. Such a technique does not require vacuum and is suitable for the coating of large surface areas and structures with complex geometries. This approach allowed the fabrication of highly homogeneous surfaces with well-defined microstructures on 50 nm thin gold layers. The degradative processes of the ubiquitously employed poly(vinyl chloride) membranes in contact with an electrolyte solution were studied, including the polymer leaching process, mechanical desorption of nanoparticles and chemical degradation over time. Rational design of a protective coating based on an organosilicon material in combination with cellulose to improve the long-term stability of the sensors was then carried out, showing an improvement in the performance after 5 weeks. The antifouling properties of such a coating were assessed using a cutting-edge quartz microbalance sensor, allowing the quantification of the adsorbed proteins in the nanogram range. A correlation between the microstructural properties of the films, the surface energy, and biomolecule adhesion was then found and used to optimize the protective film.
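Quantification of adsorbed protein with a quartz microbalance is commonly done via the Sauerbrey relation between the measured frequency shift and the areal mass uptake. The sketch below illustrates only that generic conversion, not the authors' instrument software; the sensitivity constant (textbook value for a 5 MHz AT-cut crystal), the sensor area, and the frequency shift are illustrative assumptions.

```python
# Hedged sketch: converting a QCM frequency shift into adsorbed mass with the
# Sauerbrey relation delta_m = -C * delta_f. The sensitivity constant C depends
# on the crystal; 17.7 ng/(cm^2*Hz) is the textbook value for a 5 MHz AT-cut
# quartz crystal and is used here only as an assumption.

C_SAUERBREY = 17.7      # ng cm^-2 Hz^-1, 5 MHz crystal (assumed)
ACTIVE_AREA = 0.785     # cm^2, hypothetical sensor area

def adsorbed_mass_per_area(delta_f_hz: float) -> float:
    """Areal mass uptake [ng/cm^2] for a measured frequency drop (Hz, negative)."""
    return -C_SAUERBREY * delta_f_hz

delta_f = -12.0  # Hz, hypothetical shift after protein exposure
areal_mass = adsorbed_mass_per_area(delta_f)
total_mass = areal_mass * ACTIVE_AREA
print(f"Areal mass: {areal_mass:.1f} ng/cm^2, total: {total_mass:.1f} ng")
```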

Keywords: hyperkalemia, drift, AACVD, organosilicon

Procedia PDF Downloads 105
333 Assessment Environmental and Economic of Yerba Mate as a Feed Additive on Feedlot Lamb

Authors: Danny Alexander R. Moreno, Gustavo L. Sartorello, Yuli Andrea P. Bermudez, Richard R. Lobo, Ives Claudio S. Bueno, Augusto H. Gameiro

Abstract:

Meat production is a significant sector for Brazil's economy; however, the agricultural segment has been criticized for its negative impacts on the environment, which contribute to climate change. Therefore, it is essential to implement nutritional strategies that can improve the environmental performance of livestock. This research aimed to estimate the environmental impact and profitability of the use of yerba mate extract (Ilex paraguariensis) as an additive in the feeding of feedlot lambs. Thirty-six castrated male lambs (average weight of 23.90 ± 3.67 kg and average age of 75 days) were randomly assigned to four experimental diets with different levels of inclusion of yerba mate extract (0, 1, 2, and 4%) based on dry matter. The animals were confined for fifty-three days and fed a 60:40 corn silage to concentrate ratio. As an indicator of environmental impact, the carbon footprint (CF) was measured as kg of CO₂ equivalent (CO₂-eq) per kg of body weight produced (BWP). Greenhouse gas (GHG) emissions such as methane (CH₄) generated from enteric fermentation were calculated using the sulfur hexafluoride (SF₆) gas tracer technique, while CH₄ and nitrous oxide (N₂O) emissions generated by feces and urine, and carbon dioxide (CO₂) emissions generated by concentrate and silage processing, were estimated using the Intergovernmental Panel on Climate Change (IPCC) methodology. To estimate profitability, the gross margin was used, which is the total revenue minus the total cost; the latter is composed of the purchase of animals and feed. The boundaries of this study considered only the lamb fattening system. Enteric CH₄ emission from the lambs was the largest source of on-farm GHG emissions (47%-50%), followed by CH₄ and N₂O emissions from manure (10%-20%) and CO₂ emissions from the concentrate, silage, and fossil energy (17%-5%). The treatment that generated the least environmental impact was the group with 4% of yerba mate extract (YME), which showed a 3% reduction in total GHG emissions in relation to the control (1462.5 and 1505.5 kg CO₂-eq, respectively). However, the scenario with 1% YME showed an increase in emissions of 7% compared to the control group. In relation to CF, the treatment with 4% YME had the lowest value (4.1 kg CO₂-eq/kg LW) compared with the other groups. Nevertheless, although the 4% YME inclusion scenario showed the lowest CF, the gross margin decreased by 36% compared to the control group (0% YME), due to the cost of YME as a feed additive. The results showed that the extract has potential for use in reducing GHG emissions. However, the cost of implementing this input as a mitigation strategy increased the production cost. Therefore, it is important to develop policy strategies that help reduce the acquisition costs of inputs that contribute to the environmental and economic benefit of the livestock sector.
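The carbon footprint above aggregates CH₄, N₂O, and CO₂ into CO₂-equivalents and divides by body weight produced. The sketch below shows only that generic aggregation; the global warming potentials (IPCC AR5 100-year values, CH₄ = 28, N₂O = 265) and the emission and body-weight figures are illustrative assumptions, not necessarily the factors or data used by the authors.

```python
# Hedged sketch, not the authors' inventory: aggregating farm-gate emissions
# into CO2-equivalents and a carbon footprint per kg of body weight produced.

GWP = {"CH4": 28.0, "N2O": 265.0, "CO2": 1.0}   # illustrative GWP100 factors

def co2_equivalent(emissions_kg: dict) -> float:
    """Sum of gas masses [kg] weighted by their GWP100 factors."""
    return sum(mass * GWP[gas] for gas, mass in emissions_kg.items())

# Hypothetical totals over the fattening period for one treatment group
emissions = {"CH4": 25.0, "N2O": 0.8, "CO2": 300.0}   # kg of each gas
body_weight_produced = 260.0                           # kg BWP (assumed)

total_co2eq = co2_equivalent(emissions)
carbon_footprint = total_co2eq / body_weight_produced  # kg CO2-eq / kg BWP
print(f"Total: {total_co2eq:.1f} kg CO2-eq, CF = {carbon_footprint:.2f}")
```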

Keywords: meat production, natural additives, profitability, sheep

Procedia PDF Downloads 107
332 Green Extraction Technologies of Flavonoids Containing Pharmaceuticals

Authors: Lamzira Ebralidze, Aleksandre Tsertsvadze, Dali Berashvili, Aliosha Bakuridze

Abstract:

Nowadays, there is an increasing demand for biologically active substances from vegetable, animal, and mineral resources. The pharmaceutical, cosmetic, and nutrition industries have a strong interest in the use of natural compounds. The biggest drawback of conventional extraction methods is the need to use a large volume of organic extractants. The removal of the organic solvent is a multi-stage process; its complete removal cannot be achieved, and residues still appear in the final product as impurities. A large amount of waste containing organic solvent not only damages human health but also harms the environment. Accordingly, researchers are focused on improving extraction methods, with the aims of minimizing the use of organic solvents and energy and of using alternative solvents and renewable raw materials. In this context, green extraction principles were formed. Green extraction is a need of today's environment; it is a concept that corresponds fully to the challenges of the 21st century. The extraction of biologically active compounds based on green extraction principles is vital from the viewpoint of preserving and maintaining biodiversity. Novel green extraction technologies are known as 'cold methods' because the extraction temperature is relatively low and does not have a negative impact on the stability of plant compounds. Novel technologies provide great opportunities to reduce or replace the use of toxic organic solvents, increase the efficiency of the process, enhance extraction yield, and improve the quality of the final product. The objective of the research is the development of green technologies for flavonoid-containing preparations. Methodology: At the first stage of the research, flavonoid-containing preparations (Tincture Herba Leonuri, flamine, rutine) were prepared based on conventional extraction methods: maceration, bismaceration, percolation, and repercolation. At the same time, the same preparations were prepared based on green technologies (microwave-assisted and UV extraction methods). Product quality characteristics were evaluated by pharmacopoeia methods. At the next stage of the research, the technological-economic characteristics and cost efficiency of products prepared based on conventional and novel technologies were determined. For the extraction of flavonoids, water is used as the extractant. Surface-active substances are used as co-solvents in order to reduce surface tension, which significantly increases the solubility of polyphenols in water. Different concentrations of a water-glycerol mixture, cyclodextrin, and an ionic solvent were used for the extraction process. In vitro antioxidant activity will be studied by the spectrophotometric method, using DPPH (2,2-diphenyl-1-picrylhydrazyl) as the antioxidant assay. A further advantage of green extraction methods is the possibility of obtaining a higher yield at low temperature while limiting the extraction of undesirable compounds. This is especially important for the extraction of thermosensitive compounds and for maintaining their stability.
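The DPPH assay named above typically reports radical scavenging as percentage inhibition computed from absorbance readings (commonly near 517 nm). Since the abstract only names the assay, the following is a generic illustration of that calculation with hypothetical values, not the authors' protocol or data.

```python
# Hedged sketch of the standard DPPH radical-scavenging calculation:
# percent inhibition from the absorbance of the DPPH control and of the
# sample. All values below are hypothetical.

def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent inhibition = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.820                                        # DPPH solution alone
extracts = {"conventional": 0.512, "microwave-assisted": 0.431}
for name, a in extracts.items():
    print(f"{name}: {dpph_inhibition(a_control, a):.1f} % inhibition")
```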

Keywords: extraction, green technologies, natural resources, flavonoids

Procedia PDF Downloads 109
331 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to interaction mechanisms between external devices and the tissue. Determination of mechanical properties of the stomach is, thus, crucial, both for understanding gastric pathologies as well as for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances within medical devices interacting with the gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. Thus, in this study, uniaxial tensile tests of gastric tissue were executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed dominant fiber orientations. Accordingly, from each stomach, four samples were longitudinally oriented, and four samples were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25 % strain with 200 s rest periods for each step was performed, followed by a 25 % strain ramp test with three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as in between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was in the same order of variation as the samples within the same stomach. For samples from the same stomach, the mean deviation percentage for all 20 parameters was 21 % and 18 % for longitudinal and circumferential orientations, compared to 25 % and 19 %, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA analysis, where results showed that the 20 material parameters from each of the six stomachs came from the same distribution with a level of statistical significance of P > 0.05. Direction-dependency was also examined, and it was found that the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain-rates used in the ramp test. Heterogeneity within the same region has not been examined earlier, yet, the importance of the sampling area has been demonstrated in this study. All material parameters found are essential to understand the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for compiling a comparative study between the human stomach and the pig stomach.
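The abstract reports a nonparametric one-way ANOVA showing that the 20 material parameters across the six stomachs come from the same distribution. As a minimal illustration of such a test for a single parameter, the sketch below uses the Kruskal-Wallis test in SciPy; the abstract does not name the exact test used, and the data here are synthetic, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: compare one material parameter (e.g., an equilibrium
# stiffness coefficient) across six stomachs, eight samples each, with a
# nonparametric one-way test. Values are synthetic.

rng = np.random.default_rng(0)
stomachs = [rng.normal(loc=1.0, scale=0.2, size=8) for _ in range(6)]

h_stat, p_value = stats.kruskal(*stomachs)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No evidence the stomachs differ for this parameter (P > 0.05).")
```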

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 403
330 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small proportion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research, leveraging existing high performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution that is based upon a novel Micro-Service architecture and Big Data technologies. The system serves to demonstrate the applicability of Micro-Service architectures for the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e. importing a GWAS dataset) and the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets, and enabling cutting edge analysis approaches that have previously been out of reach for many medical researchers.
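The contrast drawn above is between normalized relational schemas (one row per subject-SNP pair) and the denormalized documents typical of NoSQL stores. The sketch below illustrates that general idea only; it is not SPARK's actual schema or technology stack, and the database, collection, and field names are hypothetical. It assumes a local MongoDB instance is available for the pymongo client to connect to.

```python
from pymongo import MongoClient

# Illustrative sketch, not SPARK's schema: store each subject's genotypes as a
# single denormalized document and bulk-insert a batch, instead of inserting
# one relational row per SNP. All names below are hypothetical.

client = MongoClient("mongodb://localhost:27017")
collection = client["gwas_demo"]["genotypes"]

documents = [
    {
        "subject_id": "S0001",
        "chip": "demo_snp_chip",
        "genotypes": {"rs123": "AA", "rs456": "AG", "rs789": "GG"},
    },
    {
        "subject_id": "S0002",
        "chip": "demo_snp_chip",
        "genotypes": {"rs123": "AG", "rs456": "GG", "rs789": "GG"},
    },
]

collection.insert_many(documents)            # one bulk insert per batch
carriers = collection.count_documents({"genotypes.rs456": "GG"})
print(f"Subjects with GG at rs456: {carriers}")
```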

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 244
329 Conceptual Design of Gravity Anchor Focusing on Anchor Towing and Lowering

Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg

Abstract:

Wind power is one of the leading renewable energy generation methods. Due to the higher wind speeds available far from shore, construction of offshore wind turbines began in recent decades. However, installation of offshore foundation-based (monopile) wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines has been expanded on the basis of experience from the oil and gas industry. In this research work, a universal heavyweight gravity anchor (UGA) is developed for floating-based foundations of floating Tension Leg Platform (TLP) sub-structures. It is funded by the German Federal Ministry of Education and Research as part of a three-year (2019-2022) research program called “Offshore Wind Solutions Plus (OWSplus) - Floating Offshore Wind Solutions Mecklenburg-Vorpommern.” The project group consists of German institutions (universities, laboratories, and consulting companies). This part of the project is focused on the numerical modeling of the gravity anchor, which involves analyzing and solving fluid flow problems. Compared to gravity-based torpedo anchors, these UGAs will be towed and lowered via controlled machines (tug boats) at lower speeds. This kind of UGA installation is new to the offshore wind industry, particularly for TLPs, and very few research works have been carried out in recent years. Conventional methods for transporting the anchor require a large transportation crane vessel, which involves greater cost. The conceptual UGA consists of ballasting chambers that utilize buoyancy forces; the chambers are filled with the required amount of water in such a way that the anchor can float on the water for towing. After reaching the installation site, those chambers are ballasted with water for lowering. At the end of its lifetime, the UGA can be unballasted (for erection or replacement), resulting in self-rising to the sea surface; the buoyancy chambers thus allow a UGA to be used without the need for heavy machinery. However, while being lowered towards or raised away from the seabed, the UGA experiences harsh marine environments due to the interaction of waves and currents. This leads to drifting of the anchor from the desired installation position and damage to the lowering machines. To overcome such problems, a numerical model is built to investigate the influence of different outer contours and other flow-governing shapes that can be installed on the UGA to counteract the turbulence and drifting. The presentation will highlight the importance of the Computational Fluid Dynamics (CFD) numerical model built in OpenFOAM, an open-source software package.
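The ballasting concept described above rests on Archimedes' principle: the anchor floats while the weight of displaced water exceeds the combined weight of structure and ballast, and begins to sink once enough ballast water is admitted. The sketch below illustrates that balance only; the structural mass and enclosed volume are hypothetical placeholders, not the project's design values.

```python
# Hedged sketch of the ballasting principle, not the project's design
# calculation: estimate how much ballast water is needed before the anchor
# becomes negatively buoyant. All dimensions and masses are hypothetical.

RHO_SEAWATER = 1025.0      # kg/m^3

structural_mass = 400e3    # kg, assumed structural mass of the UGA
enclosed_volume = 600.0    # m^3, assumed external volume with chambers sealed

max_buoyant_mass = RHO_SEAWATER * enclosed_volume      # kg of water displaced
ballast_to_sink = max_buoyant_mass - structural_mass   # kg of ballast water

print(f"Displaced water mass when fully submerged: {max_buoyant_mass/1e3:.0f} t")
if ballast_to_sink > 0:
    print(f"Anchor floats when empty; needs more than {ballast_to_sink/1e3:.0f} t "
          "of ballast water to begin lowering.")
else:
    print("Anchor is negatively buoyant even without ballast.")
```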

Keywords: anchor lowering, towing, waves, currents, computational fluid dynamics

Procedia PDF Downloads 147
328 Quantifying Impairments in Whiplash-Associated Disorders and Association with Patient-Reported Outcomes

Authors: Harpa Ragnarsdóttir, Magnús Kjartan Gíslason, Kristín Briem, Guðný Lilja Oddsdóttir

Abstract:

Introduction: Whiplash-Associated Disorder (WAD) is a health problem characterized by motor, neurological and psychosocial symptoms, stressing the need for a multimodal treatment approach. To achieve an individualized multimodal approach, prognostic factors need to be identified early using validated patient-reported and objective outcome measures. The aim of this study is to demonstrate the degree of association between patient-reported and clinical outcome measures of WAD patients in the subacute phase. Methods: Individuals (n=41) with subacute (≥1, ≤3 months) WAD (I-II), medium to high-risk symptoms, or neck pain rating ≥ 4/10 on the Visual Analog Scale (VAS) were examined. Outcome measures included measurements of movement control (Butterfly test) and cervical active range of motion (cAROM) using the NeckSmart system, which uses an inertial measurement unit (IMU) connected to a computer. The IMU sensor is placed on the participant's head, and the participant receives visual feedback about the movement of the head. Patient-reported neck disability, pain intensity, general health, self-perceived handicap, central sensitization, and difficulties due to dizziness were measured using questionnaires. Excel and R statistical software were used for statistical analyses. Results: Forty-one participants, 15 males (37%), 26 females (63%), mean (SD) age 36.8 (±12.7), underwent data collection. Mean amplitude accuracy (AA) (SD) in the Butterfly test for easy, medium, and difficult paths were 2.4mm (0.9), 4.4mm (1.8), and 6.8mm (2.7), respectively. Mean cAROM (SD) for flexion, extension, left-, and right rotation were 46.3° (18.5), 48.8° (17.8), 58.2° (14.3), and 58.9° (15.0), respectively. Mean scores on the Neck Disability Index (NDI), VAS, Dizziness Handicap Inventory (DHI), Central Sensitization Inventory (CSI), and 36-Item Short Form Survey RAND version (RAND) were 43% (17.4), 7 (1.7), 37 (25.4), 51 (17.5), and 39.2 (17.7) respectively. Females showed significantly greater deviation for AA compared to males for easy and medium Butterfly paths (p<0.05). Statistically significant moderate to strong positive correlations were found between the DHI and the easy (r=0.6, p=0.05), medium (r=0.5, p=0.05) and difficult (r=0.5, p<0.05) Butterfly paths, between the total RAND score and all cAROMs (r between 0.4-0.7, p≤0.05) except flexion (r=0.4, p=0.7), and between the NDI score and the CSI (r=0.7, p<0.01), VAS (r=0.5, p<0.01), and DHI (r=0.7, p<0.01) scores, respectively. Discussion: All patient-reported and objective measures were found to be outside the reference range. Results suggest females have worse movement control in the neck in the subacute WAD phase. However, no statistical difference based on gender was found in the patient-reported measures, suggesting that females might have worse movement control than males in general in this phase. The correlation found between the DHI and the Butterfly test can be explained by the fact that the DHI measures proprioceptive symptoms like dizziness and eye movement disorders that can affect the outcome of movement control tests. A correlation was found between the total RAND score and cAROM, suggesting that a reduced range of motion affects quality of life. Significance: The NeckSmart system can detect abnormalities in cAROM, fine movement control, and kinesthesia of the neck. Results suggest females have worse movement control than males. Results show a moderate to high correlation between several patient-reported and objective measurements.
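The associations reported above are correlation coefficients between questionnaire scores and objective NeckSmart measures. As a minimal illustration of how such a coefficient is obtained, the sketch below uses SciPy on synthetic values; the abstract does not state whether Pearson or Spearman coefficients were used, so both are shown, and the data are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Illustrative sketch with synthetic data: correlate a patient-reported score
# (e.g., DHI) with an objective measure (e.g., Butterfly amplitude accuracy).

rng = np.random.default_rng(1)
dhi = rng.uniform(0, 100, size=41)                              # hypothetical DHI scores
butterfly_aa = 2.0 + 0.03 * dhi + rng.normal(0, 0.8, size=41)   # mm deviation

r_pearson, p_pearson = stats.pearsonr(dhi, butterfly_aa)
rho_spearman, p_spearman = stats.spearmanr(dhi, butterfly_aa)
print(f"Pearson r = {r_pearson:.2f} (p = {p_pearson:.3f})")
print(f"Spearman rho = {rho_spearman:.2f} (p = {p_spearman:.3f})")
```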

Keywords: whiplash associated disorders, car-collision, neck, trauma, subacute

Procedia PDF Downloads 52