Search results for: computer based environment
312 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method
Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez
Abstract:
Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modeling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics
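To make the lumped formulation concrete, the following is a minimal sketch of a three-control-volume tank model in Python. All closure coefficients, masses, and thermophysical values are illustrative placeholders, not the calibrated relations obtained by the paper's inverse method, and the ullage mass is held constant for simplicity.

```python
# Minimal sketch of a lumped three-control-volume tank model (ullage, liquid, wall).
# Closure coefficients are illustrative assumptions; in the paper they are
# calibrated against experimental data via an inverse method.
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical closure and material parameters (all assumed for illustration)
h_wu, h_wl, h_ul = 0.8, 1.2, 0.5       # wall-ullage, wall-liquid, ullage-liquid heat transfer, W/K
k_evap = 1e-6                          # interfacial mass transfer coefficient, kg/(s*K)
c_u, c_l, c_w = 10.4e3, 9.5e3, 0.9e3   # specific heats, J/(kg*K) (approximate)
m_u, m_w = 0.05, 20.0                  # ullage gas and wall masses, kg
L_v = 446e3                            # latent heat of vaporization, J/kg (LH2, approx.)
T_amb, h_amb = 293.0, 2.0              # ambient temperature, K, and external film coefficient, W/K
T_sat = 20.3                           # saturation temperature at tank pressure, K (approx.)

def rhs(t, y):
    T_u, T_l, T_w, m_liq = y
    mdot = k_evap * max(T_l - T_sat, 0.0)      # evaporation rate at the interface
    dT_u = (h_wu*(T_w - T_u) + h_ul*(T_l - T_u)) / (m_u*c_u)
    dT_l = (h_wl*(T_w - T_l) - h_ul*(T_l - T_u) - mdot*L_v) / (m_liq*c_l)
    dT_w = (h_amb*(T_amb - T_w) - h_wu*(T_w - T_u) - h_wl*(T_w - T_l)) / (m_w*c_w)
    return [dT_u, dT_l, dT_w, -mdot]

sol = solve_ivp(rhs, (0, 3600), [21.0, 20.5, 80.0, 5.0], max_step=1.0)
print(sol.y[:, -1])   # state after one hour of self-pressurization
```

Calibration would then wrap this integration in a nonlinear optimizer that tunes the closure coefficients until the model reproduces measured pressure and temperature histories.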
Procedia PDF Downloads 92
311 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
Fluid dynamic computation of wind-induced forces on bluff bodies, e.g., light flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization as the vorticity is strongly localized, implicit accounting for the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the accuracy achievable. In the classical VPM method, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in some regions of interest. In this paper, different strategies are presented to extend the conventional VPM method to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal substepping, to increase the accuracy of the particle convection in certain regions, as well as dynamic re-discretization of the particle map to control the global and local numbers of particles. Finally, these methods are applied to a test case, and the improvements in efficiency as well as accuracy of the proposed extensions to the method are presented, along with their relevant applications.
Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method
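As an illustration of the O(Np²) particle interaction and the temporal substepping strategy described above, here is a hedged Python sketch for regularized 2D point vortices. The regularization, Euler time stepping, and region-of-interest flag are simplistic assumptions; viscous diffusion, remeshing, and body boundary conditions are omitted.

```python
import numpy as np

def induced_velocity(pos, gamma, eps=1e-3):
    # Direct O(Np^2) Biot-Savart summation for regularized 2D point vortices
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx**2 + dy**2 + eps**2
    u = -np.sum(gamma[None, :] * dy / (2*np.pi*r2), axis=1)
    v =  np.sum(gamma[None, :] * dx / (2*np.pi*r2), axis=1)
    return np.stack([u, v], axis=1)

def advance(pos, gamma, dt, in_roi, n_sub=4):
    # Particles outside the region of interest take one Euler step of size dt;
    # particles inside take n_sub smaller steps, re-evaluating the velocity field.
    for k in range(n_sub):
        vel = induced_velocity(pos, gamma)
        if k == 0:
            pos[~in_roi] += dt * vel[~in_roi]          # single coarse step
        pos[in_roi] += (dt / n_sub) * vel[in_roi]      # fine substeps
    return pos
```

A production code would replace the direct summation with a fast multipole or tree evaluation and add the dynamic re-discretization (remeshing) step that controls the local particle count.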
Procedia PDF Downloads 262
310 Actinomycetes from Protected Forest Ecosystems of Assam, India: Diversity and Antagonistic Activity
Authors: Priyanka Sharma, Ranjita Das, Mohan C. Kalita, Debajit Thakur
Abstract:
Background: Actinomycetes are the richest source of novel bioactive secondary metabolites such as antibiotics, enzymes and other therapeutically useful metabolites with diverse biological activities. The present study aims to assess the antimicrobial potential and genetic diversity of culturable Actinomycetes isolated from protected forest ecosystems of Assam, which include Kaziranga National Park (26°30′-26°45′N and 93°08′-93°36′E), Pobitora Wildlife Sanctuary (26°12′-26°16′N and 91°58′-92°05′E) and Gibbon Wildlife Sanctuary (26°40′-26°45′N and 94°20′-94°25′E), located in the North-eastern part of India. Northeast India is a part of the Indo-Burma mega biodiversity hotspot, and most of the protected forests of this region are still unexplored for the isolation of effective antibiotic-producing Actinomycetes. Thus, there is a tremendous possibility that these virgin forests could be a potential storehouse of novel microorganisms, particularly Actinomycetes, exhibiting diverse biological properties. Methodology: Soil samples were collected from different ecological niches of the protected forest ecosystems of Assam, and Actinomycetes were isolated by the serial dilution spread plate technique using five selective isolation media. Preliminary screening of Actinomycetes for antimicrobial activity was done by the spot inoculation method and secondary screening by the disc diffusion method against several test pathogens, including methicillin-resistant Staphylococcus aureus (MRSA). The strains were further screened for the presence of antibiotic synthetic genes such as type I polyketide synthase (PKS-I), type II polyketide synthase (PKS-II) and non-ribosomal peptide synthetase (NRPS) genes. Genetic diversity of the Actinomycetes producing antimicrobial metabolites was analyzed through 16S rDNA-RFLP using the HinfI restriction endonuclease. Results: Based on phenotypic characterization, a total of 172 morphologically distinct Actinomycetes were isolated and screened for antimicrobial activity by the spot inoculation method on agar medium. Among the strains tested, 102 (59.3%) showed activity against Gram-positive bacteria, 98 (56.97%) against Gram-negative bacteria, 92 (53.48%) against Candida albicans MTCC 227, and 130 (75.58%) showed activity against at least one of the test pathogens. Twelve Actinomycetes exhibited broad-spectrum antimicrobial activity in the secondary screening. Taxonomic identification of these twelve strains by 16S rDNA sequencing revealed that Streptomyces was the predominant genus. Detection of the PKS-I, PKS-II and NRPS genes indicated diverse bioactive products of these twelve Actinomycetes. Genetic diversity analysis by 16S rDNA-RFLP indicated that Streptomyces was the dominant genus amongst the antimicrobial metabolite producing Actinomycetes. Conclusion: These findings imply that Actinomycetes from the protected forest ecosystems of Assam, India, are a potential source of bioactive secondary metabolites. These areas are as yet poorly studied and represent a diverse and largely unscreened ecosystem for the isolation of potent Actinomycetes producing antimicrobial secondary metabolites. Detailed characterization of the bioactive Actinomycetes, as well as purification and structure elucidation of the bioactive compounds from the potent Actinomycetes, is the subject of ongoing investigation.
Exploiting Actinomycetes from such unexplored forest ecosystems is thus a promising route to the development of bioactive products.
Keywords: Actinomycetes, antimicrobial activity, forest ecosystems, RFLP
Procedia PDF Downloads 391
309 Baseline Data for Insecticide Resistance Monitoring in Tobacco Caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae) on Cole Crops
Authors: Prabhjot Kaur, B.K. Kang, Balwinder Singh
Abstract:
The tobacco caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae), is an agriculturally important pest species. S. litura has a wide host range, with approximately 150 recorded plant species worldwide. In Punjab, this pest attains sporadic status primarily on cauliflower, Brassica oleracea (L.). It destroys vegetable crops and particularly prefers the family Cruciferae. However, it is also observed feeding on other crops such as arbi, Colocasia esculenta (L.), mung bean, Vigna radiata (L.), sunflower, Helianthus annuus (L.), cotton, Gossypium hirsutum (L.), castor, Ricinus communis (L.), etc. Larvae of this pest completely devour the leaves of infested plants, resulting in huge crop losses ranging from 50 to 70 per cent. Indiscriminate and continuous use of insecticides has contributed to the development of insecticide resistance in insects and caused environmental degradation as well. Moreover, baseline data regarding the toxicity of newer insecticides would help in understanding the level of resistance developed in this pest, and any possible cross-resistance therein could be assessed in advance. Therefore, the present studies on the development of resistance in S. litura against four new-chemistry insecticides (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad) were carried out in the Toxicology laboratory, Department of Entomology, Punjab Agricultural University, Ludhiana, Punjab, India, during the year 2011-12. Various stages of S. litura (eggs, larvae) were collected from four different locations (Malerkotla, Hoshiarpur, Amritsar and Samrala) of Punjab. Resistance develops in the third instars of lepidopterous pests; therefore, larval bioassays were conducted on thirty third-instar larvae to estimate the response of field populations of S. litura under laboratory conditions at 25±2°C and 65±5 per cent relative humidity. The leaf-dip bioassay technique with diluted insecticide formulations, as recommended by the Insecticide Resistance Action Committee (IRAC), was performed in the laboratory with seven to ten treatments depending on the insecticide class. LC50 values were estimated by probit analysis after correction for control mortality and were used to calculate resistance ratios (RR). The LC50 values worked out for emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad are 0.081, 0.088, 0.380 and 4.00 parts per million (ppm) against pest populations collected from Malerkotla; 0.051, 0.060, 0.250 and 3.00 ppm for Amritsar; 0.002, 0.001, 0.0076 and 0.10 ppm for Samrala; and 0.000014, 0.00001, 0.00056 and 0.003 ppm against the pest population of Hoshiarpur, respectively. The LC50 values for populations collected from these four locations were in the order Malerkotla > Amritsar > Samrala > Hoshiarpur for the insecticides tested (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad). Based on the LC50 values obtained, emamectin benzoate (0.000014 ppm) was found to be the most toxic among all the tested populations, followed by chlorantraniliprole (0.00001 ppm), indoxacarb (0.00056 ppm) and spinosad (0.003 ppm), respectively. The pairwise correlation coefficients of LC50 values indicated a lack of cross-resistance for emamectin benzoate, chlorantraniliprole, spinosad and indoxacarb in populations of S. litura from Punjab. These insecticides may prove to be promising substitutes for the effective control of insecticide-resistant populations of S. litura in Punjab state, India.
Keywords: Spodoptera litura, insecticides, toxicity, resistance
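For readers unfamiliar with the bioassay statistics, the sketch below shows one common way to estimate an LC50 by probit analysis after Abbott's correction for control mortality. The dose-mortality numbers are hypothetical, not the study's data.

```python
# Hedged sketch: Abbott's correction followed by a probit fit to estimate LC50.
import numpy as np
import statsmodels.api as sm

doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0])          # ppm (illustrative)
n_tested = np.full(5, 30)                               # 30 third-instar larvae per dose
n_dead = np.array([3, 8, 15, 22, 28])
control_mortality = 0.05

# Abbott's correction for natural (control) mortality
p_obs = n_dead / n_tested
p_corr = np.clip((p_obs - control_mortality) / (1 - control_mortality), 1e-6, 1 - 1e-6)

# Probit regression of corrected mortality on log10(dose)
X = sm.add_constant(np.log10(doses))
fit = sm.GLM(p_corr, X,
             family=sm.families.Binomial(link=sm.families.links.Probit()),
             var_weights=n_tested).fit()
b0, b1 = fit.params
lc50 = 10 ** (-b0 / b1)        # dose where the probit of mortality is zero (50%)
print(f"LC50 ~ {lc50:.3f} ppm")
```

The resistance ratio then follows as the field population's LC50 divided by the LC50 of a susceptible reference population.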
Procedia PDF Downloads 342
308 Comparisons of Drop Jump and Countermovement Jump Performance for Male Basketball Players with and without Low-Dye Taping Application
Authors: Chung Yan Natalia Yeung, Man Kit Indy Ho, Kin Yu Stan Chan, Ho Pui Kipper Lam, Man Wah Genie Tong, Tze Chung Jim Luk
Abstract:
Excessive foot pronation is a well-known risk factor for knee and foot injuries such as patellofemoral pain, patellar and Achilles tendinopathy, and plantar fasciitis. Low-Dye taping (LDT) is commonly applied by basketball players to control excessive foot pronation for pain control and injury prevention. The primary potential benefits of using LDT include providing additional support to the medial longitudinal arch and restricting excessive midfoot and subtalar motion in weight-bearing activities such as running and landing. Meanwhile, the restrictions provided by the rigid tape may also potentially limit functional joint movements and sports performance. Coaches and athletes need to weigh the potential benefits and harmful effects before deciding whether applying the LDT technique is worthwhile. However, the influence of LDT on basketball-related performance such as explosive and reactive strength is not well understood. Therefore, the purpose of this study was to investigate the change in drop jump (DJ) and countermovement jump (CMJ) performance before and after LDT application in collegiate male basketball players. In this within-subject crossover study, 12 healthy male basketball players (age: 21.7 ± 2.5 years) with at least 3 years of regular basketball training experience were recruited. The navicular drop (ND) test was adopted for screening, and only those with excessive pronation (ND ≥ 10 mm) were included. Participants with recent lower limb injury history were excluded. Recruited subjects were required to perform ND, DJ (from a platform of 40 cm height) and CMJ (without arm swing) tests in series during taped and non-taped conditions in counterbalanced order. The reactive strength index (RSI) was calculated by dividing the measured flight time by the ground contact time. For the DJ and CMJ tests, the best of three trials was used for analysis. The difference between taped and non-taped conditions for each test was further calculated through standardized effects ± 90% confidence intervals (CI) with clinical magnitude-based inference (MBI). A paired-samples t-test showed a significant decrease in ND (-4.68 ± 1.44 mm; 95% CI: -3.77, -5.60; p < 0.05), while MBI demonstrated a most likely beneficial and large effect (standardized effect: -1.59 ± 0.27) in the LDT condition. For the DJ test, significant increases in both flight time (25.25 ± 29.96 ms; 95% CI: 6.22, 44.28; p < 0.05) and RSI (0.22 ± 0.22; 95% CI: 0.08, 0.36; p < 0.05) were observed. In the taped condition, MBI showed a very likely beneficial and moderate effect (standardized effect: 0.77 ± 0.49) in flight time, a possibly beneficial and small effect (standardized effect: -0.26 ± 0.29) in ground contact time, and a very likely beneficial and moderate effect (standardized effect: 0.77 ± 0.42) in RSI. No significant difference in CMJ was observed (95% CI: -2.73, 2.08; p > 0.05). For basketball players with pes planus, applying LDT could substantially support the foot by elevating the navicular height and potentially provide acute beneficial effects on reactive strength performance. Meanwhile, no significant harmful effect on CMJ was observed. Basketball players may consider applying LDT before games or training to enhance reactive strength performance. However, since the observed effects in this study cannot be generalized to players without excessive foot pronation, further studies on players with normal foot arch or navicular height are recommended.
Keywords: flight time, pes planus, pronated foot, reactive strength index
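A minimal illustration of the RSI computation used above (flight time divided by ground contact time); the numbers are hypothetical, not data from the study.

```python
# Reactive strength index from drop-jump timing; values are illustrative only.
def reactive_strength_index(flight_time_ms: float, contact_time_ms: float) -> float:
    return flight_time_ms / contact_time_ms

no_tape   = reactive_strength_index(flight_time_ms=520.0, contact_time_ms=260.0)  # 2.00
with_tape = reactive_strength_index(flight_time_ms=545.0, contact_time_ms=250.0)  # 2.18
print(f"RSI change with Low-Dye taping: {with_tape - no_tape:+.2f}")
```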
Procedia PDF Downloads 155
307 The Study of Fine and Nanoscale Gold in the Ores of Primary Deposits and Gold-Bearing Placers of Kazakhstan
Authors: Omarova Gulnara, Assubayeva Saltanat, Tugambay Symbat, Bulegenov Kanat
Abstract:
The article discusses the problem of developing a methodology for studying fine and nanoscale gold in ores and placers of primary deposits, which will allow us to develop schemes for revealing dispersed gold inclusions and thus improve its recovery rate to increase the gold reserves of the Republic of Kazakhstan. The type of gold studied is characterized by a number of features. The conditions of its concentration and distribution in ore bodies and formations, as well as the possibility of reliably determining it by "traditional" methods, differ significantly from those of fine gold (less than 0.25 microns) and even more so from those of larger grains. The mineral composition of rocks (metasomatites) and gold ore, and the mineralization associated with them, were studied in detail on the Kalba ore field in Kazakhstan. Mineralized zones were identified, and samples were taken from them for analytical studies. The research revealed paragenetic relationships of newly formed mineral formations at the nanoscale, which makes it possible to clarify the conditions for the formation of deposits with a particular type of mineralization. This will provide significant assistance in developing a scheme for their study. Typomorphic features of gold were revealed, and mechanisms of formation and aggregation of gold nanoparticles were proposed. The presence of a large number of particles isolated at the laboratory stage from concentrates of gravitational enrichment can serve as an indicator of the presence of even smaller particles in the object. Even the most advanced devices based on gravitational methods of gold concentration extract metal at a level of around 50%, while pulverized metal is extracted much worse, and gold of less than 1 micron in size is extracted at only a few per cent. Therefore, when particles of gold smaller than 10 microns are detected, their actual numbers may be significantly higher than expected. In particular, at the studied sites, enrichment of slurry and samples with volumes up to 1 m³ was carried out using a screw lock or separator to produce a final concentrate weighing up to several kilograms. Free gold particles were extracted from the concentrates in the laboratory using a number of processes (magnetic and electromagnetic separation, washing with bromoform in a cup to obtain an ultraconcentrate, etc.) and examined under electron microscopes to investigate the nature of their surface and chemical composition. The main result of the study was the detection of gold nanoparticles located on the surface of loose metal grains. The most characteristic forms of gold secretions are individual nanoparticles and aggregates of different configurations. Sometimes, aggregates form solid dense films, deposits, and crusts, all of which are confined to the negative forms of the nano- and microrelief on the surfaces of gold grains. The results will provide significant knowledge about the prevalence and distribution conditions of fine and nanoscale gold in Kazakhstan deposits, as well as the development of methods for studying it, which will minimize losses of this type of gold during extraction. Acknowledgments: This publication has been produced within the framework of the Grant "Development of methodology for studying fine and nanoscale gold in ores of primary deposits, placers and products of their processing" (АР23485052, №235/GF24-26).
Keywords: electron microscopy, micromineralogy, placers, fine and nanoscale gold
Procedia PDF Downloads 21
306 International Solar Alliance: A Case for Indian Solar Diplomacy
Authors: Swadha Singh
Abstract:
The International Solar Alliance (ISA) is the foremost treaty-based global organization concerned with tapping the potential of sun-abundant nations between the Tropics of Cancer and Capricorn, and it enables cooperation among them. As a founding member of the International Solar Alliance, India exhibits its positioning as an upcoming leader in clean energy. India has set ambitious goals and targets to expand the share of solar in its energy mix and is playing a proactive role at both the regional and global levels. ISA aims to serve multiple goals: bringing about large-scale commercialization of solar power, boosting domestic manufacturing, and leveraging solar diplomacy in African countries, amongst others. Against this backdrop, this paper attempts to examine the ways in which ISA, as an intergovernmental organization under Indian leadership, can leverage the cause of clean energy (solar) diplomacy and effectively shape partnerships and collaborations with other developing countries in terms of sharing solar technology, capacity building, risk mitigation, mobilizing financial investment and providing an aggregate market. A more specific focus of ISA is on developing countries, which, in the absence of a collective, are constrained by technology and capital scarcity despite being naturally endowed with solar resources. Solar-rich but finance-constrained economies face political risk, foreign exchange risk, and off-taker risk. Scholars argue that aligning India's climate change discourse and growth prospects in its engagements, collaborations, and partnerships at the bilateral, multilateral and regional levels can help promote trade, attract investments, and promote a resilient energy transition both in India and in partner countries. For developing countries, coming together in an action-oriented way on issues of climate and clean energy is particularly important, since it is developing and underdeveloped countries that face multiple and coalescing challenges such as the adverse impact of climate change, uneven and low access to reliable energy, and pressing employment needs. Investing in green recovery is agreed to be an assured way to create resilient value chains, create sustainable livelihoods, and help mitigate climate threats. If India is able to 'green its growth' process, it holds the potential to emerge as a climate leader internationally. It can use its experience in the renewable sector to guide other developing countries in balancing the multiple similar objectives of development, energy security, and sustainability. The challenges underlying solar expansion in India have lessons to offer other developing countries, giving India an opportunity to assume a leadership role in solar diplomacy and expand its geopolitical influence through intergovernmental organizations such as ISA. It is noted that India has limited capacity to directly provide financial funds and support, and is not a leading manufacturer of cheap solar equipment, as China is; however, India can nonetheless leverage its large domestic market to scale up the commercialization of solar power and offer insights and learnings to similarly placed solar-abundant countries. The paper examines the potential of, and the limits placed on, India's solar diplomacy.
Keywords: climate diplomacy, energy security, solar diplomacy, renewable energy
Procedia PDF Downloads 118
305 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence
Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti
Abstract:
In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system of all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is becoming relevant for local territories as well. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out the multiple functions to which it is called, the DMO can leverage a collective intelligence that comes from the ability to pool the information, explicit and tacit knowledge, and relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, many of them at low cost, that need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy. The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadata annotation of relevant sources (reconnaissance of official sources, administrative archives and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to the level of consistency with information needs, the freshness of information and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The resulting framework consists of six thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that gather a specific information need, represented by a scheme of questions to be answered through the analysis of available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources. Application to the case study led to the creation of a decision support system that allows: integration of data from heterogeneous sources, including through the execution of automated web crawling procedures for the ingestion of social and web information; reading and interpretation of data and metadata through guided navigation paths in the key of digital storytelling; and implementation of complex analysis capabilities through the use of data mining algorithms, such as for the prediction of tourist flows.
Keywords: collective intelligence, data framework, destination management, smart tourism
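As a sketch of what the automated web-data ingestion in phase (4) might look like, the following Python fragment scrapes review scores and aggregates one hypothetical "value" indicator. The URL structure and CSS selectors are pure assumptions, and real crawlers for sources such as Booking.com or Tripadvisor must honor those sites' terms of service.

```python
# Hedged sketch of automated ingestion for one indicator; selectors are hypothetical.
import requests
from bs4 import BeautifulSoup

def crawl_accommodation_reviews(url: str) -> list[dict]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for card in soup.select(".review-card"):            # hypothetical selector
        records.append({
            "score": float(card.select_one(".score").text),
            "date": card.select_one(".date").text,
        })
    return records

def indicator_mean_review_score(records: list[dict]) -> float:
    # One indicator in the "value" thematic area: mean visitor review score
    return sum(r["score"] for r in records) / len(records)
```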
Procedia PDF Downloads 121
304 Deep Learning for SAR Image Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the recently investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
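The abstract does not give the network details, so the following PyTorch sketch only illustrates the general idea: a small CNN mapping two hybrid-polarimetric channels to pseudo full-pol channels, trained with a composite loss that adds a scattering-property term (here, total backscattered power) to plain pixel fidelity. Architecture, channel counts, and loss weights are assumptions.

```python
# Hedged sketch of hybrid-to-full polarimetric augmentation with a composite loss.
import torch
import torch.nn as nn

class HybridToFullPol(nn.Module):
    def __init__(self, in_ch=2, out_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=0.5):
    # Pixel-wise fidelity plus a constraint on total power (span): one example of
    # controlling training with a characteristic polarimetric feature.
    mse = nn.functional.mse_loss(pred, target)
    span = nn.functional.mse_loss(pred.sum(dim=1), target.sum(dim=1))
    return mse + alpha * span

model = HybridToFullPol()
x = torch.randn(1, 2, 64, 64)   # hybrid-pol channels (placeholder data)
y = torch.randn(1, 4, 64, 64)   # full-pol reference channels (placeholder data)
loss = composite_loss(model(x), y)
loss.backward()
```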
Procedia PDF Downloads 67
303 Bio-Hub Ecosystems: Expansion of Traditional Life Cycle Analysis Metrics to Include Zero-Waste Circularity Measures
Authors: Kimberly Samaha
Abstract:
In order to attract new types of investors into the emerging Bio-Economy, a new set of metrics and a measurement system are needed to better quantify the environmental, social and economic impacts of circular zero-waste design. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. The lack of an economically viable business model for bioenergy facilities has resulted in the continuation of idled and decommissioned plants, in particular the forestry-based plants which have been an invaluable outlet for surplus woody biomass, forest health improvement, timber production enhancement, and especially reduction of wildfire risk. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems: a Bio-Hub model that first targets a 'whole-tree' approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. It proposes not only models for the integration of forestry, aquaculture, and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but also allows for the early measurement of the circularity and impact of resource use and of investment risk mitigation for these systems. Typically, life cycle analyses measure the environmental impacts of different industrial production stages and are not integrated with indicators of material use circularity. This concept paper proposes the further development of a new set of metrics that would illustrate not only the typical life-cycle analysis (LCA), which shows the reduction in greenhouse gas (GHG) emissions, but also zero-waste circularity measures based on the mass balance of the full value chain of the raw material and its energy content/caloric value. These new measures quantify key impacts in making hyper-efficient use of natural resources and eliminating waste to landfills. The project utilized traditional LCA using the GREET model, where the standalone biomass energy plant case was contrasted with the integration of a jet-fuel biorefinery. The methodology was then expanded to include combinations of co-hosts that optimize the life cycle of woody biomass from tree to energy, CO₂, heat and wood ash, both in terms of energy/caloric value and of mass balance, so as to include the reuse of waste streams which are typically landfilled. The major findings of the formal LCA study resulted in the masterplan for the first Bio-Hub, to be built in West Enfield, Maine. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. If proven as a model, the expedited roll-out of these innovative scenarios can set a new standard for circular zero-waste projects that advance the critical transition from the current 'take-make-dispose' paradigm inherent in the energy, forestry and food industries to a more sustainable bio-economy paradigm where waste streams become valuable inputs, supporting local and rural communities in simple, sustainable ways.
Keywords: bio-economy, biomass energy, financing, metrics
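One way to read the proposed zero-waste circularity measure is as a simple mass balance over the value chain; the sketch below scores the fraction of input mass that ends up in products, energy feedstock, or co-hosted reuse rather than landfill. Stream names and tonnages are illustrative, not figures from the West Enfield masterplan.

```python
# Hedged sketch of a mass-balance circularity score; all values are illustrative.
streams_tonnes = {
    "electricity_feedstock": 60_000,   # woody biomass combusted for power
    "wood_products": 25_000,           # co-hosted wood processing output
    "ash_reused": 1_500,               # wood ash to agriculture as soil amendment
    "co2_to_greenhouse": 3_000,        # flue-gas CO2 routed to co-hosted agriculture
    "landfilled": 500,                 # residual waste stream
}

total_in = sum(streams_tonnes.values())
circularity = 1.0 - streams_tonnes["landfilled"] / total_in
print(f"Mass-balance circularity: {circularity:.1%}")   # ~99.4% in this example
```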
Procedia PDF Downloads 156
302 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics
Authors: Varun Kumar, Chandra Shakher
Abstract:
Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications, including collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), beam homogenization for high-power lasers, Shack-Hartmann sensors (in which they are a critical component), fiber-optic coupling, and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and the reduction of alignment and packaging costs are necessary. Compliance with high-quality standards in the manufacturing of micro-optical components is a precondition for competitiveness in worldwide markets, so high demands are put on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. The technique should provide non-contact, non-invasive and full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of microlenses; however, these techniques require considerable experimental effort and are also time-consuming. Digital holography (DH) overcomes the above-described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths thanks to numerical reconstruction. Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a first digital hologram is recorded in the absence of the sample (microlens array) as a reference hologram. A second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and of air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported.
The sag of the microlenses agrees well, within experimental limits, with the values provided in the manufacturer's specification.
Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy
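The phase-to-profile step described above reduces to a short calculation: the unwrapped phase difference gives the optical path change, and dividing by the refractive-index contrast gives the thickness profile, from which sag and radius of curvature follow. The wavelength, refractive index, and lens dimensions below are assumptions for illustration.

```python
# Hedged sketch of the phase-to-surface-profile conversion; parameters are assumed.
import numpy as np

wavelength = 632.8e-9          # He-Ne laser, a common choice in such setups (assumed)
n_lens, n_air = 1.46, 1.0      # fused-silica microlens (assumed)

def thickness(delta_phi):
    # t(x, y) = lambda * delta_phi / (2*pi*(n_lens - n_air))
    return wavelength * delta_phi / (2*np.pi*(n_lens - n_air))

sag = thickness(40.0)                          # 40 rad peak phase difference -> ~8.8 um
semi_diameter = 120e-6                         # lens semi-diameter (assumed)
roc = (semi_diameter**2 + sag**2) / (2*sag)    # spherical-cap radius of curvature
print(f"sag = {sag*1e6:.2f} um, R = {roc*1e6:.0f} um")   # ~8.76 um, ~827 um
```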
Procedia PDF Downloads 498
301 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing
Authors: Leonie Bradfield, Stephen Fityus, John Simmons
Abstract:
The shearing behavior of current and planned coal mine spoil dumps up to 400 m in height is studied using large-sample, high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights ( > 350 m) are exceeding the scale ( ≤ 120 m) for which reliable design information exists, and the fact that modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based either on infrequent, very small-scale tests where oversize particles are scalped to comply with device specimen size capacity, such that the influence of prototype-sized particles on shear strength is not captured, or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by the slope performance of dumps up to 120 m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720 mm x 720 mm x 600 mm) and stress (σn up to 4.6 MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill. The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlight several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where the failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicate that shear strength envelopes for coal-mine spoils abundant in rock fragments are not in fact curved, and that the shape of the failure envelope is ultimately determined by the strength of the rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump
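To make the linear-versus-curvilinear distinction concrete, the sketch below fits both a Mohr-Coulomb line and a concave-down power law to illustrative shear test data. The power-law form is one common curvilinear choice in rockfill practice; it is not necessarily the envelope form used in the published framework, and the data are invented for demonstration.

```python
# Hedged sketch: linear Mohr-Coulomb fit versus power-law envelope (tau = A*sigma^b).
import numpy as np
from scipy.optimize import curve_fit

sigma_n = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 4.6])     # normal stress, MPa (illustrative)
tau = np.array([0.19, 0.42, 0.75, 1.35, 1.85, 2.60])   # peak shear stress, MPa

# Linear (Mohr-Coulomb): tau = c + sigma_n * tan(phi)
(c, tan_phi), _ = curve_fit(lambda s, c, t: c + t*s, sigma_n, tau)

# Curvilinear power law: tau = A * sigma_n**b (b < 1 gives a concave-down envelope)
(A, b), _ = curve_fit(lambda s, A, b: A*np.power(s, b), sigma_n, tau, p0=(0.8, 0.9))

print(f"Mohr-Coulomb: c = {c:.2f} MPa, phi = {np.degrees(np.arctan(tan_phi)):.1f} deg")
print(f"Power law:    A = {A:.2f}, b = {b:.2f}")
```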
Procedia PDF Downloads 161
300 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors
Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara
Abstract:
Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bedrest has also increased. Since patients required to maintain a constant lying posture risk developing pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep without violating patient privacy. In this study, we therefore propose a roll-over detection method based on the data obtained from this bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component during roll-over generates large vibrations. Thus, analysis of the body-motion component facilitates detection of a roll-over tendency. The large vibration associated with roll-over motion has a strong effect on the Root Mean Square (RMS) value of the time series of the body-motion component, calculated over short 10-s segments. After calculation, the RMS value of each segment was compared to a threshold value set in advance. If the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and was placed between the mattress and bedframe. Recorded signals passed through an analog band-pass filter (BPF) operating over the 0.16-16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component. The output from the BPF was A/D converted with a sampling frequency of 100 Hz, and the measurement time was 480 seconds. The numbers of subjects and data sets corresponded to 5 and 10, respectively. Subjects lay on a mattress in the supine position. During data measurement, subjects, upon the investigator's instruction, were asked to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. Recorded data were divided into 48 segments at 10-s intervals, and the corresponding RMS value for each segment was calculated. The system was evaluated by the agreement between the investigator's instructions and the detected segments. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over tendencies were observed to demonstrate a large amplitude. However, clear differences between decubitus position and roll-over motion could not be confirmed. Extant research possessed a disadvantage in terms of patient privacy. The proposed study, however, demonstrates more precise detection of patient roll-over tendencies without violating privacy. As a future prospect, decubitus estimation before and after roll-over could be attempted. Since clear differences between decubitus position and roll-over motion could not be confirmed in this paper, future studies could be based on the utilization of the respiration and pulse components.
Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement
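The segment-wise RMS thresholding described above is straightforward to express in code. The sketch below substitutes a digital band-pass filter for the paper's analog BPF, and the threshold and input signal are placeholders rather than values from the study.

```python
# Hedged sketch: band-pass filter, split into 10-s segments, flag segments whose
# RMS exceeds a preset threshold (roll-over candidates).
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 100                      # sampling frequency, Hz
SEG_LEN = 10 * FS             # 10-s segments -> 48 segments in a 480-s recording

def detect_rollover(raw: np.ndarray, threshold: float) -> list[int]:
    sos = butter(4, [0.16, 16], btype="bandpass", fs=FS, output="sos")
    x = sosfiltfilt(sos, raw)                     # digital stand-in for the analog BPF
    n_seg = len(x) // SEG_LEN
    rms = np.sqrt(np.mean(x[:n_seg*SEG_LEN].reshape(n_seg, SEG_LEN)**2, axis=1))
    return [i for i, r in enumerate(rms) if r > threshold]

signal = np.random.randn(480 * FS) * 0.01         # placeholder for recorded data
flagged = detect_rollover(signal, threshold=0.05) # indices of roll-over segments
```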
Procedia PDF Downloads 121
299 Influence of Water Physicochemical Properties and Vegetation Type on the Distribution of Schistosomiasis Intermediate Host Snails in Nelson Mandela Bay
Authors: Prince S. Campbell, Janine B. Adams, Melusi Thwala, Opeoluwa Oyedele, Paula E. Melariri
Abstract:
Schistosomiasis is an infectious water-borne disease of substantial medical and veterinary importance that is transmitted by Schistosoma flatworms. The transmission and spread of the disease are geographically and temporally confined to water bodies (rivers, lakes, lagoons, dams, etc.) inhabited by its obligate intermediate host snails, and depend on human water contact. Human infection with the parasite occurs via skin penetration subsequent to exposure to water infested with schistosome cercariae. Environmental factors play a crucial role in the spread of the disease, as the survival of intermediate host snails is dependent on favourable conditions. These factors include physical and chemical components of the water, such as pH, salinity, temperature, electrical conductivity, dissolved oxygen, turbidity, water hardness, total dissolved solids, and velocity, as well as biological factors such as predator-prey interactions, competition, food availability, and the presence and density of aquatic vegetation. This study evaluated the physicochemical properties of the water bodies, the vegetation type, and the distribution and habitat presence of the snail intermediate hosts. A quantitative cross-sectional research design was employed. Eight sampling sites were selected based on their proximity to residential areas. Snails and water physicochemical measurements were collected over different seasons for 9 months. A simple dip method was used for surface water samples, and measurements were done using multiparameter meters. Snails, captured using a 300 µm mesh scoop net, and predominant plant species were gathered and transported to experts for identification. Vegetation composition and cover were visually estimated and recorded at each sampling point. Data were analysed using R software (version 4.3.1). A total of 844 freshwater snails were collected, with the genus Physa accounting for 95.9% of the snails. Bulinus and Biomphalaria snails, which serve as intermediate hosts for the disease, accounted for 0.9% and 0.6%, respectively. Indicator macrophytes such as Eichhornia crassipes, Stuckenia pectinata, Typha capensis, and floating macroalgae were found in several water bodies. A negative and weak correlation existed between the number of snails and physicochemical properties such as electrical conductivity (r=-0.240), dissolved oxygen (r=-0.185), hardness (r=-0.210), pH (r=-0.235), salinity (r=-0.242), temperature (r=-0.273), and total dissolved solids (r=-0.236). There was no correlation between the number of snails and turbidity (r=-0.070). Moreover, there was a negative and weak correlation between snails and vegetation coverage (r=-0.127). Findings indicated that snail abundance marginally declined with rising physicochemical concentrations, and that the majority of snails were located in regions with less vegetation cover. The reduction in Bulinus and Biomphalaria snail populations may also be attributed to other factors, such as competition among the snails. Snails of the genus Physa were abundant due to their noteworthy resilience in difficult environments. These snails could potentially function as biological control agents in areas where the disease is endemic, as they outcompete other snails, including the schistosomiasis intermediate host snails.
Keywords: intermediate host snails, physicochemical properties, schistosomiasis, vegetation type
Procedia PDF Downloads 20
298 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy
Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon
Abstract:
Purpose: Inorganic scintillating dosimetry is a recent and promising technique for solving several dosimetric issues and providing quality assurance in radiation therapy. Despite several advantages, the major issue in using scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillating detector (IR-ISD) in radiation therapy treatment, to ensure a Cerenkov-free signal and the best match between the delivered and prescribed doses during treatment. Methods: A simple and small-scale infrared inorganic scintillating detector of 100 µm diameter, with a sensitive scintillating volume of 2×10⁻⁶ mm³, was developed. A prototype of the dose verification system has been introduced based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). Overall measurements were performed in IBA™ water tank phantoms, following international Technical Reports Series recommendations (TRS 381) for radiotherapy and TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte-Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. The detector provides a fully linear response with dose in the 4 cGy to 800 cGy range, independently of the field size selected, from 5 × 5 cm² down to 0.5 × 0.5 cm². Perfect repeatability (0.2% variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD has linear behavior with dose rate (R²=1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm for the different small-field irradiations. Low-dimension 0.5 × 0.5 cm² field profiles have been characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02% while having a very low convolution effect, thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, considering energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The scintillating detector proposed in this study shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can be advanced to validation in direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).
Keywords: IR-scintillating detector, dose measurement, micro-scintillators, Cerenkov effect
Procedia PDF Downloads 182
297 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage
Authors: Andrew Laming, John Hattie, Mark Wilson
Abstract:
Nine Queensland independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean = 27) and any ATAR graduates without NAPLAN data (mean = 20). Based on Index of Community Socio-Educational Advantage (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. There were an additional 173 students not releasing their ATARs to their school (14%), requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R²=0.58). Results: School mean NAPLAN scores fitted ICSEA closely (R²=0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). The mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests that consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R² was 0.33. Discussion: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive for laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither a value-added measure nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximizing strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data are available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean
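A bare-bones sketch of the percentile shift metric: map the student's strongest NAPLAN domain to a statewide percentile and subtract it from the ATAR (itself a percentile-like rank). The statewide score distributions below are synthetic placeholders, and the ATAR-participation correction described in the abstract is omitted.

```python
# Hedged sketch of the "percentile shift" gain metric; all data are synthetic.
import numpy as np

statewide = {  # placeholder statewide NAPLAN score distributions, sorted
    "numeracy": np.sort(np.random.normal(580, 70, 50_000)),
    "reading": np.sort(np.random.normal(575, 65, 50_000)),
}

def strongest_domain_percentile(domain_scores: dict) -> float:
    # Percentile of the student's score in each domain; keep the best domain
    pcts = [100.0 * np.searchsorted(statewide[d], s) / statewide[d].size
            for d, s in domain_scores.items()]
    return max(pcts)

student = {"numeracy": 690, "reading": 640}    # hypothetical student
atar = 89.0
gain = atar - strongest_domain_percentile(student)
print(f"percentile shift: {gain:+.1f}")        # positive = ATAR above NAPLAN expectation
```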
296 Exploring the Neural Mechanisms of Communication and Cooperation in Children and Adults
Authors: Sara Mosteller, Larissa K. Samuelson, Sobanawartiny Wijeakumar, John P. Spencer
Abstract:
This study was designed to examine how humans are able to teach and learn semantic information as well as cooperate in order to jointly achieve sophisticated goals. Specifically, we are measuring individual differences in how these abilities develop from foundational building blocks in early childhood. The current study adapts a paradigm for novel noun learning developed by Samuelson, Smith, Perry, and Spencer (2011) to a hyperscanning paradigm [Cui, Bryant and Reiss, 2012]. This project measures coordinated brain activity between a parent and child using simultaneous functional near-infrared spectroscopy (fNIRS) in pairs of 2.5-, 3.5- and 4.5-year-old children and their parents. We are also separately testing pairs of adult friends. Children and parents, or adult friends, are seated across from one another at a table. The parent (in the developmental study) then teaches their child the names of novel toys. An experimenter then tests the child by presenting the objects in pairs and asking the child to retrieve one object by name. Children are asked to choose from both pairs of familiar objects and pairs of novel objects. In order to explore individual differences in cooperation with the same participants, each dyad plays a cooperative game of Jenga, in which their joint score is based on how many blocks they can remove from the tower as a team. A preliminary analysis of the noun-learning task showed that, when presented with 6 word-object mappings, children learned an average of 3 new words (50%) and that the number of objects learned by each child ranged from 2 to 4. Adults initially learned all of the new words but were variable in their later retention of the mappings, which ranged from 50% to 100%. We are currently examining differences in cooperative behavior during the Jenga game, including time spent discussing each move before it is made. Ongoing analyses are examining the social dynamics that might underlie the differences between successfully learned and unlearned words for each dyad, as well as the developmental differences observed in the study. Additionally, the Jenga game is being used to better understand individual and developmental differences in social coordination during a cooperative task. At a behavioral level, the analysis maps periods of joint visual attention between participants during the word learning and the Jenga game, using head-mounted eye trackers to assess each participant’s first-person viewpoint during the session. We are also analyzing the coherence in brain activity between participants during novel word learning and Jenga playing. The first hypothesis is that visual joint attention during the session will be positively correlated both with the number of words learned and with the number of blocks moved during Jenga before the tower falls. The next hypothesis is that successful communication of new words and success in the game will each be positively correlated with synchronized brain activity between the parent and child/the adult friends in cortical regions underlying social cognition, semantic processing, and visual processing. This study probes both the neural and behavioral mechanisms of learning and cooperation in a naturalistic, interactive and developmental context. Keywords: communication, cooperation, development, interaction, neuroscience
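The first hypothesis above is, at its core, a correlation between a behavioral measure (the share of the session spent in joint visual attention) and two outcomes (words learned, Jenga blocks moved). A minimal sketch of that test, with invented per-dyad toy values, might look like this:

```python
from scipy.stats import pearsonr

# Hypothetical per-dyad measures (all values are invented for illustration):
joint_attention = [0.42, 0.55, 0.31, 0.60, 0.48]  # share of session in joint gaze
words_learned   = [3, 4, 2, 4, 3]                 # of 6 novel word-object mappings
blocks_moved    = [12, 18, 9, 21, 15]             # Jenga blocks before collapse

r_words, p_words = pearsonr(joint_attention, words_learned)
r_blocks, p_blocks = pearsonr(joint_attention, blocks_moved)
print(f"words:  r={r_words:.2f}, p={p_words:.3f}")
print(f"blocks: r={r_blocks:.2f}, p={p_blocks:.3f}")
```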
295 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example
Authors: Alena Nesterenko, Svetlana Petrikova
Abstract:
Research evaluation is one of the most important elements of the self-regulation and development of researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is, firstly, a scientifically sound way to conduct an assessment with maximum effectiveness of work at every step and, secondly, a way to apply quantitative methods for evaluation, assessment of expert opinion and collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues which arise with these methods are the selection of experts, management of the assessment procedure, processing of the results and remuneration for the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows:
- To realize, within one platform, independent activities of different workgroups (e.g., expert officers, managers).
- To establish different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs.
- To form the required output documents for each workgroup.
- To configure information gathering for each workgroup (forms of assessment, tests, inventories).
- To create and operate personal databases of remote users.
- To set up automatic notification through e-mail.
The next stage is the development of quantitative and qualitative criteria to form a database of experts. The inventory was made so that the experts may submit not only their personal data, place of work and scientific degree but also keywords according to their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party’s demands, in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two persons per application. Experts are selected according to the keywords; this method proved effective, unlike the OECD classifier. The last stage: the choice of the experts is approved by the supervisor, and e-mails are sent to the experts with an invitation to assess the project. An expert supervisor monitors the experts’ report writing to ensure all formalities are in place (time-frame, propriety, correspondence). If the difference in assessment exceeds four points, a third evaluation is appointed. As the expert finishes work on his expert opinion, the system shows the contract marked ‘new’, managers process the contract, and the expert receives an e-mail that the contract is formed and ready to be signed. All formalities are concluded and the expert receives remuneration for his work. The specifics of the interaction of the examination officer with other experts will be presented in the report. Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation
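The evaluation workflow described above (keyword-based selection of two experts per application, with a third evaluation appointed when the two scores differ by more than four points) can be sketched as simple matching and adjudication logic. Class and function names are ours, and the keyword-overlap ranking is an assumed stand-in for the system’s actual matching rule:

```python
from dataclasses import dataclass, field

@dataclass
class Expert:
    name: str
    keywords: set

@dataclass
class Application:
    title: str
    keywords: set
    scores: list = field(default_factory=list)

def select_experts(app, experts, n=2):
    """Rank experts by keyword overlap with the application; take the top n."""
    ranked = sorted(experts, key=lambda e: len(e.keywords & app.keywords),
                    reverse=True)
    return ranked[:n]

def needs_third_review(app, threshold=4):
    """A third evaluation is appointed when two scores differ by more than 4."""
    return len(app.scores) == 2 and abs(app.scores[0] - app.scores[1]) > threshold

experts = [Expert("A", {"economics", "statistics"}),
           Expert("B", {"statistics", "ml"}),
           Expert("C", {"sociology"})]
app = Application("Grant 17", {"statistics", "economics"})
print([e.name for e in select_experts(app, experts)])  # ['A', 'B']
app.scores = [9, 3]
print(needs_third_review(app))  # True: a difference of 6 exceeds 4
```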
294 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes a progressive muscle necrosis which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a “critical period” exists between 4 and 6 weeks in the MDX mouse, when there is extensive muscle damage that is largely subclinical but evident with sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to attempt to aggravate the muscle damage in the MDX mouse and to create a wider, more readily translatable and discernible therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime of twice-weekly treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance and locomotor activity were measured after the “critical period” at around 10 weeks of age and again at 14 weeks, 6 months, 9 months and 12 months of age. In addition, kinematic gait analysis was employed, using a novel analysis algorithm, in order to compare changes in gait and fine motor skills in diseased exercised MDX mice relative to exercised wild-type mice and non-exercised MDX mice. A morphological and metabolic profile (including lipid profile) of the most severely affected muscles, the gastrocnemius and the tibialis anterior, was also measured at the same time intervals. Results indicate that by aggravating or exacerbating the underlying muscle damage in the MDX mouse by exercise, a more pronounced and severe phenotype comes to light, and this can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured by open field is not apparent at younger ages nor during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), which we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in the imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse. Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease
293 Poly (3,4-Ethylenedioxythiophene) Prepared by Vapor Phase Polymerization for Stimuli-Responsive Ion-Exchange Drug Delivery
Authors: M. Naveed Yasin, Robert Brooke, Andrew Chan, Geoffrey I. N. Waterhouse, Drew Evans, Darren Svirskis, Ilva D. Rupenthal
Abstract:
Poly(3,4-ethylenedioxythiophene) (PEDOT) is a robust conducting polymer (CP) exhibiting high conductivity and environmental stability. It can be synthesized by chemical, electrochemical or vapour phase polymerization (VPP). Dexamethasone sodium phosphate (dexP) is an anionic drug molecule which has previously been loaded onto PEDOT as a dopant via electrochemical polymerisation; however, this technique requires conductive surfaces from which polymerization is initiated. On the other hand, VPP produces highly organized biocompatible CP structures, while polymerization can be achieved on a range of surfaces with a relatively straightforward scale-up process. Following VPP of PEDOT, dexP can be loaded and subsequently released via ion-exchange. This study aimed at preparing and characterising both non-porous and porous VPP PEDOT structures, including examining drug loading and release via ion-exchange. Porous PEDOT structures were prepared by first depositing a sacrificial polystyrene (PS) colloidal template on a substrate, heat-curing this deposition and then spin-coating it with the oxidant solution (iron tosylate) at 1500 rpm for 20 sec. VPP of both porous and non-porous PEDOT was achieved by exposing the substrates to monomer vapours in a vacuum oven at 40 mbar and 40 °C for 3 hrs. Non-porous structures were prepared similarly on the same substrate but without any sacrificial template. Surface morphology, composition and electrochemical behaviour were then characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and cyclic voltammetry (CV), respectively. Drug loading was achieved by 50 CV cycles in a 0.1 M dexP aqueous solution. For drug release, each sample was exposed to 20 mL of phosphate buffered saline (PBS) placed in a water bath operating at 37 °C and 100 rpm. The film was stimulated (continuous pulses of ±1 V at 0.5 Hz for 17 mins) while immersed in PBS. Samples were collected at 1, 2, 6, 23, 24, 26 and 27 hrs and were analysed for dexP by high-performance liquid chromatography (Agilent 1200 series HPLC). AFM and SEM revealed the honeycomb nature of the prepared porous structures. XPS data showed the elemental composition of the dexP-loaded film surface, which related well with that of PEDOT and also showed that one dexP molecule was present per almost three EDOT monomer units. The reproducible electroactive nature was shown by several cycles of reduction and oxidation via CV. Drug release revealed success in drug loading via ion-exchange, with stimulated porous and non-porous structures exhibiting a proof-of-concept burst release upon application of an electrical stimulus. A similar drug release pattern was observed for porous and non-porous structures without any statistically significant difference, possibly due to the thin nature of these structures. To our knowledge, this is the first report to explore the potential of VPP-prepared PEDOT for stimuli-responsive drug delivery via ion-exchange. The produced structures were ordered and highly porous, as indicated by AFM and SEM, and exhibited good electroactivity, as shown by CV. Future work will investigate porous structures as nano-reservoirs to increase drug loading while sealing these structures to minimize spontaneous drug leakage. Keywords: PEDOT for ion-exchange drug delivery, stimuli-responsive drug delivery, template based porous PEDOT structures, vapour phase polymerization of PEDOT
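When release samples are drawn repeatedly from the same 20 mL vessel, each HPLC reading is usually corrected for the drug already withdrawn with earlier aliquots. The abstract does not state the aliquot volume or the correction used, so the sketch below is only generic cumulative-release bookkeeping with assumed values:

```python
def cumulative_release(concs_ug_ml, vessel_ml=20.0, aliquot_ml=1.0):
    """Cumulative dexP released (µg) at each time point, correcting for the
    drug removed with previous aliquots. aliquot_ml is an assumption."""
    released, withdrawn = [], 0.0
    for c in concs_ug_ml:
        released.append(c * vessel_ml + withdrawn)
        withdrawn += c * aliquot_ml  # drug leaving the vessel with this sample
    return released

# HPLC concentrations at 1, 2, 6, 23, 24, 26 and 27 h (illustrative values):
print(cumulative_release([0.8, 1.1, 1.6, 2.0, 2.4, 2.6, 2.7]))
```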
292 Ragging and Sludging Measurement in Membrane Bioreactors
Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd
Abstract:
Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact on costs more significantly than membrane surface fouling which, unlike clogging, is largely mitigated by the chemical clean. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and ragging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can be formed within 24-36 hours from dispersed <5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred for both a cotton wool standard and samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations formed zero rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat-sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. The bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ). The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the pressure incline rate against flux via the flux-step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux from classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins, the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contribution of fouling and clogging was appraised by adjusting the clogging propensity via increasing the MLSS, both with and without a commensurate increase in the COD. Results indicated that whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling. Keywords: clogging, membrane bioreactors, ragging, sludge
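Quantifying sludging as the ratio of clogged to unclogged channel regions reduces to binarizing the channel photograph and counting pixels. A minimal sketch with OpenCV follows; the Gaussian pre-blur and the fixed intensity threshold are our assumptions, as the paper does not describe the exact image-processing steps:

```python
import cv2
import numpy as np

def clogged_fraction(image_path: str, threshold: int = 80) -> float:
    """Fraction of the channel image occupied by sludge solids.
    Dark pixels (below `threshold`) are taken as clogged; the fixed
    threshold is an assumption for illustration."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)  # suppress sensor noise
    clogged = blurred < threshold               # boolean mask of dark regions
    return float(np.count_nonzero(clogged)) / clogged.size

# fraction = clogged_fraction("channel_photo.png")
# ratio of clogged to unclogged regions = fraction / (1 - fraction)
```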
291 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
In this work, the wideband performance under mesh discretization impoverishment of the Conformal Finite-Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey and Wenhua Yu for the Finite-Difference Time-Domain (FDTD) method is analyzed. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Defined in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement brought by their application has not been quantified broadly for wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, making it possible to exploit poorly discretized meshes, which reduces simulation time and computational expense while retaining a desired accuracy. However, their application presents limitations regarding the degree of mesh impoverishment and the frequency range desired. Therefore, the goal of this work is to examine both the wideband and the mesh impoverishment performance of these approaches, to bring a wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach consists of modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By taking the intersections into account, D-FDTD-Diel provides an accuracy improvement at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists of modifying the magnetic field update, taking into account the PEC curved-surface intersections within the mesh cells and, considering a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Like D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to a PEC and a dielectric spherical scattering surface with meshes presenting different levels of discretization, with polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing the approaches’ wideband performance drop as the mesh is impoverished. The benefits in computational efficiency, simulation time and accuracy are also shown and discussed according to the frequency range desired, showing that poorly discretized mesh FDTD simulations can be exploited more efficiently while retaining the desired accuracy. The results obtained provide a broader insight into the limitations of the C-FDTD approaches in poorly discretized and wide-frequency-band simulations for dielectric and PEC curved surfaces, limitations which are not clearly defined or detailed in the literature and are, therefore, a novelty. These approaches are also expected to be applied in the modeling of curved RF components for wideband and high-speed communication devices in future works. Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
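To make the D-FDTD-PEC idea concrete, the sketch below shows a schematic 2D conformal update of Hz in which each cell carries the fraction of its face area lying outside the PEC and the uncovered fractions of its edges. It is a generic Dey-Mittra-style rendering under our own array conventions, not the authors’ implementation, and it ignores the stability-criterion handling mentioned above:

```python
import numpy as np

def update_hz_conformal(hz, ex, ey, area_frac, lx, ly, dt, dx, dy,
                        mu0=4e-7 * np.pi):
    """One conformal magnetic-field update step (2D TE case, uniform grid).
    Shapes: hz, area_frac (nx, ny); ex, lx (nx, ny + 1); ey, ly (nx + 1, ny).
    area_frac[i, j]: fraction of the cell face lying outside the PEC.
    lx, ly: fractions of each x- and y-edge not covered by metal."""
    nx, ny = hz.shape
    for i in range(nx):
        for j in range(ny):
            a = area_frac[i, j]
            if a <= 0.0:
                continue  # cell entirely inside the conductor: field stays zero
            # Line integral of E along the uncovered parts of the cell contour
            circ = (ex[i, j] * lx[i, j] * dx - ex[i, j + 1] * lx[i, j + 1] * dx
                    + ey[i + 1, j] * ly[i + 1, j] * dy - ey[i, j] * ly[i, j] * dy)
            # Faraday's law applied over the reduced (free) cell area a*dx*dy
            hz[i, j] -= dt / (mu0 * a * dx * dy) * circ
    return hz
```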
290 Absenteeism in Polytechnical University Studies: Quantification and Identification of the Causes at Universitat Politècnica de Catalunya
Authors: E. Mas de les Valls, M. Castells-Sanabra, R. Capdevila, N. Pla, Rosa M. Fernandez-Canti, V. de Medina, A. Mujal, C. Barahona, E. Velo, M. Vigo, M. A. Santos, T. Soto
Abstract:
Absenteeism in universities, including polytechnical universities, is influenced by a variety of factors. Some factors overlap with those causing absenteeism in schools, while others are specific to the university and work-related environments. Indeed, these factors may stem from various sources, including students, educators, the institution itself, or even the alignment of degree curricula with professional requirements. In Spain, there has been an increase in absenteeism in polytechnical university studies, especially after the Covid crisis, posing a significant challenge for institutions to address. This study focuses on Universitat Politècnica de Catalunya · BarcelonaTech (UPC) and aims to quantify the current level of absenteeism and identify its main causes. The study is part of the teaching innovation project ASAP-UPC, which aims to minimize absenteeism through the redesign of teaching methodologies. By understanding the factors contributing to absenteeism, the study seeks to inform the subsequent phases of the ASAP-UPC project, which involve implementing methodologies to minimize absenteeism and evaluating their effectiveness. The study utilizes surveys conducted among students and polytechnical companies. Students’ perspectives are gathered through both online surveys and in-person interviews. The surveys inquire about students’ interest in attending classes, skill development throughout their UPC experience, and their perception of the skills required for a career in a polytechnical field. Additionally, polytechnical companies are surveyed regarding the skills they seek in prospective employees. The collected data is then analyzed to identify patterns and trends. This analysis involves organizing and categorizing the data, identifying common themes, and drawing conclusions based on the findings. This mixed-method approach has revealed that higher levels of absenteeism are observed in large student groups at both the Bachelor’s and Master’s degree levels. However, the main causes of absenteeism differ between these two levels. At the Bachelor’s level, many students express dissatisfaction with in-person classes, perceiving them as overly theoretical, lacking a balance between theory, experimental practice and problem-solving components, and lacking relevance to professional needs. Consequently, they resort to using online materials developed during the Covid crisis and attending private academies for exam preparation instead. At the Master’s level, absenteeism primarily arises from schedule incompatibility between university and professional work, together with a discrepancy between the skills highly valued by companies and the skills emphasized during the studies, which aligns partially with students’ perceptions. These findings are of theoretical importance as they shed light on areas that can be improved to offer a more beneficial educational experience to students at UPC. The study also has potential applicability to other polytechnic universities, allowing them to adapt the surveys and apply the findings to their specific contexts. By addressing the identified causes of absenteeism, universities can enhance the educational experience and better prepare students for successful careers in polytechnical fields. Keywords: absenteeism, polytechnical studies, professional skills, university challenges
289 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection
Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten
Abstract:
Purpose: Applying solder paste to printed circuit boards (PCBs) with stencils has been the method of choice over the past years. A newer method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge, which can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process by immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips were employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens and a monochromatic camera was set up to acquire higher-quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds by analyzing the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. The overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis. Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection
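The processing chain described in the Methods (background estimation by a large morphological opening, subtraction, thresholding, a cleanup opening, and Dice-coefficient scoring) maps directly onto standard OpenCV primitives. The condensed sketch below omits the sharpening step; the kernel sizes and the use of Otsu’s method, standing in for the three-PDF histogram-intersection thresholds, are our assumptions:

```python
import cv2
import numpy as np

def segment_solder_joints(gray: np.ndarray) -> np.ndarray:
    """Binary mask of solder joints from a grayscale PCB image."""
    # Estimate the slowly varying background with a large morphological
    # opening and subtract it to correct non-uniform illumination.
    kernel_bg = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
    background = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel_bg)
    corrected = cv2.subtract(gray, background)
    # Threshold (Otsu here stands in for the histogram-intersection method).
    _, mask = cv2.threshold(corrected, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Remove the small error areas left by residual edge gradients.
    kernel_clean = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel_clean)

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|)."""
    a, b = pred > 0, truth > 0
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```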
288 Mobile App versus Website: A Comparative Eye-Tracking Case Study of Topshop
Authors: Zofija Tupikovskaja-Omovie, David Tyler, Sam Dhanapala, Steve Hayes
Abstract:
The UK is leading in online retail and mobile adoption. However, there is a dearth of information relating to mobile apparel retail, and developing an understanding of consumer browsing and purchase behavior in the m-retail channel would provide apparel marketers, mobile website and app developers with the necessary understanding of consumers’ needs. Despite the rapid growth of mobile retail businesses, no published study has examined shopping behaviour on fashion mobile websites and apps. A mixed-method approach helped to understand why fashion consumers prefer websites on mobile devices when mobile apps are also available. The following research methods were employed: survey, eye-tracking experiments, observation, and interview with retrospective think-aloud. The mobile gaze-tracking device by SensoMotoric Instruments was used to understand frustrations in navigation and other issues facing consumers in the mobile channel. This method helped to validate and complement other traditional user-testing approaches in order to optimize the user experience and enhance the development of the mobile retail channel. The study involved eight participants: females aged 18 to 35 who are existing mobile shoppers. The participants used the Topshop mobile app and website on a smartphone to complete a task according to a specified scenario leading to a purchase. The comparative study was based on: duration and time spent at different stages of the shopping journey, number of steps involved and product pages visited, search approaches used, layout and visual clues, as well as consumer perceptions and expectations. The results from the data analysis show significant differences in consumer behaviour when using a mobile app or website on a smartphone. Moreover, two types of problems were identified, namely technical issues and human errors. Having a mobile app does not guarantee success in satisfying mobile fashion consumers. The differences in layout and visual clues seem to influence the overall shopping experience on a smartphone. The layout of search results on the website was different from the mobile app; therefore, participants, in most cases, behaved differently on the two platforms. The number of product pages visited on the mobile app was triple the number visited on the website, due to the limited visibility of products in the search results. Although the data on traffic trends held by retailers to date, including retail-sector breakdowns for visits and views and data on device splits and duration, might seem a valuable source of information, it cannot explain why consumers visit many product pages, stay longer on the website or mobile app, or abandon the basket. A comprehensive list of pros and cons was developed highlighting issues for the website and mobile app, and recommendations are provided. The findings suggest that fashion retailers need to be aware of actual consumer behaviour on the mobile channel and of consumers’ expectations in order to offer a seamless shopping experience, added to which is the challenge of retaining existing and acquiring new customers. There seem to be differences in the way fashion consumers search and shop on mobile, which need to be explored in further studies. Keywords: consumer behavior, eye-tracking technology, fashion retail, mobile app, m-retail, smart phones, topshop, user experience, website
287 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base
Authors: Joanna Wielochowska, Aneta Wielochowska
Abstract:
Whilst the relationship between scientific areas such as cognitive psychology, neurobiology and philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The research has been conducted on the subject of the two authors, monozygotic twins (both polysynaesthetes) experiencing involuntary associations of identical nature. The authors attempted to identify which cognitive and conceptual dependencies may guide this experience. Operating on self-introduced nomenclature, the described phenomenon, multi-dimensional processing of textual and visual information, aims to define a relationship that involuntarily and immediately couples content introduced by means of text or image with a sensation of appearing in a certain place in the mind’s eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), with the simultaneous sensation of being allocated to a given place in the mind’s eye. A place can then be defined as a cognitive representation of a certain concept. During the activity of processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, ‘observing’ a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked with an individual’s semantic knowledge base. A place can be defined as a representation of one’s unique idea of a given concept that has been established in their semantic knowledge base. A multi-level structure of selectivity of places in the mind’s eye, as a reaction to given information (one stimulus), draws comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorl arrangement characteristic of the components of some flower species were given as an illustrative example. A composition of petals that fan out from one single point and wrap around a stem inspired the idea that, just as in nature, in philosophy of mind there are patterns driven by the logic specific to a given phenomenon. The study intertwines terms perceived through the philosophical lens, such as the definition of meaning, the subjectivity of meaning, the mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence the way the human semantic knowledge base, and the processing of content in terms of distinguishing between information and meaning, are researched. Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia
286 Reviving the Past, Enhancing the Future: Preservation of Urban Heritage Connectivity as a Tool for Developing Liveability in Historical Cities in Jordan, Using Salt City as a Case Study
Authors: Sahar Yousef, Chantelle Niblock, Gul Kacmaz
Abstract:
Salt City, in the context of Jordan’s heritage landscape, is a significant case for exploring the interaction between the tangible and intangible qualities of liveable cities. Most Jordanian city centers, including those of Jerash, Salt, Irbid and Amman, are historical locations, and six of the country’s extraordinary sites have been designated UNESCO World Heritage Sites. Jordan is widely acknowledged as a developing country characterized by swift urbanization and unrestrained expansion, which exacerbate the challenges associated with the preservation of historic urban areas. The aim of this study is to conduct an examination and analysis of the existing condition of heritage connectivity within heritage city centers, including outdoor staircases, pedestrian pathways, footpaths and other public spaces. A case-study-style analysis of the urban core of As-Salt is the focus of this investigation. Salt City is widely acknowledged for its substantial tangible and intangible cultural heritage and has been designated ‘The Place of Tolerance and Urban Hospitality’ by UNESCO since 2021. Liveability in urban heritage, particularly in historic city centers, incorporates several factors that affect our well-being; its enhancement is a critical issue in contemporary society. The dynamic interaction between humans and historical materials, which serves as a vehicle for the expression of their identity and historical narrative, constitutes preservation that transcends simple conservation. This form of engagement enables people to appreciate the diversity of their heritage, recognising both their past and their planned futures. Heritage preservation is inextricably linked to a larger physical and emotional context; therefore, it is difficult to examine it in isolation. Urban environments, including roads, structures and other infrastructure, are undergoing unprecedented physical design and construction requirements. Concurrently, heritage reinforces a sense of affiliation with a particular location or space and unifies individuals with their ancestry, thereby defining their identity. However, a considerable body of research has focused on the conservation of heritage buildings in a fragmented manner, without considering their integration within a holistic urban context. Insufficient attention is given to the significance of the physical and social roles played by the heritage staircases and paths that serve as connectors between these valued historical buildings. Given that liveability is considered a complex, multi-dimensional matter, the research uses a consensus-based methodology. The discussion starts by making initial observations on the physical context and societal norms inside the urban center, while simultaneously establishing definitions of liveability and connectivity and examining the key criteria associated with these concepts. It then identifies the key elements that contribute to liveable connectivity within the framework of urban heritage in Jordanian city centers. Some of the outcomes that will be discussed in the presentation are: (1) there is not enough connectivity between heritage buildings, as can be seen, for example, between buildings in Jada and Qala'; (2) most of the outdoor spaces suffer from physical issues that hinder their use by the public, as in Salalem; and (3) existing activities in the city center are not well attended because of a lack of communication between the organisers and the citizens. Keywords: connectivity, Jordan, liveability, salt city, tangible and intangible heritage, urban heritage
285 Tip-Enhanced Raman Spectroscopy with Plasmonic Lens Focused Longitudinal Electric Field Excitation
Authors: Mingqian Zhang
Abstract:
Tip-enhanced Raman spectroscopy (TERS) is a scanning-probe technique for the investigation of individual objects and structured surfaces that provides a wealth of enhanced spectral information with nanoscale spatial resolution and high detection sensitivity. It has become a powerful and promising method for detecting chemical and physical information at the nanometer scale. The TERS technique uses a sharp metallic tip regulated in the near-field of a sample surface, which is illuminated with an incident beam meeting the wave-vector-matching excitation conditions. The local electric field, and, consequently, the Raman scattering from the sample in the vicinity of the tip apex are both greatly tip-enhanced, owing to the excitation of localized surface plasmons and the lightning-rod effect. Typically, a TERS setup is composed of a scanning probe microscope, excitation and collection optical configurations, and a Raman spectrometer. In the illumination configuration, an objective lens or a parabolic mirror is always used as the most important component, in order to focus the incident beam on the tip apex for excitation. In this research, a novel TERS setup was built by introducing a plasmonic lens into the excitation optics as a focusing device. A plasmonic lens with symmetry-breaking semi-annular slits corrugated on a gold film was designed for the purpose of generating concentrated sub-wavelength light spots with a strong longitudinal electric field. Compared to conventional far-field optical components, the designed plasmonic lens not only focuses an incident beam to a sub-wavelength light spot but also realizes a strong z-component that dominates the electric field illumination, which is ideal for the excitation of tip-enhancement. Therefore, using a plasmonic lens in the illumination configuration of TERS improves the detection sensitivity, both by reducing the far-field background and by effectively exciting the localized electric field enhancement. The FDTD method was employed to investigate the optical near-field distribution resulting from the light-nanostructure interaction, and the optical field distribution was characterized using a scattering-type scanning near-field optical microscope to demonstrate the focusing performance of the lens. The experimental result is in agreement with the theoretical calculation and verifies the focusing performance of the lens. The optical field distribution shows a bright elliptical spot in the lens center and several arc-like side lobes on both sides. After the focusing performance was experimentally verified, the designed plasmonic lens was used as the focusing component in the excitation configuration of the TERS setup to concentrate the incident energy and generate a longitudinal optical field. A collimated, linearly polarized laser beam, polarized along the x-axis, was incident from the bottom glass side on the plasmonic lens. The incident light focused by the plasmonic lens interacted with the silver-coated tip apex and locally enhanced the Raman signal of the sample. The scattered Raman signal was gathered by a parabolic mirror and detected with a Raman spectrometer. The plasmonic-lens-based setup was then employed to investigate carbon nanotubes, and a TERS experiment was performed. Experimental results indicate that the Raman signal is considerably enhanced, which shows that the novel TERS configuration is feasible and promising. Keywords: longitudinal electric field, plasmonics, raman spectroscopy, tip-enhancement
284 Modern Hybrid of Older Black Female Stereotypes in Hollywood Film
Authors: Frederick W. Gooding, Jr., Mark Beeman
Abstract:
Nearly a century ago, the groundbreaking 1915 film ‘The Birth of a Nation’ popularized the way Hollywood made movies with its avant-garde, feature-length style. The movie's subjugating and demeaning depictions of African American women (and men) reflected popular racist beliefs held during the time of slavery and the early Jim Crow era. Although much has changed in race relations over the past century, American sociologist Patricia Hill Collins theorizes that the disparaging images of African American women originating in the era of plantation slavery are adaptable and endure as controlling images today. In this context, a comparative analysis of the successful contemporary film ‘Bringing Down the House’, starring Queen Latifah, is relevant, as this 2003 film was designed purposely to defy and ridicule classic stereotypes of African American women. However, the film is still tied to the controlling images of the past, although in a modern hybrid form. Scholars of race and film have noted that the pervasive filmic imagery of the African American woman as the loyal mammy stereotype faded from the screen in the post-civil rights era in favor of more sexualized characters (i.e., the Jezebel trope). Analyzing scenes and dialogue through the lens of sociological and critical race theory, the troubling persistence of African American controlling images in film stubbornly emerges in a movie like ‘Bringing Down the House.’ Thus, these controlling images, like racism itself, can adapt to new social and economic conditions. Although the classic controlling images appeared in the first feature-length film focusing on race relations a century ago, ‘The Birth of a Nation,’ this black-and-white rendition of the mammy figure was later updated in 1939 with the classic hit ‘Gone with the Wind,’ in living color. These popular controlling images have loomed large in the minds of international audiences: ‘Gone with the Wind’ is still shown in American theaters, and experts at the British Film Institute in 2004 rated ‘Gone with the Wind’ as the number one movie of all time in UK movie history based upon the total number of actual viewings. Critical analysis of character patterns demonstrates that images that appear superficially benign contribute to a broader and quite persistent pattern of marginalization in the aggregate. This approach allows experts and viewers alike to detect the more subtle and sophisticated strands of racial discrimination that are ‘hidden in plain sight,’ despite a Hollywood output that appears more voluminous and diverse than it was three or four decades ago. Non-white or minority characters are likely to be subtly compromised or marginalized relative to white characters if and when seen within mainstream movies, rather than being subjected to obvious and offensive racist tropes. The hybrid form of the older Jezebel and Mammy stereotypes exhibited by lead actress Queen Latifah in ‘Bringing Down the House’ represents a more suave and sophisticated merging of past imagery deemed problematic both then and now. Keywords: African Americans, Hollywood film, hybrid, stereotypes
283 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels and submarines. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during the execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning and to incorporate the learning curve effect in project scheduling and materials procurement to improve project performance. Extant literature supports the existence of such learning in organizations. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity but also a decrease in the uncertainty of the activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships; on the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed, so the material requirement schedule of each ship differs from that of its predecessor. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with the project scheduling of long-duration projects for the manufacture of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs, while satisfying the budget constraints of the various stages of the project. The activity durations and item lead times are not crisp and are available in the form of probability distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides the ordering dates and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insight into when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when to practice distinct procurement for individual ships. Keywords: learning curve, materials management, shipbuilding, sister ships
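The abstract does not state which learning curve form is used; a common choice is Wright's model, in which the duration of the n-th repetition is T_n = T_1 * n^(log2 r) for a learning rate r. A minimal sketch of how such a curve compresses the schedules of successive sister ships, with illustrative parameter values:

```python
import math

def wright_duration(t_first: float, n: int, learning_rate: float = 0.9) -> float:
    """Duration of the n-th repetition under Wright's learning curve:
    T_n = T_1 * n ** log2(learning_rate). A 90% rate means each doubling
    of repetitions cuts the mean activity time by 10%."""
    return t_first * n ** math.log2(learning_rate)

# Compressed fabrication schedule across four sister ships (illustrative):
t1 = 120.0  # days for the activity on the first ship
for ship in range(1, 5):
    print(f"ship {ship}: {wright_duration(t1, ship):.1f} days")
# ship 1: 120.0, ship 2: 108.0, ship 3: 101.5, ship 4: 97.2 days
```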