Search results for: comparison of algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6957

1317 Reacting Numerical Simulation of Axisymmetric Trapped Vortex Combustors for Methane, Propane and Hydrogen

Authors: Heval Serhat Uluk, Sam M. Dakka, Kuldeep Singh, Richard Jefferson-Loveday

Abstract:

The aviation sector accounted for 3.8% of the total carbon footprint in 2017, a share expected to triple by 2050. New combustion approaches and fuel types are necessary to prevent this. This paper focuses on propane, methane, and hydrogen as replacements for kerosene and implements a trapped vortex combustor design to increase efficiency. Reacting simulations were conducted for an axisymmetric trapped vortex combustor to investigate the static pressure drop, combustion efficiency and pattern factor for cavity aspect ratios of 0.3, 0.6 and 1 and air inlet velocities of 14 m/s, 28 m/s and 42 m/s. Propane, methane and hydrogen were used as alternative fuels. The combustion model was anchored on a swirl flame configuration, with an emphasis on high-fidelity boundary conditions; implementation of the eddy dissipation model gave favorable results. The Reynolds-Averaged Navier-Stokes (RANS) k-ε model was used for turbulence modelling in the validation effort. A grid independence study was conducted for the three-dimensional model to reduce computational time. Preliminary results for a 24 m/s air inlet velocity produced a temperature profile inside the cavity close to that of the experimental study. The investigation examines the effect of air inlet velocity and cavity aspect ratio on the combustion efficiency, pattern factor and static pressure drop in the combustor. A comparison among pure methane, propane and hydrogen is conducted to investigate their suitability for trapped vortex combustors and to establish their advantages and disadvantages as fuel replacements. The study is therefore intended as a milestone toward achieving the 2050 target of zero, or substantially reduced, carbon emissions.
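
As a concrete reading of two of the metrics above: the abstract does not give its working definitions, so the sketch below assumes the conventional gas-turbine formulas, with pattern factor PF = (T_max − T_mean,exit)/(T_mean,exit − T_inlet) and the static pressure drop expressed as a percentage of inlet pressure; all numbers are illustrative, not values from the paper.

```python
# Hedged sketch: pattern factor and static pressure drop from exit-plane data.
# The abstract does not define these metrics; the formulas below are the
# conventional gas-turbine definitions, and all numbers are illustrative.
import numpy as np

def pattern_factor(t_exit: np.ndarray, t_inlet: float) -> float:
    """PF = (T_max - T_mean_exit) / (T_mean_exit - T_inlet)."""
    t_mean = t_exit.mean()
    return (t_exit.max() - t_mean) / (t_mean - t_inlet)

def static_pressure_drop(p_in: float, p_out: float) -> float:
    """Relative static pressure drop across the combustor, in percent."""
    return 100.0 * (p_in - p_out) / p_in

# Illustrative values only (not from the paper).
t_exit = np.array([1450.0, 1520.0, 1610.0, 1490.0])  # exit temperatures, K
print(pattern_factor(t_exit, t_inlet=300.0))
print(static_pressure_drop(p_in=101325.0, p_out=98300.0))
```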

Keywords: computational fluid dynamics, aerodynamic, aerospace, propulsion, trapped vortex combustor

Procedia PDF Downloads 88
1316 Study of the Adsorptive Properties of Zeolites X Exchanged with the Cations Cu2+ and/or Zn2+

Authors: H. Hammoudi, S. Bendenia, I. Batonneau-Gener, A. Khelifa

Abstract:

The growing use of zeolites stems from their intrinsic physicochemical properties: a regular porous structure generating a large free volume, a high specific surface area, acidic properties at the origin of their activity, and energetic and dimensional selectivity leading to a sieving phenomenon, hence the name 'molecular sieves' generally attributed to them. Most of the special properties of zeolites have found direct applications such as ion exchange, adsorption, separation and catalysis. Owing to their stable crystalline structure, large pore volume and high cation content, X zeolites are widely used in adsorption and separation processes. The acidic properties of X zeolites and the interesting selectivity conferred by their porous structure also make them potential catalysts. The study presented in this manuscript is devoted to the chemical modification of an X zeolite by cation exchange. Ion exchange of zeolite NaX with Zn2+ and/or Cu2+ cations was conducted gradually, following the evolution of some of its characteristics: crystallinity by XRD and micropore volume by nitrogen adsorption. Once characterized, the different samples were used for the adsorption of propane and propylene. Particular attention is then paid to the modeling of the adsorption isotherms. In this vein, various localized and mobile adsorption isotherm equations, some taking adsorbate-adsorbate interactions into account, are used to describe the experimental isotherms. We also used the Toth equation, a mathematical model with three parameters whose adjustment requires nonlinear regression. The last part is dedicated to the study of the acid properties of Cu(x)X, Zn(x)X and CuZn(x)X by adsorption-desorption of pyridine followed by IR spectroscopy. The effect of substituting Na+ with Cu2+ and/or Zn2+ cations at different rates on the crystallinity and on the textural properties is treated. Some results on the morphology of the crystallites and on the thermal effects during a temperature rise, obtained by scanning electron microscopy and by DTA-TGA thermal analysis, respectively, are also reported. The acidity of the different samples was also studied, and the nature and strength of each type of acidity are estimated. The evaluation of these various features provides a comparison between Cu(x)X, Zn(x)X and CuZn(x)X. A study of the adsorption of C3H8 and C3H6 on NaX, Cu(x)X, Zn(x)X and CuZn(x)X was also undertaken.
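
Since the Toth model is named explicitly, a minimal sketch of its three-parameter nonlinear fit is shown below, assuming the common form q(P) = q_m·b·P/(1 + (b·P)^t)^(1/t); the pressure-loading data are illustrative, not the paper's.

```python
# Hedged sketch: fitting the three-parameter Toth isotherm mentioned in the
# abstract, q(P) = q_m * b * P / (1 + (b*P)**t)**(1/t), by nonlinear regression.
# The pressure/loading data below are illustrative, not from the paper.
import numpy as np
from scipy.optimize import curve_fit

def toth(p, q_m, b, t):
    return q_m * b * p / (1.0 + (b * p) ** t) ** (1.0 / t)

p = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])        # pressure, bar
q = np.array([0.45, 0.80, 1.25, 1.90, 2.30, 2.60, 2.85])   # loading, mmol/g

# Initial guesses: saturation capacity, affinity, heterogeneity exponent.
popt, _ = curve_fit(toth, p, q, p0=[3.0, 2.0, 0.7], maxfev=10000)
q_m, b, t = popt
print(f"q_m={q_m:.3f} mmol/g, b={b:.3f} 1/bar, t={t:.3f}")
```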

Keywords: adsorption, acidity, ion exchange, zeolite

Procedia PDF Downloads 195
1315 Effect of Helical Flow on Separation Delay in the Aortic Arch for Different Mechanical Heart Valve Prostheses by Time-Resolved Particle Image Velocimetry

Authors: Qianhui Li, Christoph H. Bruecker

Abstract:

Atherosclerotic plaques are typically found where flow separation and variations of shear stress occur. Although helical flow patterns and flow separations have both been recorded in the aorta, their relationship has not been clearly established, especially in the presence of artificial heart valve prostheses. Therefore, an experimental study was performed to investigate the hemodynamic performance of different mechanical heart valves (MHVs), i.e. the SJM Regent bileaflet mechanical heart valve (BMHV) and the Lapeyre-Triflo FURTIVA trileaflet mechanical heart valve (TMHV), in a transparent model of the human aorta under a physiological pulsatile right-hand helical flow condition. A typical systolic flow profile is applied in the pulse duplicator to generate a physiological pulsatile flow, which then passes an axial turbine blade structure to imitate the right-hand helical flow induced in the left ventricle. High-speed particle image velocimetry (PIV) measurements are used to map the flow evolution. A circular open orifice nozzle inserted in the valve plane initially replaces the valve under investigation as the reference configuration, to understand the hemodynamic effects of the incoming helical flow structure on the flow evolution in the aortic arch. Flow field analysis of the open orifice nozzle configuration shows that the helical flow effectively delays flow separation at the inner radius wall of the aortic arch. The comparison of the flow evolution for the different MHVs shows that the BMHV acts like a flow straightener, re-configuring the helical flow pattern into three parallel jets (two side-orifice jets and the central orifice jet), while the TMHV preserves the helical flow structure and therefore prevents flow separation at the inner radius wall of the aortic arch. The TMHV thus offers better hemodynamic performance and reduces pressure loss.

Keywords: flow separation, helical aortic flow, mechanical heart valve, particle image velocimetry

Procedia PDF Downloads 173
1314 Effects of Spirulina Platensis Powder on Nutrition Value, Sensory and Physical Properties of Four Different Food Products

Authors: Yazdan Moradi

Abstract:

Spirulina platensis is a blue-green microalga with a unique nutrient content and many nutritional and therapeutic effects, and it is used to enrich various foods. The purpose of this research was to investigate the effect of Spirulina platensis microalgae on the nutritional value and the sensory and physical properties of four different cereal-based products. For this purpose, dry spirulina powder at 0.25%, 0.5%, 0.75%, and 1% was added to the formulations of pasta, bulk bread, layered sweets, and cupcakes. A sample of each product without microalgae powder served as a control. The results showed that adding spirulina powder to the selected foods significantly changed the nutritional value and the sensory and physical characteristics. Compared to the controls, protein increased in the samples containing spirulina powder, by about 1, 0.6, 1.2 and 1.1 percent in bread, cake, layered sweets and pasta, respectively. The iron content of the samples containing spirulina also increased, by 0.6, 2, 5 and 18 percent in bread, cake, layered sweets and pasta, respectively. Sensory evaluation showed that all products had an acceptable acceptance score. Instrumental analysis of the L*, a*, and b* color indices showed that increasing spirulina imparted a green color to the treatments, and this color change was more pronounced in the bread and pasta samples. Texture analysis showed that adding spirulina to the selected food products reduced the hardness of the samples. No significant differences in fat content were observed between the spirulina samples and the controls; however, a trace amount of EPA was found among the fatty acids of samples containing 1% spirulina. Adding spirulina powder to the food formulations also changed the amino acid profile, especially the essential amino acids: an increase in histidine, isoleucine, leucine, tryptophan, and valine was observed in the samples containing spirulina.

Keywords: spirulina, nutrition, algae, iron, food

Procedia PDF Downloads 33
1313 Characterization of the Ignitability and Flame Regression Behaviour of Flame Retarded Natural Fibre Composite Panel

Authors: Timine Suoware, Sylvester Edelugo, Charles Amgbari

Abstract:

Natural fibre composites (NFC) are becoming very attractive, especially for automotive interior and non-structural building applications, because they are biodegradable, low cost, lightweight and environmentally friendly. NFC are known to release highly combustible products when exposed to heat, and this behaviour has raised concerns among end users. To improve their fire response, flame retardants (FR) such as aluminium tri-hydroxide (ATH) and ammonium polyphosphate (APP) are incorporated during processing to delay the start and spread of fire. In this paper, APP was modified with Gum Arabic powder (GAP) and synergized with carbon black (CB) to form new FR species. Four FR species at 0, 12, 15 and 18% loading ratios were added to oil palm fibre polyester composite (OPFC) panels as follows: OPFC12%APP-GAP, OPFC15%APP-GAP/CB, OPFC18%ATH/APP-GAP and OPFC18%ATH/APPGAP/CB. The panels were produced using hand lay-up compression moulding and cured at room temperature. Specimens were cut from the panels and tested for ignition time (Tig), peak heat release rate (HRRp), average heat release rate (HRRavg), peak mass loss rate (MLRp), residual mass (Rm) and average smoke production rate (SPRavg) using a cone calorimeter, as well as for the available flame energy (ɸ) driving the flame using a radiant panel flame spread apparatus. The ignitability data obtained at 50 kW/m2 heat flux (HF) show that the hybrid FR modified with APP, OPFC18%ATH/APP-GAP, exhibited superior flame retardancy, with Tig = 20 s, HRRp = 86.6 kW/m2, HRRavg = 55.8 kW/m2, MLRp = 0.131 g/s, Rm = 54.6% and SPRavg = 0.05 m2/s, representing improvements of 17.6%, 67.4%, 62.8%, 50.9%, 565% and 62.5%, respectively, relative to the panel without FR (OPFC0%). In terms of flame spread, the lowest flame energy (ɸ), 0.49 kW2/s3 for OPFC18%ATH/APP-GAP, caused early flame regression and is far below the 39.6 kW2/s3 of the panel without FR (OPFC0%). It can be concluded that hybrid FR modified with APP could be useful in the automotive and building industries to delay the start and spread of fire.

Keywords: flame retardant, flame regression, oil palm fibre, composite panel

Procedia PDF Downloads 127
1312 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located on several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model for each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard mini-max criterion, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
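
The abstract does not spell out its heuristics, so the sketch below uses a standard first-fit-decreasing approximation for the first-echelon bin-packing formulation (product workloads packed into capacity-limited workstations); it is a stand-in for illustration, not the authors' algorithm.

```python
# Hedged sketch of the first-echelon problem: choose the number of
# workstations ("bins") needed to host product workloads subject to a
# per-station picker capacity. The paper's own heuristics are not given in
# the abstract; this is the standard first-fit-decreasing approximation.
def first_fit_decreasing(workloads, capacity):
    """Return workstation assignments as a list of lists of workloads."""
    stations = []   # each entry: [remaining_capacity, [workloads...]]
    for w in sorted(workloads, reverse=True):
        if w > capacity:
            raise ValueError(f"workload {w} exceeds station capacity")
        for s in stations:
            if s[0] >= w:               # fits in an already-open station
                s[0] -= w
                s[1].append(w)
                break
        else:                           # open a new station
            stations.append([capacity - w, [w]])
    return [s[1] for s in stations]

# Illustrative data: product picking workloads and a station capacity of 10.
assignments = first_fit_decreasing([4, 8, 1, 4, 2, 1, 7, 3], capacity=10)
print(len(assignments), "workstations:", assignments)
```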

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 226
1311 A Theoretical Approach on Electoral Competition, Lobby Formation and Equilibrium Policy Platforms

Authors: Deepti Kohli, Meeta Keswani Mehra

Abstract:

The paper develops a theoretical model of electoral competition with purely opportunistic candidates and a uni-dimensional policy, using the probabilistic voting approach, while focusing on the aspect of lobby formation to analyze the inherently complex interactions between centripetal and centrifugal forces and their effects on equilibrium policy platforms. There exist three types of agents, namely Left-wing, Moderate and Right-wing, who comprise the total voting population. It is also assumed that the Left and Right agents are free to initiate a lobby of their choice. If initiated, these lobbies generate donations, which in turn can be contributed to one (or both) electoral candidates in order to influence them to implement the lobby's preferred policy. Four different lobby formation scenarios have been considered: no lobby formation, only Left, only Right, and both Left and Right. The equilibrium policy platforms, the amount of individual donations by agents to their respective lobbies, and the contributions offered to the electoral candidates have been solved for under each of these four cases. Since it is assumed that the agents cannot coordinate their actions during the lobby formation stage, there exists a probability with which a lobby is formed, which is also solved for in the model. The results indicate that the policy platforms of the two electoral candidates converge completely in the cases of no lobby and both (extreme) lobbies forming, but diverge in the cases of only one (Left or Right) lobby forming. This is because in the case of no lobby being formed, only the centripetal forces (emerging from the election-winning motive) are present, while in the case of both extreme (Left-wing and Right-wing) lobbies being formed, centrifugal forces (emerging from the lobby formation motive) also arise but cancel each other out, again resulting in pure policy convergence. In contrast, when only one lobby is formed, centripetal and centrifugal forces interact strategically, leading the two electoral candidates to choose completely different policy platforms in equilibrium. Additionally, it is found that in equilibrium, while the donation by a specific agent type increases when both lobbies form in comparison to when only one lobby forms, the probability of implementation of the policy advocated by that lobby group falls.

Keywords: electoral competition, equilibrium policy platforms, lobby formation, opportunistic candidates

Procedia PDF Downloads 329
1310 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the 'Internet of Things (IoT)' concept has brought sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instructions are exchanged; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data are far too large for traditional applications to send, store or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network. For example, in the water level monitoring system, a weather forecast can be extracted from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate, or conversely switch on sleep mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors as a case study, a vision of the next generation of smart sensor networks is proposed, and each key component is discussed, which we hope will inspire researchers working in the sensor domain.
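
To make the first two of the three sensing methods concrete, here is a minimal sketch of on-board thresholding plus a simple adaptive (statistical) variant; the threshold values, readings and alert labels are illustrative assumptions, and the MoPBAS method itself is not reproduced.

```python
# Hedged sketch of the simplest of the three smart-sensing methods named in
# the abstract: on-board thresholding of river level readings, plus a
# statistical variant that adapts to the node's own recent history.
# Threshold values and readings are illustrative assumptions.
from statistics import mean, stdev

WARN_LEVEL_M = 1.5    # assumed warning threshold, metres
ALARM_LEVEL_M = 2.0   # assumed alarm threshold, metres

def classify(reading_m: float) -> str:
    if reading_m >= ALARM_LEVEL_M:
        return "ALARM"
    if reading_m >= WARN_LEVEL_M:
        return "WARNING"
    return "NORMAL"

def adaptive_threshold(history, k=3.0):
    """Statistical variant: flag readings k standard deviations above the
    recent mean, so the node adapts to its own site conditions."""
    return mean(history) + k * stdev(history)

readings = [0.82, 0.85, 0.88, 1.61, 2.05]  # illustrative levels, metres
for r in readings:
    print(r, classify(r))
print("adaptive threshold:", round(adaptive_threshold(readings[:3]), 3))
```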

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 380
1309 Enhanced Poly Fluoroalkyl Substances Degradation in Complex Wastewater Using Modified Continuous Flow Nonthermal Plasma Reactor

Authors: Narasamma Nippatlapallia

Abstract:

Communities across the world are desperate to free their environment of toxic per- and polyfluoroalkyl substances (PFAS), especially when these chemicals are in aqueous media. In the present study, two PFAS of different chain lengths, PFHxA (C6) and PFDA (C10), were selected for degradation using a modified continuous-flow nonthermal plasma (NTP) reactor. The results showed degradation efficiencies of 82.3% for PFHxA and 94.1% for PFDA. The defluorination efficiency was also evaluated: 28% for PFHxA and 34% for PFDA. The results clearly indicate that the structure of a PFAS has a great impact on its degradation efficiency. The effect of flow rate was also studied: beyond 2 mL/min, an increase in flow rate decreased the degradation efficiency of the targeted PFAS. PFDA degradation decreased from 85% to 42%, and PFHxA degradation from 64% to 32%, as the flow rate increased from 2 to 5 mL/min. Similarly, the percentage defluorination decreased for both the C10 and C6 compounds with increasing flow rate. This observation can be attributed mainly to the change in residence time (contact time). Real water/wastewater contains various organic and inorganic ions that may affect the activity of oxidative species such as OH• radicals on the target pollutants; it is therefore important to consider radical-quenching chemicals to understand the efficiency of the reactor. In gas-liquid NTP discharge reactors, OH•, e-(aq), O•, O3, H2O2 and H• are often considered the reactive species for oxidation and reduction of pollutants. In this work, the effect of two distinct OH• scavengers, ethanol and glycerol, on PFAS percentage degradation and defluorination efficiency (i.e., fluorine removal) was studied. The addition of scavenging agents to the PFAS solution diminished PFAS degradation to different extents depending on the molecular structure of the target compound. In comparison with the degradation of the PFAS-only solution, the addition of 1.25 M ethanol inhibited C10 and C6 degradation by 8% and 12%, respectively. This research was supported by analyses of energy efficiency, production rate, specific yield, and fluoride and PFAS concentrations with respect to the optimum hydraulic retention time (HRT) of the continuous-flow reactor.
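
The residence-time explanation lends itself to a quick worked example: at fixed contact volume, raising the flow rate shortens τ = V/Q and, under pseudo-first-order kinetics, lowers the degraded fraction. The reactor volume and rate constant below are assumptions for illustration, not values from the study.

```python
# Hedged sketch of the residence-time argument in the abstract: at fixed
# reactor volume, a higher flow rate means a shorter contact time and thus
# lower degradation. First-order kinetics, the reactor volume and the rate
# constant are all illustrative assumptions, not values from the paper.
import math

REACTOR_VOLUME_ML = 10.0   # assumed plasma-contact volume, mL
K_PER_MIN = 0.35           # assumed pseudo-first-order rate constant, 1/min

def degradation_percent(flow_ml_min: float) -> float:
    tau = REACTOR_VOLUME_ML / flow_ml_min          # residence time, min
    return 100.0 * (1.0 - math.exp(-K_PER_MIN * tau))

for q in (2.0, 3.0, 5.0):
    print(f"{q} mL/min -> {degradation_percent(q):.1f}% degraded")
```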

Keywords: wastewater, PFAS, nonthermal plasma, mineralization, defluorination

Procedia PDF Downloads 28
1308 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Applications of AI in chest radiology: AI detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from non-tension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis and pneumoperitoneum. It automatically localises vertebral segments, labels ribs and detects rib fractures. It measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. It detects consolidation and progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping, and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region, which may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation, providing percentages of tissue within defined attenuation (HU) ranges in addition to automated lung segmentation and lung volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, as well as hyperdense or opacified regions, thereby aiding the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is assumed that the continuing progress of computerized systems and improvements in software algorithms will render AI the second hand of the radiologist.

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 71
1307 Team-Theatre as a Tool of Occupational Safety Awareness

Authors: Fiorenza Misale

Abstract:

The painful phenomenon of so-called white deaths and accidents at work is, unfortunately, ever current. The key is to act on the culture of safety through effective measures addressing attitudes and behaviors, going far beyond knowledge and know-how. It is necessary that there be an 'introjection' of safety culture through the conscious involvement of all workers. The legislation on work safety identifies training as the main tool to promote the culture of safety and prevention within the workplace. In law, the term 'education' is used to distinguish it from 'information', which simply transmits theory, and from 'training', which provides practical skills. The new decree in fact fills several gaps in previous legislation and stresses the importance of training in the workplace, that is, the main activity through which it is possible to achieve the active participation of all workers in the company's prevention system. This system is built only through the dissemination of risk information, the circulation of information, and comparison and dialogue between all actors involved, which are the necessary elements for a correct transmission of the culture of worker safety. Training activity should focus on work experience in order to bring out all the knowledge needed to identify and assess the risks in the workplace, and especially the actions needed to eliminate or control them, integrating the missing knowledge when necessary. In addition to traditional training and information systems, tools that engage people emotionally and aesthetically can be used for training; team-theatre is one of them. Among the methods of company theatre that can be used in work safety we have: the lesson show, the theatre workshop, improvised theatre, forum theatre, and playback theatre. Theatre can represent a complementary approach to traditional training and can convey information on safety measures, demonstrating that more engaging outreach tools exist. Team-theatre allows identification with the characters and a transmission of emotions and moods, and it is through the staging of a story that the individual processes new information. It is also a means of experiential training that allows one to work with mind, body and emotions. The aim of this work is the use of corporate theatre with personnel working in the health sector. Through a questionnaire, administered before and after the play, we analyze knowledge of occupational safety and current risks, in particular in health care.

Keywords: theater, training, occupational health, safety

Procedia PDF Downloads 271
1306 The Social Impact of Green Buildings

Authors: Elise Machline

Abstract:

Policy instruments have been developed worldwide to reduce the energy demand of buildings. Two such instruments have been green building rating systems and energy efficiency standards for buildings, such as Green Star (Australia), LEED (United States, Leadership in Energy and Environmental Design), Energy Star (United States), and BREEAM (United Kingdom, Building Research Establishment Environmental Assessment Method). The popularity of the idea of sustainable development has allowed actors to consider the potential value generated by the environmental performance of buildings, labeled 'green value' in the literature. The sustainable performance of buildings is expected to improve their attractiveness, increasing their value. A growing number of empirical studies demonstrate that green buildings yield rental/sale premia, as well as higher occupancy rates and thus higher asset values. The results suggest that green buildings are not affordable to all and that their construction tends to have a gentrifying effect. An increasing number of countries are institutionalizing green strategies for affordable housing; in that sense, making green buildings affordable to all will depend on government policies. This research aims to investigate whether green building fosters inequality in Israel under the banner of sustainability. The method is comparison of market value, which involves comparing the sale prices of green buildings with those of non-certified buildings of the same type that have undergone recent transactions; the 'market value' is deduced from those sources by analogy. The results show that, in Israel, green building projects are usually addressed to the middle and upper classes. The green apartment sale premium is about 19% (compared to non-certified dwellings). There is a link between energy and/or environmental performance and the financial value of dwellings; moreover, the price differential is much higher than the value of the energy savings. This perpetuates socio-spatial and socio-economic inequality, as well as ecological vulnerability for the poor and other socially marginal groups. Moreover, there is no green affordable housing, and the authorities do not subsidize green building or retrofitting.

Keywords: green building, gentrification, social housing, green value, green building certification

Procedia PDF Downloads 417
1305 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores, suggesting that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably due to having features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and marks a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
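
A minimal sketch of the two per-image measures, reconstruction error and latent distinctiveness, is given below. The study uses a VGG-based autoencoder fine-tuned on MemCat; the tiny stand-in autoencoder and random tensors here are assumptions that only keep the example self-contained.

```python
# Hedged sketch of the two image-level measures used in the study:
# per-image reconstruction error, and distinctiveness as the Euclidean
# distance to the nearest neighbour in latent space. The paper uses a
# VGG-based autoencoder pre-trained on ImageNet; the tiny autoencoder
# below is an illustrative stand-in.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 8 * 8, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = TinyAutoencoder().eval()
images = torch.rand(10, 3, 32, 32)          # illustrative stand-in images

with torch.no_grad():
    recon, z = model(images)
    # Per-image reconstruction error (MSE between original and output).
    errors = ((images - recon) ** 2).flatten(1).mean(dim=1)
    # Distinctiveness: distance to the nearest other latent vector.
    dists = torch.cdist(z, z)
    dists.fill_diagonal_(float("inf"))
    distinctiveness = dists.min(dim=1).values

print(errors)
print(distinctiveness)
```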

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 89
1304 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be customized into predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too wide-ranging to fit into a single function. So, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtaining the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc. are directly constrained by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to get maxima and minima for a given frequency band; the boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS is chosen for simulations and results. MATLAB is used to generate the computations and combinations and for data logging; it is also used to apply machine learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to test manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and MATLAB's parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this work, we have used MATLAB to calculate Vivaldi antenna parameters like slotline characteristic impedance, stripline impedance, slotline width, flare aperture size and dielectric properties; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and machine learning approach for automated antenna optimization for the Vivaldi antenna.
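
To make the loop concrete, here is a minimal genetic algorithm sketch over a few assumed Vivaldi geometry parameters. The paper scores candidates with HFSS runs driven from MATLAB; evaluate() below is a placeholder standing in for that simulation call, and the parameter bounds and operators are illustrative assumptions.

```python
# Hedged sketch of the evolutionary loop described in the abstract: a
# minimal genetic algorithm over Vivaldi geometry parameters. In the paper
# each candidate is scored by an HFSS simulation called from MATLAB;
# evaluate() is a placeholder for that call, and the bounds are assumed.
import random

BOUNDS = {                      # assumed parameter ranges (mm or mm/mm)
    "slotline_width": (0.2, 1.0),
    "flare_aperture": (20.0, 60.0),
    "taper_rate": (0.05, 0.5),
}

def random_candidate():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def evaluate(c):
    """Placeholder fitness; in the paper this would be an HFSS run
    returning e.g. gain or bandwidth for the candidate geometry."""
    return -(c["slotline_width"] - 0.5) ** 2 - (c["taper_rate"] - 0.2) ** 2

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(c, rate=0.2):
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            c[k] = random.uniform(lo, hi)
    return c

pop = [random_candidate() for _ in range(20)]
for gen in range(30):
    pop.sort(key=evaluate, reverse=True)
    parents = pop[:10]                       # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]
best = max(pop, key=evaluate)
print(best, evaluate(best))
```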

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 108
1303 Comparison of Bismuth-Based Nanoparticles as Radiosensitization Agents for Radiotherapy

Authors: Merfat Algethami, Anton Blencowe, Bryce Feltis, Stephen Best, Moshi Geso

Abstract:

Nanomaterials containing high atomic number atoms have been demonstrated to enhance the effective radiation dose and thus could potentially improve therapeutic efficacy in radiotherapy. Optimal nanoparticulate agents require high X-ray absorption coefficients and low toxicity, and should be cost-effective. The focus of our research is the development of a nanoparticle therapeutic agent that can be used in radiotherapy to provide optimal enhancement of the radiation effects on the target. In this study, we used bismuth (Bi) nanoparticles coated with starch and bismuth sulphide (Bi2S3) nanoparticles coated with polyvinylpyrrolidone (PVP). These NPs are of low toxicity and are among the least expensive heavy-metal-based nanoparticles. The aims of this study were to synthesise Bi2S3 and Bi NPs and to examine their cytotoxicity to human lung adenocarcinoma epithelial cells (A549). The dose-enhancing effects of the NPs on A549 cells were examined at both kV and MV energies. The preliminary results revealed that bismuth-based nanoparticles increase the radiosensitisation of cells, displaying dose enhancement at kV X-ray energies and, to a lesser degree, at MV energies. We also observed that Bi NPs generated a greater dose enhancement effect than Bi2S3 NPs in irradiated A549 cells. The maximum Dose Enhancement Factor (DEF) was obtained in the lower-energy kV range when cells were treated with Bi NPs (1.5), compared to a DEF of 1.2 when cells were treated with Bi2S3 NPs. Less dose enhancement was observed with the high-energy MV beam, with a DEF of 1.26 for Bi NP treatment compared to 1.06 for Bi2S3 NPs. The greater dose enhancement was achieved in the kV energy range due to the photoelectric effect, which is the dominant X-ray interaction process at these energies. The effect of Bi NPs on enhancing the X-ray dose was higher due to the greater amount of elemental bismuth present in Bi NPs compared to Bi2S3 NPs. The results suggest that bismuth-based NPs can be considered valuable dose-enhancing agents for clinical applications.

Keywords: A549 lung cancer cells, Bi2S3 nanoparticles, dose enhancement effect, radio-sensitising agents

Procedia PDF Downloads 270
1302 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action

Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere

Abstract:

Following in-house objectives, the Central Laboratory of the Paris Police Prefecture conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, the Central Laboratory of the Paris Police Prefecture (LCPP) postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. The CUTE dataset, from trials carried out in the city of Hamburg and on its wind tunnel mock-up, has been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for simulations is the transfer of obstacle geometry information to the format required by FDS; we have therefore developed Python codes to automatically convert building and topographic data to the FDS input file. In order to evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE) and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, FDS results demonstrate good agreement with measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE fall within the acceptable tolerances.
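
The three metrics have standard definitions in dispersion model evaluation, and a minimal sketch of computing them is shown below; the observed and predicted concentrations are illustrative, not values from the study.

```python
# Hedged sketch of the three validation metrics named in the abstract,
# using their standard air-quality definitions; the observed/predicted
# concentrations below are illustrative.
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2 * (mean(O) - mean(P)) / (mean(O) + mean(P))."""
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def nmse(obs, pred):
    """NMSE = mean((O - P)^2) / (mean(O) * mean(P))."""
    return ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of observations."""
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

obs = np.array([1.2, 0.8, 2.5, 3.1, 0.4])    # measured concentrations
pred = np.array([1.0, 1.1, 2.0, 3.5, 0.9])   # modelled concentrations
print(f"FB={fractional_bias(obs, pred):.3f}, "
      f"NMSE={nmse(obs, pred):.3f}, FAC2={fac2(obs, pred):.2f}")
```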

Keywords: numerical simulations, atmospheric dispersion, COST ES1006 action, CFD model, CUTE experiments, wind tunnel data, numerical results

Procedia PDF Downloads 131
1301 An Alternative Semi-Defined Larval Diet for Rearing of Sand Fly Species Phlebotomus argentipes in Laboratory

Authors: Faizan Hassan, Seema Kumari, V. P. Singh, Pradeep Das, Diwakar Singh Dinesh

Abstract:

Phlebotomus argentipes is an established vector of visceral leishmaniasis in the Indian subcontinent. Laboratory colonization of sand flies is imperative for vector research and requires a proper diet for larval and adult growth, which ultimately affects survival and fecundity. In most laboratories, adult sand flies are reared on rabbit blood or artificial blood feeding, and their larvae on finely ground rabbit faeces as the sole source of food. Rabbit faeces are unhygienic, difficult to handle and prone to heavy mite infestation, and their bad odour poses a menace to human users, ranging from respiratory problems to eye infections; most importantly, they do not fulfil all the nutrient requirements for proper growth and development. It is generally observed that adult emergence is very low in comparison to the number of eggs hatched, which may be due to insufficient food nutrients provided to the growing larvae. To check the role of food nutrients in larval survival and adult emergence, a protein-rich artificial diet for sand fly larvae was used in this study. The artificial diet tested comprises finely ground rice, peanuts and soybean balls (9 g each). These three food ingredients are rich sources of all essential amino acids, along with carbohydrates and minerals, which are essential for proper metabolism and growth. In this study, the artificial food was found significantly more effective for larval development and adult emergence than rabbit faeces alone (P value > 0.05). The weight of individual larvae was also found to be higher in the test pots than in the controls. This study suggests that protein plays an important role in insect larval development and that adding carbohydrate also enhances the fecundity of insect larvae.

Keywords: artificial food, nutrients, Phlebotomus argentipes, sand fly

Procedia PDF Downloads 304
1300 The Youth Employment Peculiarities in Post-Soviet Georgia

Authors: M. Lobzhanidze, N. Damenia

Abstract:

The article analyzes the current structural changes in the economy of Georgia and the liberalization and integration processes of the economy. In accordance with this analysis, the peculiarities and problems of youth employment are revealed. The paper studies the Georgian labor market and its contradictions. Based on the analysis of materials, the socio-economic losses caused by long-term and mass unemployment of young people are revealed, and the objective and subjective circumstances of obtaining higher education are studied. The youth employment and unemployment rates are analyzed, and the factors that increase unemployment are identified. The analysis of youth employment shows that the share of unemployment among the economically active population has increased in the younger age group, demonstrating the labor market's high demands on workforce quality. It is also highlighted that young people gravitate toward highly paid jobs. The following research methods are applied in the paper: statistical methods (selection, grouping, observation, trend analysis, etc.) and qualitative research (in-depth interviews), as well as analysis, induction and comparison. The article presents data from the National Statistics Office of Georgia and the Ministry of Agriculture of Georgia, policy documents of the Parliament of Georgia, scientific papers by Georgian and foreign scientists, analytical reports, publications, and EU research materials on similar issues. The work assesses the employment problems of students and graduates in light of the state development strategy and priorities, and defines measures to overcome the challenges. The article describes the mechanisms of state regulation of youth employment and ways of improving this regulatory base. As for major findings, the main problems are a lack of experience and the incompatibility of young people's qualifications with the requirements of the labor market. Accordingly, it is concluded that the unemployment rate of young people in Georgia is increasing.

Keywords: migration of youth, youth employment, migration management, youth employment and unemployment

Procedia PDF Downloads 148
1299 Higher Education and the Economy in Western Canada: Is Institutional Autonomy at Risk?

Authors: James Barmby

Abstract:

Canada's westernmost provinces, British Columbia and Alberta, are similar in many respects, as both rely on volatile natural resources for major portions of their economies. The two provinces have banded together to develop mutually beneficial trade, investment and labour market mobility rules, but in terms of developing systems of higher education, they are attempting to align higher education programs with economic development objectives by quite different means. In British Columbia, the recently announced initiative, B.C.'s Skills for Jobs Blueprint, will "make sure education and training programs are aligned with the demands of the labor market." Meanwhile, in Alberta, the province's institutions of higher education are enjoying the tenth year of their membership in the Campus Alberta Quality Council, which makes recommendations to government on issues related to post-secondary education, including the approval of new programs. In B.C., public institutions of higher education are encouraged to comply with government objectives and are rewarded with targeted funds for their efforts. In Alberta, the institutions as a system tell the government what programs they want to offer, and the government can agree or not agree to fund these programs through a ministerial approval process. In comparing the two higher education systems, the question emerges as to which one is more beneficial to the province: the one where change is directed primarily by financial incentives to achieve economic objectives, or the one where institutions make recommendations to the government for changes in programs to achieve institutional objectives? How is institutional autonomy affected by each strategy? Does institutional autonomy matter anymore? In recent years, much has been written about academic freedom, but less about institutional autonomy, which many see as essential to protecting academic freedom. However, while institutional autonomy means freedom from government control, it does not necessarily mean self-government. In this study, the two higher education systems are compared using recent government policy initiatives in both provinces and the responses to those actions by the higher education institutions. The findings indicate that in both provinces, economic needs take precedence over issues of institutional autonomy.

Keywords: Alberta, British Columbia, institutional autonomy, funding

Procedia PDF Downloads 701
1298 Molecular Interactions between Vicia Faba L. Cultivars and Plant Growth Promoting Rhizobacteria (PGPR), Utilized as Yield Enhancing 'Plant Probiotics'

Authors: Eleni Stefanidou, Nikolaos Katsenios, Ioanna Karamichali, Aspasia Efthimiadou, Panagiotis Madesis

Abstract:

The excessive use of pesticides and fertilizers has significant negative effects on the environment and human health. In the framework of developing sustainable agricultural practices, especially in the context of extreme environmental changes (climate change), it is important to develop alternative practices that increase productivity and biotic and abiotic stress tolerance. Beneficial bacteria, such as symbiotic bacteria in legumes (rhizobia) and symbiotic or free-living Plant Growth Promoting Rhizobacteria (PGPR), which could act as "plant probiotics", can promote plant growth and significantly increase the resistance of crops under adverse environmental conditions. In this study, we explored the symbiotic relationships between faba bean (Vicia faba L.) cultivars and different PGPR bacteria, aiming to identify the possible influence on yield and on biotic-abiotic phytoprotection benefits. Transcriptomic analysis of root and whole-plant samples was performed for two Vicia faba L. cultivars (Polikarpi and Solon) treated with selected PGPR bacteria (treatments: B. subtilis + Rhizobium mixture, A. chroococcum + Rhizobium mixture, B. subtilis, A. chroococcum, and Rhizobium mixture). Preliminary results indicate a significant increase in yield (seed weight and total number of pods) in both varieties, of around 25% in comparison to the control, especially for the Solon cultivar. The increase was observed for all treatments, with the B. subtilis + Rhizobium mixture treatment being the highest performing. Correlating the physiological and morphological data with the transcriptome analysis revealed molecular mechanisms and molecular targets underlying the observed yield increase, opening perspectives for the use of nitrogen-fixing bacteria as a natural, more ecological enhancer of legume crop productivity.

Keywords: plant probiotics, PGPR, legumes, sustainable agriculture

Procedia PDF Downloads 78
1297 Field Emission Scanning Microscope Image Analysis for Porosity Characterization of Autoclaved Aerated Concrete

Authors: Venuka Kuruwita Arachchige Don, Mohamed Shaheen, Chris Goodier

Abstract:

Autoclaved aerated concrete (AAC) is known for its light weight, easy handling, high thermal insulation, and extremely porous structure. Investigation of pore behavior in AAC is crucial for characterizing the material, standardizing design and production techniques, enhancing mechanical, durability, and thermal performance, studying the effectiveness of protective measures, and analyzing the effects of weather conditions. The significant details of pores are difficult to observe with acknowledged accuracy. High-resolution Field Emission Scanning Electron Microscope (FESEM) image analysis is a promising technique for investigating pore behavior and density in AAC, and it is adopted in this study. A mercury intrusion porosimeter and a gas pycnometer were employed to characterize porosity distribution and density parameters. The analysis considered three different densities of AAC blocks and three layers in the altitude direction within each block. A set of procedures is presented to extract and analyze the details of pore shape, pore size, pore connectivity, and pore percentages from FESEM images of AAC, and average pore behavior outcomes per unit area are reported. Comparison of porosity distribution and density parameters revealed significant variations. FESEM imaging offered unparalleled insights into porosity behavior, surpassing the capabilities of the other techniques. The multi-stage analysis provides the porosity percentage occupied by each pore category, the total porosity, the variation of pore distribution with AAC density and layer, the numbers of two-dimensional and three-dimensional pores, the variation of apparent and matrix densities with pore behavior, the variation of pore behavior with aluminum content, and the relationships among shape, diameter, connectivity, and percentage in each pore classification.
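
As a minimal sketch of the kind of image-analysis pass described, the snippet below thresholds an image, labels connected pore regions, and reports per-pore area, equivalent diameter and eccentricity with scikit-image; the synthetic "image" is an assumption that keeps the example self-contained, and the paper's full multi-stage pipeline is not reproduced.

```python
# Hedged sketch of a basic FESEM pore-segmentation pass with scikit-image:
# Otsu thresholding, connected-component labelling, then per-pore shape
# descriptors. The synthetic image merely keeps the example self-contained.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(0)
img = rng.normal(0.7, 0.05, (256, 256))   # bright matrix
img[60:90, 40:80] = 0.2                   # dark regions standing in for pores
img[150:170, 150:200] = 0.25

thresh = filters.threshold_otsu(img)
pores = img < thresh                      # pores appear dark in FESEM images
labels = measure.label(pores)             # connected-component labelling

total_porosity = 100.0 * pores.mean()
print(f"porosity: {total_porosity:.1f}% of image area")
for region in measure.regionprops(labels):
    print(f"pore {region.label}: area={region.area} px, "
          f"eq. diameter={region.equivalent_diameter:.1f} px, "
          f"eccentricity={region.eccentricity:.2f}")
```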

Keywords: autoclaved aerated concrete, density, imaging technique, microstructure, porosity behavior

Procedia PDF Downloads 66
1296 A Step Towards Circular Economy: Assessing the Efficacy of Ion Exchange Resins in the Recycling of Automotive Engine Coolants

Authors: George Madalin Danila, Mihaiella Cretu, Cristian Puscasu

Abstract:

The recycling of used antifreeze/coolant is a widely discussed and intricate issue. Complying with government regulations for the proper disposal of hazardous waste poses a significant challenge for today's automotive and industrial industries. In recent years, global focus has shifted toward Earth's fragile ecology, emphasizing the need to restore and preserve the natural environment, and the business and industrial sectors have undergone substantial changes to adapt and offer products tailored to these evolving markets. The global antifreeze market was valued at USD 5.4 billion in 2020 and is expected to reach USD 5.9 billion by 2025, owing to the increased number of vehicles worldwide as well as the growth of HVAC systems. This study presents the evaluation of an ion exchange resin-based installation designed for the recycling of engine coolants, specifically ethylene glycol (EG) and propylene glycol (PG). The recycling process aims to restore the coolant to meet the stringent ASTM standards for both new and recycled coolants. A combination of physical-chemical methods, gas chromatography-mass spectrometry (GC-MS), and inductively coupled plasma mass spectrometry (ICP-MS) was employed to analyze and validate the purity and performance of the recycled product. The experimental setup included performance tests, namely glassware corrosion and the foaming tendency of the coolant, to assess the efficacy of the recycled coolants against new coolant standards. The results demonstrate that the recycled EG coolants exhibit quality comparable to new coolants, with all critical parameters falling within the acceptable ASTM limits. This indicates that the ion exchange resin method is a viable and efficient solution for the recycling of engine coolants, offering an environmentally friendly alternative to the disposal of used coolants while ensuring compliance with industry standards.

Keywords: engine coolant, glycols, recycling, ion exchange resin, circular economy

Procedia PDF Downloads 42
1295 Military Leadership: Emotion Culture and Emotion Coping in Morally Stressful Situations

Authors: Sofia Nilsson, Alicia Ohlsson, Linda-Marie Lundqvist, Aida Alvinius, Peder Hyllengren, Gerry Larsson

Abstract:

In irregular warfare contexts, military personnel are often presented with morally ambiguous situations in which they are aware of the morally correct choice but may feel prevented from following through with it due to organizational demands. Moral stress and/or injury can be the outcome of the dissonance the individual experiences. These types of challenges place a large demand on individuals to manage their own emotions and the emotions of others, particularly in the case of a leader. Both the ability and the inability to regulate emotions can result in different combinations of short- and long-term reactions after morally stressful events, which can be either positive or negative. Our study analyzed the combination of these reactions based upon the types of morally challenging events described by the subjects: (1) What institutionalized norms concerning emotion regulation are favorable in short- and long-term perspectives after a morally stressful event? (2) What individual emotion-focused coping strategies are favorable in short- and long-term perspectives after a morally stressful event? To address these questions, we conducted a quantitative study in military contexts in Sweden and Norway on current and upcoming military officers (n = 331), testing a theoretical model built upon a recently developed qualitative study. The data were analyzed using factor analysis, multiple regression analysis, and subgroup analyses. The results indicated that an individual's restriction of emotion in order to achieve an organizational goal, which results in emotional dissonance, can be an effective short-term strategy for both the individual and the organization; however, it appears to be unfavorable in a long-term perspective, where it can result in negative reactions. Our results are intriguing because they showed an increased percentage of reported negative long-term reactions (13%), indicating PTSD-related symptoms, in comparison to previous Swedish studies that reported lower PTSD symptomatology.
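
The abstract names factor analysis and multiple regression as the analysis tools. Purely as an illustration of that analytic step, here is a minimal OLS regression sketch on simulated data; the variable names, effect sizes, and data below are hypothetical stand-ins, not the study's measures.

```python
# Illustrative only: regress long-term negative reactions on
# emotion-regulation measures, mirroring the analysis type named above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 331  # sample size reported in the abstract

emotional_dissonance = rng.normal(0, 1, n)  # hypothetical factor score
coping_strategies = rng.normal(0, 1, n)     # hypothetical factor score
# Simulated outcome: dissonance raises, adaptive coping lowers, reactions.
long_term_neg = (0.4 * emotional_dissonance
                 - 0.3 * coping_strategies
                 + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack([emotional_dissonance, coping_strategies]))
model = sm.OLS(long_term_neg, X).fit()
print(model.summary(xname=["const", "dissonance", "coping"]))
```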

Keywords: emotion culture, emotion coping, emotion management, military

Procedia PDF Downloads 596
1294 Finite Element Analysis of Shape Memory Alloy Stents in Coronary Arteries

Authors: Amatulraheem Al-Abassi, K. Khanafer, Ibrahim Deiab

Abstract:

The coronary artery stent is a promising technology that can treat various coronary diseases. Materials used for manufacturing medical stents should be highly biocompatible. Stent alloys, in particular, show remarkably promising clinical outcomes; however, there are threats of restenosis (the recurrence of artery narrowing due to fatty plaque), stent recoil, or, in the long term, stent fracture. Stents made of nickel-titanium (Nitinol), however, can bear extensive plastic deformation and resist restenosis. This shape memory alloy has outstanding mechanical properties, including biocompatibility, superelasticity, and recovery of its original shape under certain loads. Stent failure may cause complications in vascular diseases and possibly blockage of blood flow. Thus, studying the behavior of the stent under different medical conditions will help doctors and cardiologists predict when it is necessary to change the stent in order to prevent severe morbidity outcomes. To the best of our knowledge, few published papers analyze stent behavior with regard to the contact surfaces of the plaque layer and blood vessel. Thus, stent material properties will be discussed in this investigation to highlight the mechanical and clinical differences between various stents. This research analyzes the performance of a Nitinol stent in a well-known stent design to determine its stress-bearing behavior and its displacement in blood vessels, in comparison to stents made of other biocompatible materials. Finite element analysis is the core of this study: a physically representative model will be discussed to show the distribution of stress and strain along the interaction surface between the stent and the artery, and the reaction of vascular tissue to the stent will be evaluated to predict the possibility of restenosis within the treated area.
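
The paper's full model is a 3-D stent/plaque/artery contact problem with a superelastic material law, well beyond a short snippet, but the assemble-solve-postprocess cycle at the heart of any finite element analysis can be shown on a one-dimensional elastic bar. The sketch below is purely illustrative; the modulus (an assumed austenitic Nitinol value), geometry, and load are placeholders, not the study's.

```python
# Minimal 1-D finite element sketch: axially loaded elastic bar.
import numpy as np

E = 75e9    # assumed austenitic Nitinol Young's modulus, Pa
A = 1e-6    # cross-sectional area, m^2
L = 0.01    # bar length, m
n_el = 10   # number of 2-node elements
F = 100.0   # tip load, N

n_nodes = n_el + 1
le = L / n_el
k = E * A / le  # element stiffness

K = np.zeros((n_nodes, n_nodes))
for e in range(n_el):
    # Assemble the 2x2 element stiffness into the global matrix.
    K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n_nodes)
f[-1] = F

# Fix node 0 (u = 0) and solve the reduced system.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Element stress = E * strain; uniform here, as expected for a uniform bar.
stress = E * np.diff(u) / le
print("tip displacement [m]:", u[-1])
print("element stresses [MPa]:", stress / 1e6)
```

The same cycle, with contact conditions and a phase-transformation material model in place of Hooke's law, underlies the stent-artery simulations described in the abstract.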

Keywords: shape memory alloy, stent, coronary artery, finite element analysis

Procedia PDF Downloads 200
1293 A Model of Human Security: A Comparison of Vulnerabilities and Timespace

Authors: Anders Troedsson

Abstract:

For us humans, risks are intimately linked to human vulnerabilities: where there is vulnerability, there is potentially insecurity, and risk. Reducing vulnerability through compensatory measures means increasing security and decreasing risk. The paper suggests that a meaningful way to approach the study of risks (including threats, assaults, crises, etc.) is to understand the vulnerabilities these external phenomena evoke in humans. As is argued, the basis of risk evaluation, as well as of responses, is the more or less subjective perception by the individual person, or group of persons, exposed to the external event or phenomenon in question. This perception will be determined primarily by the vulnerability or vulnerabilities that the external factor is perceived to evoke. In this way, risk perception is primarily an inward dynamic rather than an outward one. Therefore, a route towards an understanding of the perception of risks is a closer scrutiny of the vulnerabilities they can evoke, thereby approaching an understanding of what the paper calls the essence of risk (including threat, assault, etc.), or that which a certain perceived risk means to an individual or group of individuals. As a necessary basis for gauging the wide spectrum of potential risks and their meaning, the paper proposes a model of human vulnerabilities, drawing, inter alia, on a long tradition of needs theory. In order to account for the subjectivity factor, which mediates between the innate vulnerabilities on the one hand and the event or phenomenon out there on the other, an ensuing ontological discussion about the timespace characteristics of risk, threat, and assault as perceived by humans leads to the positing of two dimensions. These two dimensions are applied to the vulnerabilities, resulting in a model featuring four realms of vulnerabilities that are related to each other and together represent a dynamic whole. In approaching the problem of risk perception, the paper thus defines the relevant realms of vulnerabilities, depicting them as a dynamic whole. With reference to a substantial body of literature and a growing international policy trend since the 1990s, this model is put in the language of human security, a concept relevant not only for international security studies and policy but also for other academic disciplines and spheres of human endeavor.

Keywords: human security, timespace, vulnerabilities, risk perception

Procedia PDF Downloads 335
1292 Load Comparison between Different Positions during Elite Male Basketball Games: A Sport Metabolomics Approach

Authors: Kayvan Khoramipour, Abbas Ali Gaeini, Elham Shirzad, Øyvind Sandbakk

Abstract:

Basketball has different positions with individual movement profiles, which may influence metabolic demands. Accordingly, the present study aimed to compare the movement and metabolic load between different positions during elite male basketball games. The five main players of 14 teams (n = 70) who participated in the 2017-18 Iranian national basketball leagues were selected as participants. The players were defined as backcourt (posts 1-3) and frontcourt (posts 4-5). Video-based time motion analysis (VBTMA) was performed based on players' individual running and shuffling speeds using Dartfish software. Movements were classified into high- and low-intensity running with and without the ball, high- and low-intensity shuffling, and static movements. Mean frequency, duration, and distance were calculated for each class, except for static movements, where only frequency was calculated. Saliva samples were collected from each player before and after 40-minute basketball games and analyzed using metabolomics. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were used for the metabolomics data, and independent t-tests for the VBTMA data. Movement frequency, duration, and distance were higher in backcourt players (all p ≤ 0.05), while static movement frequency did not differ. Saliva samples showed that the levels of taurine, succinic acid, citric acid, pyruvate, glycerol, acetoacetic acid, acetone, and hypoxanthine were all higher in backcourt players, whereas lactate, alanine, 3-methylhistidine, and methionine were higher in frontcourt players. Based on metabolomics, we demonstrate that backcourt and frontcourt players have different metabolic profiles during games: backcourt players clearly move more during games and therefore rely more on aerobic energy, whereas frontcourt players rely more on anaerobic energy systems, in line with their less dynamic but more static movement patterns.
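
The two analysis strands named in the abstract, multivariate analysis of the saliva metabolites and t-tests on the movement variables, can be sketched on simulated data. Everything below is a stand-in: the group sizes follow from the design (three backcourt and two frontcourt players per team), but the metabolite matrix, effect sizes, and distances are invented, and the PLS-DA step is not reproduced.

```python
# Illustrative sketch: PCA on simulated metabolite data + t-test on VBTMA data.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_back, n_front, n_metab = 42, 28, 12  # 3 and 2 players per team; 12 assumed metabolites

# Simulated metabolite matrices (rows = players, columns = metabolites).
X = np.vstack([rng.normal(0.0, 1.0, (n_back, n_metab)),
               rng.normal(0.5, 1.0, (n_front, n_metab))])
groups = np.array(["backcourt"] * n_back + ["frontcourt"] * n_front)

# PCA after autoscaling, the usual metabolomics preprocessing.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print("PC1 group means:",
      scores[groups == "backcourt", 0].mean(),
      scores[groups == "frontcourt", 0].mean())

# Independent t-test on a simulated movement-distance variable (VBTMA side).
dist_back = rng.normal(6200, 400, n_back)   # metres per game, assumed scale
dist_front = rng.normal(5800, 400, n_front)
t, p = stats.ttest_ind(dist_back, dist_front)
print(f"movement distance: t = {t:.2f}, p = {p:.4f}")
```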

Keywords: basketball, metabolomics, saliva, sport loadomics

Procedia PDF Downloads 115
1291 Efficacy of Thrust on Basilar Spheno Synchondrosis in Boxers with Ocular Convergence Deficit: Comparison of Thrust and Therapeutic Exercise, a Pilot Randomized Controlled Trial

Authors: Andreas Aceranti, Stefano Costa

Abstract:

The aim of this study was to demonstrate that manipulative treatment combined with therapeutic exercise therapy is more effective than therapeutic exercise alone in the short-term treatment of eye convergence disorders in boxers. A pilot randomized controlled trial (RCT) was performed at our physiotherapy practices. Thirty adult subjects who practice boxing were selected, after an initial screening based on the Convergence Insufficiency Symptom Survey (CISS) test (scores greater than or equal to 10), from an initial sample of 50 subjects. The 30 recruits were evaluated by an orthoptist using prisms to determine the diopters of each eye and were divided into two groups (experimental and control). The members of the experimental group underwent manipulation of the lateral strain of the sphenoid from the side contralateral to the eye with fewer diopters, followed immediately by a sequence of three ocular motor exercises. The control group, on the other hand, received only the ocular motor treatment. A secondary outcome was also recorded, showing that changes in ocular motricity also affected cervical rotation. Analysis of the data showed that, in the short term, the experimental treatment was superior to the control to a statistically significant extent, both in the prismatic delta of the right eye (median 0 OT without manipulation and 10 OT with manipulation) and in that of the left eye (median 0 OT without manipulation and 5 OT with manipulation). Cervical rotation also showed better values in the experimental group, with a median right rotation of 4° without manipulation and 6° with thrust; left rotation presented a median of 2° without manipulation and 7° with thrust. From the results that emerged, the treatment was effective. It would be desirable to increase the sample size and set up a timeline to see whether the net improvements obtained in the short term are also maintained in the medium to long term.

Keywords: boxing, basilar spheno synchondrosis, ocular convergence deficit, osteopathic treatment

Procedia PDF Downloads 88
1290 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer adaptive tests (CATs) are among the most efficient ways of testing the cognitive abilities of students. CATs are based on item response theory (IRT), in which items are selected by statistical criteria such as maximum information (or selection from the posterior) and ability is estimated with maximum-likelihood (ML) or maximum a posteriori (MAP) estimators. This study aims at combining classical and Bayesian approaches to IRT to create a dataset that is then fed to a neural network, which automates the process of ability estimation, and at comparing the result to traditional CAT models designed using IRT. The study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementation. On building the models and comparing them, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be used beneficially in back-ends to reduce time complexity: the IRT model has to re-estimate the ability every time it receives a request, whereas a trained regressor can produce a prediction in a single step. This study also proposes a new kind of framework whereby the neural network model incorporates feature sets beyond the normal IRT feature set and uses a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, giving models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
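
The core idea, training a network to replace per-request likelihood maximisation with a single forward pass, can be sketched end to end on simulated data. Everything below (the 2PL item parameters, network size, and grid-search ML estimator) is an illustrative assumption; the study's actual PyMC models are not reproduced.

```python
# Hedged sketch: simulate 2PL IRT responses, train an MLP to map response
# patterns to ability, and compare it with a grid-search ML estimate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_items, n_train = 30, 5000

a = rng.uniform(0.8, 2.0, n_items)  # discrimination (assumed range)
b = rng.normal(0.0, 1.0, n_items)   # difficulty

def p_correct(theta):
    # 2PL response probability for each item.
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

theta_train = rng.normal(0.0, 1.0, n_train)
X_train = (rng.uniform(size=(n_train, n_items))
           < p_correct(theta_train)).astype(float)

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_train, theta_train)  # one-off training; serving is a single pass

# Classical ML estimate by grid search over theta, for comparison.
grid = np.linspace(-4, 4, 401)
P = 1.0 / (1.0 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))

def ml_theta(x):
    loglik = (np.log(P) * x + np.log1p(-P) * (1 - x)).sum(axis=1)
    return grid[np.argmax(loglik)]

theta_test = rng.normal(0.0, 1.0, 500)
X_test = (rng.uniform(size=(500, n_items))
          < p_correct(theta_test)).astype(float)

rmse_nn = np.sqrt(np.mean((net.predict(X_test) - theta_test) ** 2))
rmse_ml = np.sqrt(np.mean([(ml_theta(x) - t) ** 2
                           for x, t in zip(X_test, theta_test)]))
print(f"RMSE  neural net: {rmse_nn:.3f}   grid-search ML: {rmse_ml:.3f}")
```

The design trade-off the abstract describes is visible here: the regressor typically loses some accuracy to the likelihood-based estimate but answers each request without re-optimising.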

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 173
1289 Digital Skepticism in a Legal Philosophical Approach

Authors: Dr. Bendes Ákos

Abstract:

Digital skepticism, a critical stance towards digital technology and its pervasive influence on society, presents significant challenges when analyzed from a legal philosophical perspective. This abstract aims to explore the intersection of digital skepticism and legal philosophy, emphasizing the implications for justice, rights, and the rule of law in the digital age. Digital skepticism arises from concerns about privacy, security, and the ethical implications of digital technology. It questions the extent to which digital advancements enhance or undermine fundamental human values. Legal philosophy, which interrogates the foundations and purposes of law, provides a framework for examining these concerns critically. One key area where digital skepticism and legal philosophy intersect is in the realm of privacy. Digital technologies, particularly data collection and surveillance mechanisms, pose substantial threats to individual privacy. Legal philosophers must grapple with questions about the limits of state power and the protection of personal autonomy. They must consider how traditional legal principles, such as the right to privacy, can be adapted or reinterpreted in light of new technological realities. Security is another critical concern. Digital skepticism highlights vulnerabilities in cybersecurity and the potential for malicious activities, such as hacking and cybercrime, to disrupt legal systems and societal order. Legal philosophy must address how laws can evolve to protect against these new forms of threats while balancing security with civil liberties. Ethics plays a central role in this discourse. Digital technologies raise ethical dilemmas, such as the development and use of artificial intelligence and machine learning algorithms that may perpetuate biases or make decisions without human oversight. Legal philosophers must evaluate the moral responsibilities of those who design and implement these technologies and consider the implications for justice and fairness. Furthermore, digital skepticism prompts a reevaluation of the concept of the rule of law. In an increasingly digital world, maintaining transparency, accountability, and fairness becomes more complex. Legal philosophers must explore how legal frameworks can ensure that digital technologies serve the public good and do not entrench power imbalances or erode democratic principles. Finally, the intersection of digital skepticism and legal philosophy has practical implications for policy-making. Legal scholars and practitioners must work collaboratively to develop regulations and guidelines that address the challenges posed by digital technology. This includes crafting laws that protect individual rights, ensure security, and promote ethical standards in technology development and deployment. In conclusion, digital skepticism provides a crucial lens for examining the impact of digital technology on law and society. A legal philosophical approach offers valuable insights into how legal systems can adapt to protect fundamental values in the digital age. By addressing privacy, security, ethics, and the rule of law, legal philosophers can help shape a future where digital advancements enhance, rather than undermine, justice and human dignity.

Keywords: legal philosophy, privacy, security, ethics, digital skepticism

Procedia PDF Downloads 43
1288 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling the illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing antibiotic(s), using micro-diffusion discs, in order to determine their susceptibility (this second culturing stage also lasts 24 h). Other approaches, including genotyping methods, the E-test, and automated methods, have also been developed for testing antimicrobial susceptibility; most are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, combined with new bioinformatics analyses, IR spectroscopy becomes a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli samples, identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using infrared spectroscopy together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identifying antibiotic susceptibility.
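
The abstract does not name the specific multivariate pipeline, but a common pattern for spectra-to-susceptibility classification is dimensionality reduction followed by a linear classifier, with the success rate estimated by cross-validation. The sketch below uses synthetic spectra as stand-ins; the band positions, sample counts per class, and classifier choice are assumptions, not the study's method, and real work would add baseline correction, normalisation, and band selection.

```python
# Illustrative sketch: PCA + LDA classification of synthetic "FTIR spectra"
# into sensitive (0) and resistant (1), scored by 5-fold cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_class, n_channels = 350, 900  # ~700 samples total, absorbance channels

base = rng.normal(0, 1, n_channels)     # shared spectral baseline
shift = np.zeros(n_channels)
shift[400:450] = 0.6                    # assumed resistance-associated band

X = np.vstack([base + rng.normal(0, 1, (n_per_class, n_channels)),
               base + shift + rng.normal(0, 1, (n_per_class, n_channels))])
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = make_pipeline(StandardScaler(), PCA(n_components=20),
                    LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {acc.mean():.2%} +/- {acc.std():.2%}")
```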

Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 170