Search results for: subject base curriculum
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5325

45 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have become widely used in low background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra, where they would otherwise superimpose within a single-energy peak and could compromise the analysis and produce erroneous results. Naturally, this feature is of great importance when radionuclides and their activity concentrations are being identified and high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one has to first perform an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to cover the broader energy ranges of interest. To overcome these difficulties, many researchers have turned to software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), which has proven time and again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector.
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if it is not properly taken into account. In this study, the optimisation of two HPGe detector models through the Geant4 toolkit developed at CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector over the energy ranges of 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
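The calibration quantity the abstract revolves around can be stated concretely. As a minimal illustrative sketch (plain Python, not the authors' Geant4 code, with all numbers invented for illustration): the experimental FEP efficiency for a gamma line is the net peak area divided by the number of photons of that line emitted during the measurement, and a simulated detector model is judged by its relative deviation from this value.

```python
def fep_efficiency(net_peak_counts, activity_bq, live_time_s, emission_prob):
    """Full-energy-peak efficiency for a point-like calibration source:
    counts in the peak divided by photons emitted during the live time."""
    emitted = activity_bq * live_time_s * emission_prob
    return net_peak_counts / emitted

def relative_deviation(eff_simulated, eff_measured):
    """Simulation-vs-experiment deviation used to judge a detector model."""
    return (eff_simulated - eff_measured) / eff_measured

# Illustrative example: a Cs-137 source (661.7 keV line,
# emission probability 0.851) measured for one hour.
eff = fep_efficiency(net_peak_counts=150_000, activity_bq=5_000,
                     live_time_s=3_600, emission_prob=0.851)
```

In an optimisation loop of the kind the abstract describes, detector parameters (dead layer thickness, crystal-to-window distance, etc.) would be varied in the simulation until `relative_deviation` is minimised across all calibration lines.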

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 93
44 Modern Technology for Strengthening Concrete Structures Makes Them Resistant to Earthquakes

Authors: Mohsen Abdelrazek Khorshid Ali Selim

Abstract:

Disadvantages and errors of current concrete reinforcement methods: The reinforcement methods currently adopted in most parts of the world, under their various schools and names, rely on the so-called concrete slab system, in which the slabs are semi-independent and isolated from one another and from the surrounding columns and beams: the reinforcing steel crosses from one slab to another, or from a slab into the adjacent columns or beams and vice versa, by only a few centimeters. The same applies to the columns that support the building, where the reinforcement extends from the slabs or beams into the columns, or vice versa, by only a few centimeters, and likewise from the top of a column into the ceiling. The same is repeated in the beams that connect the columns and separate the slabs, where the reinforcement crosses from one beam to another, or from a beam into the adjacent slabs or columns, by only a few centimeters. As a result, the basic structural elements of the building (columns, slabs, and beams) all work in isolation from each other and from their surroundings. This traditional method of reinforcement may be adequate and durable in geographical areas not exposed to earthquakes, where all loads and tensile forces in the building are directed vertically downward by gravity and are borne directly by the vertical reinforcement of the building.
In an earthquake, however, the loads and tensile forces in the building shift from the vertical to the horizontal direction, at an angle of inclination that depends on the strength of the earthquake, and most of them are borne by the horizontal reinforcement extending between the basic elements of the building, such as the columns, slabs, and beams. Since the reinforcement crossing between columns, slabs, and beams does not exceed a few centimeters, the tensile strength, cohesion, and bonding between the various parts of the building are very weak, which causes buildings to disintegrate and collapse in the horrific manner seen in the Turkey-Syria earthquake of February 2023, which brought down tens of thousands of buildings in a few seconds and left more than 50,000 dead, hundreds of thousands injured, and millions displaced. Description of the new earthquake-resistant model: The new model for reinforcing concrete buildings and constructions rests on a theory that we formulate as follows: the tensile strength, cohesion, and bonding between the basic parts of a concrete building (columns, beams, and slabs) increase as the reinforcing steel bars become longer, extend and branch further, and are shared between the different parts of the building. In other words, the strength, solidity, and cohesion of a concrete building increase, and it becomes earthquake resistant, as the reinforcing bars lengthen, extend, branch, and are shared among columns, beams, and slabs. The column bars must therefore run uncut, crossing from one floor to the next until their end.
Likewise, the beam bars must run uncut from one beam to another, with their ends anchored at the bottoms of the columns adjacent to the beams. The same applies to the slab bars, which must run uncut from one slab to another, with their ends anchored either under the adjacent columns or inside the beams adjacent to the slabs, as follows. First, reinforcing the columns: the columns receive the lion's share of the reinforcing steel in this model, in both type and quantity, as they contain two types of bars. The first type consists of large-diameter bars rising from the base of the building, forming the backbone of the column; these must keep their full standard length of 12 meters or more and extend over three floors, and when further floors are raised, bars of the same diameter and length are added at the top after the second floor. The second type consists of smaller-diameter bars, the same ones used to reinforce the beams and slabs: the bars reinforcing the beams and slabs facing each column are bent down inside that column along its entire length. This requires a practice most engineers do not prefer, namely casting the columns and the roof in one pour, but we favor this method because it allows the reinforcing bars of both the beams and the slabs to extend to the bottoms of the columns, so that the whole building becomes a single cohesive, earthquake-resistant concrete block. Second, reinforcing the beams: the beam bars must likewise run their full length of 12 meters or more without cutting, with their ends bent down inside the column at the start of the beam, reaching its bottom.
The bars then run inside the beam so that their other end drops beneath the facing column at the end of the beam; a bar may cross over the head of a column, pass through another adjacent beam, and rest at the bottom of a third column, depending on the lengths of the bars and beams. Third, reinforcing the slabs: the slab bars must also run their full length of 12 meters or more without cutting, and two cases are distinguished. In the first case, the bars facing the columns have their ends dropped inside one of the columns; the bars then cross inside the adjacent slab and their other end drops below the opposite column. A bar may cross over the head of the adjacent column, pass through another adjacent slab, and rest at the bottom of a third column, depending on the dimensions of the slabs and the lengths of the bars. In the second case, the bars facing the beams have their ends bent into a square or rectangle matching the width and height of the beam; this square or rectangle is dropped inside the beam at the start of the slab and serves as a stirrup for the beam. The bars then run the length of the slab, and at its end they are bent down to the bottom of the adjacent beam in a U shape, after which they continue inside the adjacent slab. This is repeated in the same way inside the other adjacent beams until the end of the bar, which is finally bent down into a square or rectangle inside the beam, as at its beginning.

Keywords: earthquake resistant buildings, earthquake resistant concrete constructions, new technology for reinforcement of concrete buildings, new technology in concrete reinforcement

Procedia PDF Downloads 40
43 Evaluation of the Biological Activity of New Antimicrobial and Biodegradable Textile Materials for Protective Equipment

Authors: Safa Ladhari, Alireza Saidi, Phuong Nguyen-Tri

Abstract:

During health crises such as COVID-19, the use of disposable protective equipment (PE) (masks, gowns, etc.) causes long-term problems, increasing the volume of hazardous waste that must be handled safely and expensively. Producing antimicrobial, reusable textile materials is therefore highly desirable to decrease the use of disposable PE that must be treated as hazardous waste. In addition, if these items are used regularly in the workplace or for daily activities by the public, they will most likely end up in household waste, and if contaminated, they may pose a high risk of contagion to waste collection workers. Therefore, to protect the whole population in times of sanitary crisis, these materials must be made resilient to the challenges of daily activities without compromising public health or the environment and without depending on external technologies and producers. Moreover, the materials frequently used for PE are plastics of petrochemical origin. The subject of the present work is replacing these petroplastics with a bioplastic, since it offers better biodegradability. The chosen polymer is polyhydroxybutyrate (PHB), a member of the polyhydroxyalkanoate family synthesized by different bacteria. It has properties similar to conventional plastics; however, it is renewable, biocompatible, and has attractive barrier properties compared to other polyesters. These characteristics make it well suited for PE applications. The current research topic focuses on the preparation and rapid evaluation of the biological activity of nanotechnology-based antimicrobial agents used to treat the textile surfaces of PE. This work aims to provide antibacterial solutions that can be transferred to workplace applications in the fight against short-term biological risks.
Three main objectives are pursued in this research: 1) the development of suitable methods for depositing antibacterial agents on the surface of textiles; 2) the development of a method for measuring the antibacterial activity of the prepared textiles; and 3) the study of the biodegradability of the prepared textiles. The studied textile is a non-woven fabric based on a biodegradable polymer manufactured by the electrospinning method. Indeed, nanofibers are increasingly studied due to their unique characteristics, such as a high surface-to-volume ratio, improved thermal, mechanical, and electrical properties, and confinement effects. The electrospun film will be surface-modified by plasma treatment and then loaded with hybrid antibacterial silver and titanium dioxide nanoparticles by the dip-coating method. This work uses simple methods and emerging technologies to fabricate nanofibers with a size and morphology suitable for use as components of protective equipment. The antibacterial agents generally used are based on silver, zinc, copper, etc.; however, to our knowledge, few researchers have combined hybrid nanoparticles with biodegradable polymers to ensure antibacterial activity. We will also exploit visible light to improve the antibacterial effectiveness of the fabric, which differs from the traditional contact mode of killing bacteria and constitutes an innovation in active protective equipment. Finally, this work will enable the development of new antibacterial textile materials through a simple and ecological method.

Keywords: protective equipment, antibacterial textile materials, biodegradable polymer, electrospinning, hybrid antibacterial nanoparticles

Procedia PDF Downloads 54
42 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-timing correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. The WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with the number of components set to 21. Fifteen components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language.
The dataset was randomly split into training (75%) and test (25%) sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensori-motor network I. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best network for discriminating between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, consistent with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
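The pipeline above (75/25 split, RF with Gini-based importance, RBF-SVM, then re-training on the top-ranked feature) can be sketched as follows. This is a hypothetical Python/scikit-learn mirror of the study's R workflow, with synthetic data standing in for the real 37×15 network-signal matrix; the planted signal in feature 0 stands in for the sensori-motor network I.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(37, 15)              # 37 subjects, 15 network mean signals
y = np.array([0] * 19 + [1] * 18)  # 19 controls, 18 early-MS patients
X[y == 1, 0] += 1.5                # synthetic group effect in one network

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.75, stratify=y, random_state=0)

# RF: intrinsic Gini-based feature importance ranks the networks.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
top_feature = int(np.argmax(rf.feature_importances_))

# RBF-SVM trained on the same split for comparison.
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

# Re-train both classifiers on the single top-ranked network only.
rf_top = RandomForestClassifier(n_estimators=500, random_state=0)
rf_top.fit(X_tr[:, [top_feature]], y_tr)
acc_rf_top = rf_top.score(X_te[:, [top_feature]], y_te)
```

With real data, the RFE step for the SVM would be added via `sklearn.feature_selection.RFE`; the ranking and accuracy values here are determined by the synthetic data, not the study's.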

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 222
41 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes

Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi

Abstract:

The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells, which are clipped to accommodate the domain geometry, must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip, delivers an expression for the singular stress field. By applying the problem specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. 
Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretizations, since they rely only on the ratio of mode-one to mode-two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which increases the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
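The accuracy claims above are typically checked against closed-form linear elastic fracture mechanics solutions. As a hedged reference sketch (the standard textbook formula, not the SBFEM computation itself), the mode-I stress intensity factor for a crack of half-length a under remote stress σ is K_I = Y·σ·√(πa), with Y the geometry correction factor mentioned in the abstract; a numerically computed gSIF would be validated against this value.

```python
import math

def mode_i_sif(stress, half_crack_length, geometry_factor=1.0):
    """Analytical mode-I stress intensity factor K_I = Y * sigma * sqrt(pi*a).

    For a center crack in an infinite plate under remote tension, Y = 1;
    finite geometries use tabulated correction factors Y > 1.
    """
    return geometry_factor * stress * math.sqrt(math.pi * half_crack_length)

# Center crack of total length 2a = 0.02 m under 100 MPa remote tension.
k1 = mode_i_sif(stress=100e6, half_crack_length=0.01)   # units: Pa*sqrt(m)
```

A post-processing scheme of the kind proposed would be judged by how closely its gSIF output converges to such reference values as the cracked element is h- or p-refined.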

Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees

Procedia PDF Downloads 117
40 Health and Climate Changes: "Ippocrate" a New Alert System to Monitor and Identify High Risk

Authors: A. Calabrese, V. F. Uricchio, D. di Noia, S. Favale, C. Caiati, G. P. Maggi, G. Donvito, D. Diacono, S. Tangaro, A. Italiano, E. Riezzo, M. Zippitelli, M. Toriello, E. Celiberti, D. Festa, A. Colaianni

Abstract:

Climate change has a severe impact on human health. A vast literature demonstrates that temperature increase is causally related to cardiovascular problems and represents a high risk for human health, but few studies propose a solution. In this work, we studied how climate influences human health parameters through the analysis of climatic conditions in an area of the Apulia Region: the Municipality of Capurso. At the same time, the medical personnel involved identified a set of variables useful for defining an index describing health condition. These scientific studies are the basis of an innovative alert system, IPPOCRATE, whose aim is to assess climate risk and share information with the population at risk to support prevention and mitigation actions. IPPOCRATE is an e-health system designed to provide technological support to the analysis of climate-related health risk and to provide tools for the prevention and management of critical events. It is the first integrated system for the prevention of human risk caused by climate change. IPPOCRATE calculates risk by weighting meteorological data with the vulnerability of monitored subjects, and it uses mobile and cloud technologies to acquire and share information over different data channels. It is composed of four components. Multichannel Hub: the ICT infrastructure used to feed the IPPOCRATE cloud with different types of data coming from remote monitoring devices or imported from meteorological databases. Such data are ingested, transformed, and elaborated in order to be dispatched towards the mobile app and VoIP phone systems. The IPPOCRATE Multichannel Hub uses open communication protocols to provide a set of APIs for interfacing IPPOCRATE with third-party applications. Internally, it uses a non-relational paradigm to create a flexible and highly scalable database.
WeHeart and Smart Application: the wearable device WeHeart is equipped with sensors designed to measure the following biometric variables: heart rate, systolic and diastolic blood pressure, blood oxygen saturation, body temperature, and blood glucose for diabetic subjects. WeHeart is designed to be easy to use and non-invasive: for data acquisition, users need only wear it and connect it to the Smart Application via the Bluetooth protocol. EasyBox: designed to take advantage of new technologies related to e-health care, EasyBox allows the user to fully exploit all IPPOCRATE features. Its name reveals its purpose as a container for the various devices that may be included depending on user needs. Territorial Registry: the IPPOCRATE web module reserved to medical personnel for monitoring, research, and analysis activities. The Territorial Registry gives access to all information gathered by IPPOCRATE through a GIS system, in order to execute spatial analyses combining geographical data (climatological information and monitored data) with the clinical history of users and their personal details. It was designed for different types of users: control rooms managed by wide-area health facilities, single health care centers, or single doctors, and it manages this hierarchy by diversifying access to system functionalities. IPPOCRATE is the first e-health system focused on climate risk prevention.
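The core computation the abstract describes, weighting meteorological data with subject vulnerability, can be illustrated with a minimal hypothetical sketch. All function names, thresholds, and subject values below are invented for illustration and are not IPPOCRATE's actual model.

```python
def heat_hazard(temperature_c, threshold_c=30.0, scale_c=10.0):
    """Map air temperature onto a 0..1 hazard level above a comfort
    threshold (illustrative piecewise-linear ramp)."""
    return min(max((temperature_c - threshold_c) / scale_c, 0.0), 1.0)

def subject_risk(temperature_c, vulnerability):
    """Combined 0..1 risk score: meteorological hazard weighted by a
    per-subject vulnerability derived from clinical history."""
    return heat_hazard(temperature_c) * vulnerability

# Subjects as (forecast temperature, vulnerability); alert above 0.5.
subjects = {"s1": (38.0, 0.9), "s2": (38.0, 0.2), "s3": (25.0, 0.9)}
alerts = [sid for sid, (t, v) in subjects.items()
          if subject_risk(t, v) >= 0.5]
```

The design point this captures is that the same heat wave triggers an alert only for vulnerable subjects (here only "s1"), which is why the system couples meteorological feeds with the Territorial Registry's clinical data rather than issuing blanket warnings.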

Keywords: climate change, health risk, new technological system

Procedia PDF Downloads 838
39 Will My Home Remain My Castle? Tenants’ Interview Topics regarding an Eco-Friendly Refurbishment Strategy in a Neighborhood in Germany

Authors: Karin Schakib-Ekbatan, Annette Roser

Abstract:

According to the Federal Government’s plans, the German building stock should be virtually climate neutral by 2050. Thus, the “EnEff.Gebäude.2050” funding initiative was launched, complementing the projects of the Energy Transition Construction research initiative. Beyond the construction and renovation of individual buildings, solutions must be found at the neighborhood level. The subject of the presented pilot project is a building ensemble from the Wilhelminian period in Munich, which is planned to be refurbished based on a socially compatible, energy-saving, technically innovative modernization concept. The building ensemble, with about 200 apartments, is part of a building cooperative. To create an optimized network and possible synergies between researchers and projects of the funding initiative, a scientific accompanying research project was established for cross-project analyses of findings and results in order to identify further research needs and trends. The project is thus characterized by an interdisciplinary approach that combines constructional, technical, and socio-scientific expertise, based on a participatory understanding of research that involves the tenants at an early stage. The research focus is on gaining insights into the tenants’ comfort requirements, attitudes, and energy-related behaviour. Both qualitative and quantitative methods are applied, based on the Technology Acceptance Model (TAM). The core of the refurbishment strategy is a wall heating system intended to replace conventional radiators. Wall heating provides comfortable and consistent radiant heat instead of convection heat, which often causes drafts and dust turbulence. Besides comfort and health, the advantage of wall heating systems is energy-saving operation. All apartments would be supplied by a uniform basic temperature control system (a perceived room temperature of around 18 °C, i.e., 64.4 °F), which could be adapted to individual preferences via individual heating options (e.g., infrared heating).
The new heating system would affect the furnishing of the walls, in that the wall surface must not be covered too much with cupboards or pictures. Measurements and simulations of the energy consumption of an installed wall heating system are currently being carried out in a show apartment in this neighborhood to investigate energy-related and economic aspects as well as thermal comfort. In March, interviews were conducted with a total of 12 people in 10 households. The interviews were analyzed with MAXQDA. The main issue raised in the interviews was the fear of reduced self-efficacy within one’s own walls (not having sufficient individual control over the room temperature or being very limited in furnishing). Other issues concerned the impact that the construction works might have on daily life, such as noise or dirt. Despite their basically positive attitude towards a climate-friendly refurbishment concept, tenants were very concerned about the further development of the project, and they expressed a great need for information events. The results of the interviews will be used for project-internal discussions on technical and psychological aspects of the refurbishment strategy, in order to design accompanying workshops with the tenants as well as to prepare a written survey involving all households of the neighborhood.

Keywords: energy efficiency, interviews, participation, refurbishment, residential buildings

Procedia PDF Downloads 104
38 Developing the Collaboration Model of Physical Education and Sport Sciences Faculties with Service Section of Sport Industrial

Authors: Vahid Saatchian, Seyyed Farideh Hadavi

Abstract:

The main aim of this study was to develop a collaboration model between physical education and sport sciences faculties and the service section of the sport industry. The study used a qualitative method: after identifying a priority list of areas of collaboration between the colleges and the service section of the sport industry, and using purposive and snowball sampling, the researcher conducted in-depth interviews with 22 experts working in the field of the research topic. The interviews were analyzed through qualitative coding (open, axial, and selective) into five categories: causal conditions, basic conditions, intervening conditions, action/interaction, and strategies. For the causal conditions, 10 labels emerged; because of their heterogeneity, they were grouped under a single overall theme. For the basic conditions, 59 labels were identified in open coding and grouped into 14 general concepts; combining these categories and the relationships between them yielded five final internal categories (culture, intelligence, marketing, environment, and ultra-powers). The intervening conditions comprised five overall scopes of social, economic, cultural, legal, and political factors, collectively named the macro environment. As strategies, eight areas covering the internal and external challenges of relationship management between the two sides emerged: knowledge and awareness, external view, human resources, organizational culture, the parties’ thinking, a responsible unit/integrated management, laws and regulations, and marketing.
Finally, the consequences, identified in line with the strategies, comprised cultural, governmental, educational, scientific, infrastructural, international, social, economic, technological, and political development, largely consistent with the strategies. These findings could help sport managers apply scientific collaboration management, and its consequences, in sport institutions. An enduring and systematic relationship with long-term cooperation between the two sides requires strategic planning based on the cooperation of all stakeholders; through this, a competitive advantage for both university and industry can be obtained in today’s turbulent, constantly changing environment. Without vision and strategic thinking for cooperation in the planning of university and industry, opportunities turn into problems instead of being exploited.

Keywords: university and industry collaboration, sport industry, physical education and sport science college, service section of sport industry

Procedia PDF Downloads 362
37 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is an SME-appropriate approach to efficient, temporarily deployable data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE) transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and from which motion profiles can be derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for interpreting the relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition, as well as methods for visualizing relative distance data. Because an existing categorization of the database with respect to process types is available, classification methods from the field of supervised learning (e.g. support vector machines) are used.
Achieving the necessary data quality requires the selection of suitable methods and filters for smoothing the signal variations that occur in the RSSI, the integration of methods for determining correction factors for possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, the correction models and the methods used for visualizing the position profiles. Studies have already shown that the accuracy of classification algorithms can be improved by up to 30% through selected parameter variation; similar potential can be observed when varying the parameters of the methods and filters for signal smoothing. There is therefore increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the methods for signal smoothing, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: the automated assignment of defined process classes to distance data using selected classification algorithms, and the visualization and reporting via a graphical user interface (GUI).
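The two preprocessing steps named above, smoothing the RSSI and converting it to a distance estimate, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not state which smoothing filter or propagation model is used, so the sketch assumes a trailing moving average and the log-distance path-loss model commonly applied to BLE ranging, with illustrative parameter values.

```python
import math

def smooth_rssi(samples, window=5):
    """Trailing moving-average filter to damp RSSI fluctuations (dBm)."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i - lo + 1))
    return out

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: estimated distance in metres.

    tx_power: expected RSSI at a 1 m reference distance (assumed value);
    n: path-loss exponent (~2 in free space, higher indoors).
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

smoothed = smooth_rssi([-60, -70, -62, -71, -63])
distances = [rssi_to_distance(r) for r in smoothed]
```

In a real deployment, the correction factors mentioned in the abstract would adjust `tx_power` and `n` per interference source; those calibration details are not given here.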

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 105
36 The Politics of Health Education: A Cultural Analysis of Tobacco Control Communication in India

Authors: Ajay Ivan

Abstract:

This paper focuses on the cultural politics of health-promotional and disease-preventive pedagogic practices in the context of the national tobacco control programme in India. Tobacco consumption is typically problematised as a paradox: tobacco poses objective health risks such as cancer and heart disease, but its production, sale and export contribute significantly to state revenue. A blanket ban on tobacco products, therefore, is infeasible though desirable. Instead, initiatives against tobacco use have prioritised awareness creation and behaviour change to reduce its demand. This paper argues that public health communication is not, as commonly assumed, an apolitical and neutral transmission of disease-preventive information. Drawing on Michel Foucault’s concept of governmentality, it examines such campaigns as techniques of disciplining people rather than coercing them to give up tobacco use, which would be both impractical and counter-productive. At the level of the population, these programmes constitute a security mechanism that reduces risks without eliminating them, so as to ensure an optimal level of public health without hampering the economy. Anti-tobacco pedagogy thus aligns with a contemporary paradigm of health that emphasises risk-assessment and lifestyle management as tools of governance, using pedagogic techniques to teach people how to be healthy. The paper analyses the pictorial health warnings on tobacco packets and anti-tobacco advertisements in movie theatres mandated by the state, along with awareness-creation messages circulated by anti-tobacco advocacy groups in India, to show how they discursively construct tobacco and its consumption as a health risk. Smoking is resignified from a pleasurable and sociable practice to a deadly addiction that jeopardises the health of those who smoke and those who passively inhale the smoke. 
While disseminating information about the health risks of tobacco, these initiatives employ emotional and affective techniques of persuasion to discipline tobacco users. They incite fear of death and of social ostracism to motivate behaviour change, complementing their appeals to reason. Tobacco is portrayed as a grave moral danger to the family and a detriment to the vitality of the nation, such that using it contradicts one’s duties as a parent or citizen. Awareness programmes reproduce prevailing societal assumptions about health and disease, normalcy and deviance, and proper and improper conduct. Pedagogy thus functions as an apparatus of public health governance, recruiting subjects as volunteers in their own regulation and aligning their personal goals and aspirations to the objectives of tobacco control. The paper links this calculated management of subjectivity and the self-responsibilisation of the pedagogic subject to a distinct mode of neoliberal civic governance in contemporary India. Health features prominently in this mode of governance that serves the biopolitical obligation of the state as laid down in Article 39 of the Constitution, which includes a duty to ensure the health of its citizens. Insofar as the health of individuals is concerned, the problem is how to balance this duty of the state with the fundamental right of the citizen to choose how to live. Public health pedagogy, by directing the citizen’s ‘free’ choice without unduly infringing upon it, offers a tactical solution.

Keywords: public health communication, pedagogic power, tobacco control, neoliberal governance

Procedia PDF Downloads 58
35 Extension of Moral Agency to Artificial Agents

Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney

Abstract:

Artificial Intelligence (A.I.) pervades various aspects of modern life, from the machine learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I., meaning that humans will be absent from the decision-making process. The question comes naturally: when an A.I. does something wrong, when its behavior is harmful to the community and its actions go against the law, who is to be held responsible? This research's subject matter within A.I. and robot ethics focuses mainly on robot rights, and its ultimate objective is to answer the following questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood, and what requirements must be met to be a moral agent (and therefore accountable)? (iii) Can an A.I. be a moral agent (the ontological requirements)? And finally, (iv) ought it to be one (the ethical implications)? To answer these questions, the research project was carried out as a collaboration between the School of Computer Science at the Technical University of Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation of the project. Firstly, it was found that all rights are positive and based on consensus; they change with time based on circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary to be a moral agent: they are not absolute; in fact, they are constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems, comparing them to those used in philosophy.
Autonomy, free will, intentionality, consciousness and responsibility were identified as the requirements for being considered a moral agent. The work went on to build a symmetrical comparison between personhood and A.I. to bring out the ontological differences between the two. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after completing the philosophical and technical analysis, conclusions were drawn. As underlined in the research questions, there are two issues regarding the assignment of moral agency to artificial agents: first, whether all the ontological requirements are present, and second, whether or not they are present, whether an A.I. ought to be considered an artificial moral agent. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often very theoretical and inconclusive, making it difficult to fully detect these requirements at an experimental level of demonstration. However, from an ethical point of view, it makes sense to consider some A.I. systems as artificial moral agents, hence responsible for their own actions. When artificial agents are considered responsible, norms already existing in our judicial system can be applied to them, such as removing them from society and re-educating them in order to re-introduce them to society. This is in line with how the highest-profile correctional facilities ought to work. Noticeably, this is a provisional conclusion, and research must continue further. Nevertheless, the strength of the presented argument lies in its immediate applicability to real-world scenarios. To refer to the aforementioned incidents involving the killing of innocents: when this thesis is applied, it is possible to hold an A.I. accountable and responsible for its actions. This entails removing it from society by virtue of its unusability, re-programming it and, only when it is properly functioning, re-introducing it successfully.

Keywords: artificial agency, correctional system, ethics, natural agency, responsibility

Procedia PDF Downloads 159
34 Ensemble Sampler For Infinite-Dimensional Inverse Problems

Authors: Jeremie Coullon, Robert J. Webber

Abstract:

We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, Markov chain Monte Carlo (MCMC) methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of the functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal.
This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen-Loeve (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler, which is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters. We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
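The AIES proposal described above, interpolating and extrapolating between walkers, can be sketched with the classic Goodman-Weare stretch move. This is a minimal finite-dimensional illustration, not the authors' FES code: function names, the default stretch parameter `a=2.0`, and the sweep structure are ours.

```python
import numpy as np

def stretch_move(walkers, log_prob, a=2.0, rng=None):
    """One sweep of the affine-invariant stretch move over an ensemble.

    walkers: (n_walkers, dim) array, updated in place.
    log_prob: callable returning the log target density at a point.
    a: stretch scale; proposals use z drawn with density g(z) ~ 1/sqrt(z)
       on [1/a, a], so the move is invariant under affine transformations.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, d = walkers.shape
    for k in range(n):
        # pick a complementary walker j != k
        j = int(rng.integers(n - 1))
        if j >= k:
            j += 1
        # inverse-CDF sample of g(z) on [1/a, a]
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
        # interpolate/extrapolate between walkers k and j
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        log_accept = (d - 1) * np.log(z) + log_prob(proposal) - log_prob(walkers[k])
        if np.log(rng.random()) < log_accept:
            walkers[k] = proposal
    return walkers
```

In the FES scheme described in the abstract, a move like this would be applied only to the low-wavenumber KL coefficients, with a functional random walk handling the remaining components.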

Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction

Procedia PDF Downloads 131
33 A Chemical Perspective to Nineteenth-Century Female Medical Pioneers: Utilizing Mass Spectrometry in the Museum Space

Authors: Elizabeth R. LaFave, Grayson Sink, Anna Vassallo, Samantha Mills, Eli G. Hvastkovs

Abstract:

Throughout history and into modern times, the continuation of male influence over female healthcare has created inadequacies in the availability of and access to treatments, often further limited in rural communities. The historical plight of women in healthcare can be understood by studying the advancements made by women in the field, both through their career arcs and by delving into the treatments they offered. An early example is the case of Martha Ballard (1735-1812), a midwife in Maine who practiced when female practitioners were dismissed in favor of less educated male physicians, a practice that remained well accepted into the twentieth century. To overcome these setbacks, a strategy used by some female practitioners was to develop and market their own remedies in an attempt to better serve female patients. By highlighting the compromises and social manipulation of female entrepreneurs, in comparison with the medicines they developed and used, we can map their ability to carve a specific niche for themselves and their targeted customers. The application of modern chemical approaches in a historical context enhances a variety of perspectives within the museum sphere that are necessary for comprehending and understanding the female plight in both medical care and service. To further examine the overall bias and scrutiny faced by women in the medical field, specifically those undertaking entrepreneurial roles, examples of alternative remedies from female founders were analyzed using these approaches. Modern analytical chemistry techniques, specifically mass spectrometry (MS), have been successful in offering compositional analyses of both labeled and unlabeled ingredients in old medicines. Previously, we analyzed two forms of alternative treatment options created by male medical professionals to address lingering historical questions of purity and validity.
Although primarily sugar-based, both Humphreys' Specifics and Boericke & Tafel remedies also contained unique ingredients, albeit in small quantities, with medicinal properties. Here, we applied the same methodology to study another highly politicized 19th-century debate surrounding the contribution and role of women in the medical profession by analyzing three remedies, each from a different female-led manufacturing company: Mrs. Joe Person's, Lydia Pinkham's, and Winslow's Syrups. Following MS analyses of both labeled and unlabeled ingredients, both Winslow's and Pinkham's remedies proved similar to their male counterparts in advertisement strategy, targeted customer base, and overall composition (primarily sugar-based with small amounts of unique ingredients). In effect, these unbiased chemical assessments are used to dissect the rationality of both market and physician criticism of each individual manufacturer through assessment of authenticity and benefaction, and through comparison among female entrepreneurs and their aims to enter the medical community (i.e., geographic location, market size). Our work aims to increase collaboration between STEM (Science, Technology, Engineering, Mathematics)-based fields and historical museum studies on a larger scale, while also addressing questions of potential bias towards women in the medical community by means of comparison with their male counterparts and in-depth historical analyses that unravel individual strategies for overcoming these setbacks.

Keywords: nineteenth-century medicine, alternative remedies, female healthcare, chemical analyses, mass spectrometry

Procedia PDF Downloads 62
32 Accumulation of Trace Metals in Leaf Vegetables Cultivated in High Traffic Areas in Ghent, Belgium

Authors: Veronique Troch, Wouter Van der Borght, Véronique De Bleeker, Bram Marynissen, Nathan Van der Eecken, Gijs Du Laing

Abstract:

Among the challenges associated with increased urban food production are health risks from food contamination, due to the higher pollution loads in urban areas compared to rural sites. Accordingly, the risks posed by industrial or traffic pollution of locally grown food were defined as one of five high-priority issues of urban agriculture requiring further investigation. The impact of air pollution on urban horticulture is the subject of this study. More particularly, this study focuses on the atmospheric deposition of trace metals on leaf vegetables cultivated in the city of Ghent, Belgium. Ghent is a particularly interesting study site as it actively promotes urban agriculture. Plants accumulate heavy metals by absorption from contaminated soils and through deposition on parts exposed to polluted air. Accumulation of trace metals in vegetation grown near roads has been shown to be significantly higher than in vegetation grown in rural areas, due to traffic-related contaminants in the air. Studies of vegetables have demonstrated that the uptake and accumulation of trace metals differ among crop types, species, and plant parts. Studies on vegetables and fruit trees in Berlin, Germany, revealed significant differences in trace metal concentrations depending on local traffic, crop species, planting style and parameters related to barriers between the sampling site and neighboring roads. This study aims to supplement this scarce research on heavy metal accumulation in urban horticulture. Samples of leaf vegetables were collected from different sites in Ghent, including allotment gardens. The trace metal contents of these leaf vegetables were analyzed by ICP-MS (inductively coupled plasma mass spectrometry). In addition, precipitation at each sampling site was collected by NILU-type bulk collectors and similarly analyzed for trace metals. At one sampling site, different parameters which might influence the trace metal content of leaf vegetables were analyzed in detail.
These parameters are the distance of the planting site to the nearest road, barriers between the planting site and the nearest road, and the type of leaf vegetable. For comparison, a rural site, located farther from city traffic and industrial pollution, was included in the study. Preliminary results show a high correlation between the trace metal content of the atmospheric deposition and that of the leaf vegetables. Moreover, significantly higher Pb, Cu and Fe concentrations were found in spinach collected from Ghent compared to spinach collected from the rural site. The distance of the planting site to the nearest road significantly affected the accumulation of Pb, Cu, Mo and Fe on spinach: concentrations of those elements increased with decreasing distance between the planting site and the nearest road. Preliminary results did not show a significant effect of barriers between the planting site and the nearest road on the accumulation of trace metals on leaf vegetables. The overall goal of this study is to complete and refine existing guidelines for urban gardening to exclude potential health risks from food contamination. Accordingly, this information can help city governments and civil society in the professionalization and sustainable development of urban agriculture.
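The correlation claim above can be made concrete with a standard Pearson coefficient over paired site measurements. This is a generic sketch, not the authors' analysis code; the variable names and data pairing are assumptions for illustration.

```python
def pearson_r(x, y):
    """Pearson correlation between paired measurements, e.g. trace metal
    concentration in atmospheric deposition (x) vs. in leaf vegetables (y),
    one pair per sampling site."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

A value near 1 would support the reported finding that deposition-borne metals largely drive the vegetable concentrations, though significance testing would still be needed given the small number of sites.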

Keywords: atmospheric deposition, leaf vegetables, trace metals, traffic pollution, urban agriculture

Procedia PDF Downloads 212
31 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition

Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan

Abstract:

Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used for homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. Field data make clear that following is not as strict as in homogeneous, lane-disciplined conditions, and that neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, can influence the subject vehicle's choice of position, speed and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and they need to be modified appropriately. To address these issues, this paper attempts to analyze the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre types, and lateral placement characteristics on the time gap distribution is quantified in this paper. The developed model is used for evaluating various performance measures such as link speed, midblock delay and intersection delay, which further helps to characterize vehicular fuel consumption and emissions on the urban roads of India. Identifying and analyzing the exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling.
In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type and lateral position characteristics, in heterogeneous non-lane-based traffic. Once the modelling scheme is developed, it can be used to estimate the vehicle kilometres travelled across the traffic system, which helps to determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection; statistical modelling and parameter estimation; simulation using the calibrated time-gap distribution and its validation; empirical analysis of the simulation results and associated traffic flow parameters; and application to analyze illustrative traffic policies. In particular, videographic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity and area occupancy under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating the PCU (Passenger Car Unit) values, capacity and level of service of the system are also discussed.
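The fit-then-simulate loop described above, estimating a time-gap distribution from field data and using it to generate vehicles, can be sketched as follows. The abstract does not name the distribution family, so this sketch assumes a lognormal, a common choice for vehicle time gaps; the function names are ours.

```python
import numpy as np

def fit_lognormal_gaps(gaps):
    """Maximum-likelihood fit of a lognormal distribution to observed
    time gaps (seconds). Returns (mu, sigma) of the log-gaps."""
    logs = np.log(np.asarray(gaps, dtype=float))
    return float(logs.mean()), float(logs.std())

def sample_gaps(mu, sigma, n, rng=None):
    """Draw n synthetic time gaps from the fitted distribution,
    e.g. to drive vehicle generation in a microscopic simulation."""
    if rng is None:
        rng = np.random.default_rng()
    return np.exp(rng.normal(mu, sigma, size=n))
```

In the study's scheme, separate fits would be maintained per lead-lag vehicle-class pair, manoeuvre type and lateral position class, and a goodness-of-fit test would compare simulated against observed gaps.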

Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models

Procedia PDF Downloads 320
30 Older Consumer’s Willingness to Trust Social Media Advertising: An Australian Case

Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant

Abstract:

Social media networks have become the hotbed for advertising activities, due firstly to their increasing consumer/user base and secondly to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world's population now uses social media (4.8 billion people, or 60%), with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks by users grows, key business strategies used for interacting with these potential customers have matured, especially social media advertising. Unlike other traditional media outlets, social media advertising is highly interactive and digital channel-specific. Social media advertisements are clearly targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these social media networks. The purpose of this exploratory paper is to investigate the extent to which social media users trust social media advertising. Understanding this relationship will fundamentally assist marketers in better understanding social media interactions and their implications for society. Using a web-based quantitative survey instrument, participants were recruited via a reputable online panel survey site. Respondents represented social media users from all states and territories within Australia. Completed responses were received from a total of 258 social media users. Respondents represented all core age demographic groupings, including Gen Z/Millennials (18-45 years = 60.5% of respondents) and Gen X/Boomers (46-66+ years = 39.5% of respondents).
An adapted ADTRUST scale, using 20 items on a 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within different traditional media, such as broadcast and print media, and, more recently, the Internet (as a broader platform). The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution. These three factors were named: reliability; usefulness and affect; and willingness to rely on. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each of the factors had they been measured directly, with the following results recorded: Reliability = 4.68/7; Usefulness and Affect = 4.53/7; and Willingness to Rely On = 3.94/7. Further statistical analysis (independent-samples t-tests) determined the difference in factor scores between the two age groups (Gen Z/Millennials vs. Gen X/Boomers), with age used as the independent, categorical variable. The results showed the difference in mean scores across all three factors to be statistically significant (p<0.05) for these two core age groupings: Gen Z/Millennials Reliability = 4.90/7 vs. Gen X/Boomers Reliability = 4.34/7; Gen Z/Millennials Usefulness and Affect = 4.85/7 vs. Gen X/Boomers Usefulness and Affect = 4.05/7; and Gen Z/Millennials Willingness to Rely On = 4.53/7 vs. Gen X/Boomers Willingness to Rely On = 3.03/7. The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads when compared to younger, more social media-savvy consumers. This is especially evident with respect to Factor 3 (Willingness to Rely On), whose underlying variables reflect one's behavioural intent to act based on the information conveyed in advertising.
These findings can be useful to marketers, advertisers, and brand managers in that the results highlight a critical need to design ‘authentic’ advertisements on social media sites to better connect with these older users, in an attempt to foster positive behavioural responses from within this large demographic group – whose engagement with social media sites continues to increase year on year.
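The age-cohort comparison reported above can be made explicit in code. The helper below implements the pooled-variance independent-samples t statistic from first principles; the factor scores fed to it are invented for illustration and are not the study's ADTRUST data.

```python
import math
import statistics as st

def independent_t(a, b):
    """Pooled-variance Student's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# Invented "Willingness to Rely On" factor scores on the 7-point scale
gen_z_millennials = [4.8, 4.2, 5.1, 4.6, 4.9, 4.0, 4.7]
gen_x_boomers = [3.1, 2.8, 3.4, 2.9, 3.2, 3.0]

# A large positive t (compared against the t distribution with na+nb-2
# degrees of freedom) corresponds to the p < 0.05 differences reported.
print(round(independent_t(gen_z_millennials, gen_x_boomers), 2))
```

In practice one would use a statistics package (e.g. scipy.stats.ttest_ind) rather than hand-rolling the formula; the sketch only makes the reported test explicit.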

Keywords: social media advertising, trust, older consumers, online

Procedia PDF Downloads 59
29 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis

Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon

Abstract:

Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based: lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one approach or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. 
We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent – it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas with the relevant analyte. These typically involve many preparation and rinsing steps, and are susceptible to surface fouling. Our microfluidic device is continuously flowing and renewing the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.

Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles

Procedia PDF Downloads 363
28 Parallel Opportunity for Water Conservation and Habitat Formation on Regulated Streams through Formation of Thermal Stratification in River Pools

Authors: Todd H. Buxton, Yong G. Lai

Abstract:

Temperature management in regulated rivers can involve significant expenditures of water to meet the cold-water requirements of species in summer. For this purpose, flows released from Lewiston Dam on the Trinity River in Northern California are 12.7 m³/s, with temperatures around 11 °C, from July through September to provide adult spring Chinook with cold water for holding in deep pools while they mature until spawning in fall. The releases are more than double the flow of, and roughly 10 °C colder than, the natural conditions before the dam was built. The high, cold releases provide springers the habitat they require but may suppress the stream food base and limit future populations of salmon by reducing juvenile fish size and survival to adulthood, via the positive relationship between the two. Field and modeling research was undertaken to explore whether lowering summer releases from Lewiston Dam may promote thermal stratification in river pools, so that both the cold-water needs of adult salmon and the warmer-water requirements of other organisms in the stream biome may be met. For this investigation, a three-dimensional (3D) computational fluid dynamics (CFD) model was developed and validated with field measurements in two deep pools on the Trinity River. Modeling and field observations were then used to identify the flows and temperatures that may form and maintain thermal stratification under different meteorologic conditions. Under low flows, a pool was found to be well mixed and thermally homogeneous until temperatures began to stratify shortly after sunrise. Stratification then strengthened through the day until shading from trees and mountains cooled the inlet flow and decayed the thermal gradient, which collapsed shortly before sunset and returned the pool to a well-mixed state. This diurnal process of stratification formation and destruction was closely predicted by the 3D CFD model. 
Both the model and field observations indicate that thermal stratification maintained the coldest temperatures of the day at ≥2 m depth in a pool and provided water that was around 8 °C warmer in the upper 2 m of the pool. Results further indicate that the stratified pool under low flows provided almost the same daily average temperatures as when flows were an order of magnitude higher and stratification was prevented, indicating significant water savings may be realized in regulated streams while also providing the diversity in water temperatures the ecosystem requires. With confidence in the 3D CFD model, the model is now being applied to a dozen pools in the Trinity River to understand how pool bathymetry influences thermal stratification under variable flows and diurnal temperature variations. This knowledge will be used to expand the results to 52 pools in a 64 km reach below Lewiston Dam that meet the depth criterion (≥2 m) for spring Chinook holding. From this, rating curves will be developed to relate discharge to the volume of pool habitat that provides springers the temperature (<15.6 °C daily average), velocity (0.15 to 0.4 m/s) and depths that accommodate the escapement target for spring Chinook (6,000 adults) under maximum fish densities measured in other streams (3.1 m³/fish) during the holding time of year (May through August). Flow releases that meet these goals will be evaluated for water savings relative to the current flow regime and for their influence on indicator species, including the foothill yellow-legged frog, and on aspects of the stream biome that support salmon populations, including macroinvertebrate production and juvenile Chinook growth rates.
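The habitat rating described in the final sentences can be illustrated with a small screening function. The thresholds are taken directly from the abstract; the pool record itself is hypothetical.

```python
# Spring Chinook holding criteria as stated in the abstract
CRITERIA = {"max_temp_c": 15.6,                 # daily average temperature, deg C
            "min_v_ms": 0.15, "max_v_ms": 0.4,  # velocity band, m/s
            "min_depth_m": 2.0,                 # holding depth criterion
            "m3_per_fish": 3.1}                 # maximum observed fish density

def holding_capacity(pool):
    """Adult-fish capacity of a pool, or 0 if any holding criterion fails."""
    suitable = (pool["daily_avg_temp_c"] < CRITERIA["max_temp_c"]
                and CRITERIA["min_v_ms"] <= pool["velocity_ms"] <= CRITERIA["max_v_ms"]
                and pool["depth_m"] >= CRITERIA["min_depth_m"])
    return int(pool["volume_m3"] / CRITERIA["m3_per_fish"]) if suitable else 0

# Hypothetical pool: 620 cubic metres of water meeting all criteria
print(holding_capacity({"daily_avg_temp_c": 14.2, "velocity_ms": 0.3,
                        "depth_m": 3.5, "volume_m3": 620}))  # -> 200 adults
```

Summing such capacities over the 52 candidate pools at each modeled discharge is one way to build the discharge-to-habitat rating curves the abstract proposes.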

Keywords: 3D CFD modeling, flow regulation, thermal stratification, Chinook salmon, foothill yellow-legged frogs, water management

Procedia PDF Downloads 39
27 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks

Authors: Andrew D. Henshaw, James M. Austin

Abstract:

Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Human trafficking is difficult to address because numerous drivers, causes, and motivators allow it to persist, including non-military and non-traditional security challenges, e.g., climate change, global-warming displacement, and natural disasters. These make displaced persons and refugees particularly vulnerable. The issue is so large that conservative estimates put its dollar value at around $150 billion-plus per year (Niethammer, 2020), spanning sexual slavery and exploitation, forced labor in construction and mining, conflict roles, and the forced marriage of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated, as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw, A. & Austin, J., 2021), we now focus on the human trafficking paradigm. Utilizing the JAM methodology has identified key indicators of human trafficking not previously explored in depth. 
As a set of structured analytical techniques that provides panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of Open-Source Artificial Intelligence (OS-AI) across multiple collection points. The extracted behavioral data was then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyses data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements of success delivers potential leverage points for exploitation in the fight against human trafficking in a novel way. Findings: Our approach identified innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking networks. Our preliminary data suggests this is in part because 'dark agents' in extant research have been difficult to detect and much harder to connect directly to the actors and organizations in human trafficking networks. Our research demonstrates that using new investigative techniques such as the OS-AI-aided JAM introduces a powerful toolset to increase understanding of human trafficking and transnational crime and to illuminate networks that, to date, avoid global law enforcement scrutiny.

Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, dark money

Procedia PDF Downloads 54
26 Leveraging Information for Building Supply Chain Competitiveness

Authors: Deepika Joshi

Abstract:

Operations in the automotive industry rely greatly on information shared between supply chain (SC) partners, which leads to efficient and effective management of SC activity. The automotive sector in India is growing at 14.2 percent per annum and has huge economic importance. We find that no study has been carried out on the role of information sharing in the SC management of Indian automotive manufacturers. Considering this research gap, the present study establishes the significance of information sharing in Indian auto-component supply chain activity. An empirical study was conducted of large-scale auto-component manufacturers from India. Twenty-four supply chain performance indicators (SCPIs) were collected from the existing literature. These elements belong to eight diverse but internally related areas of SC management, viz., demand management, cost, technology, delivery, quality, flexibility, buyer-supplier relationship, and operational factors. A pair-wise comparison and an open-ended questionnaire were designed using these twenty-four SCPIs. The questionnaire was then administered among managerial-level employees of twenty-five auto-component manufacturing firms. The Analytic Network Process (ANP) technique was used to analyze the responses to the pair-wise questionnaire. Finally, twenty-five priority indexes were developed, one for each respondent, and averaged to generate an industry-specific priority index. The open-ended questions captured strategies related to information sharing between buyers and suppliers and their influence on supply chain performance. Results show that the impact of information sharing on certain performance indicators is relatively greater than on others. For example, flexibility-, delivery-, demand- and cost-related elements have a massive impact on information sharing. Technology is relatively less influenced by information sharing but immensely influences the quality of the information shared. 
Responses obtained from managers reveal that timely and accurate information sharing lowers cost and increases flexibility and the on-time delivery of auto parts, thereby enhancing the competitiveness of the Indian automotive industry. Any flaw in the dissemination of information can disturb the cycle time of both parties and thus increase the opportunity cost. Due to suppliers' involvement in decisions related to the design of auto parts, quality conformance is found to improve, leading to a reduction in rejection rates. Similarly, a mutual commitment to share the right information at the right time between all levels of the SC enhances trust. SC partners share information to perform comprehensive quality planning and to ingrain total quality management. This study contributes to the operations management literature, which faces a scarcity of empirical examination on this subject. It views information sharing as a building block that firms can promote and evolve to leverage the operational capability of all SC members. It will provide insights for Indian managers and researchers, as every market is unique and suppliers and buyers are driven by local laws, industry status and future vision. While the major emphasis in this paper is on SC operations between domestic partners, a greater focus on international SCs could yield distinct results.
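The priority-index step above can be illustrated in miniature. Deriving a priority vector from one pairwise-comparison matrix is the principal-eigenvector computation that AHP/ANP builds on; the 3x3 matrix below is a toy Saaty-scale example, not the study's 24-indicator data.

```python
def priority_vector(m, iters=100):
    """Principal eigenvector of a pairwise-comparison matrix via power iteration."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # normalize so the weights sum to 1
    return w

# Toy comparisons among three indicators, e.g. delivery vs. flexibility vs. cost:
# delivery judged 2x as important as flexibility and 4x as important as cost.
m = [[1, 2, 4],
     [1 / 2, 1, 2],
     [1 / 4, 1 / 2, 1]]
print([round(x, 3) for x in priority_vector(m)])  # -> [0.571, 0.286, 0.143]
```

A full ANP model also handles interdependence between clusters via a supermatrix; this fragment shows only the local priority computation behind each respondent's index.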

Keywords: Indian auto component industry, information sharing, operations management, supply chain performance indicators

Procedia PDF Downloads 527
25 Recrystallization Behavior and Microstructural Evolution of Nickel Base Superalloy AD730 Billet during Hot Forging at Subsolvus Temperatures

Authors: Marcos Perez, Christian Dumont, Olivier Nodin, Sebastien Nouveau

Abstract:

Nickel superalloys are used to manufacture high-temperature rotary engine parts such as high-pressure disks in gas turbine engines. High strength at high operating temperatures is required due to the levels of stress and heat the disk must withstand. Parts must therefore be made from materials that can maintain mechanical strength at high temperatures whilst remaining comparatively low in cost. A manufacturing process referred to as the triple-melt process has made the production of cast and wrought (C&W) nickel superalloys possible, meaning that the balance of cost and performance at high temperature may be optimized. AD730™ is a newly developed Ni-based superalloy for turbine disk applications, with reported superior service properties around 700 °C when compared to Inconel 718 and several other alloys. The cast ingot is converted into billet during either a cogging process or open-die forging. The semi-finished billet is then further processed into its final geometry by forging, heat treating, and machining. Conventional ingot-to-billet conversion is an expensive and complex operation, requiring a significant number of steps to break up the coarse as-cast structure and interdendritic regions. Due to the size of conventional ingots, it is difficult to achieve a uniformly high level of strain for recrystallization, resulting in regions that retain large unrecrystallized grains. Non-uniform grain distributions also affect the ultrasonic inspectability response, which is used to find defects in the final component. The main aim is to analyze the recrystallization behavior and microstructural evolution of AD730 at subsolvus temperatures from a semi-finished product (billet) under conditions representative of both cogging and hot forging operations. Special attention was paid to the presence of large unrecrystallized grains. 
Double truncated cones (DTCs) were hot forged at subsolvus temperatures in a hydraulic press, followed by air cooling. SEM and EBSD analyses were conducted in the as-received (billet) and as-forged conditions. AD730 billet material presents a complex microstructure characterized by a mixture of several constituents. Large unrecrystallized grains present a substructure characterized by large misorientation gradients, with the formation of medium- to high-angle boundaries in their interior, especially close to the grain boundaries, denoting an inhomogeneous strain distribution. A fine distribution of intragranular precipitates was found in their interior, playing a key role in strain distribution and the subsequent recrystallization behaviour during hot forging. A continuous dynamic recrystallization (CDRX) mechanism was found to operate in the large unrecrystallized grains, promoting the formation of intragranular DRX grains and the gradual recrystallization of these grains. Evidence was found that a hetero-epitaxial recrystallization mechanism operates in AD730 billet material: coherent γ-shells were observed around primary γ’ precipitates. However, no significant contribution to the overall recrystallization during hot forging was found. By contrast, strain has the strongest effect on the microstructural evolution of AD730, increasing the recrystallized fraction and refining the structure. Regions with a low level of deformation (ε ≤ 0.6) translated into large fractions of unrecrystallized structures (strain accumulation). The presence of undissolved secondary γ’ precipitates (pinning effect) prior to hot forging operations could explain these results.

Keywords: AD730 alloy, continuous dynamic recrystallization, hot forging, γ’ precipitates

Procedia PDF Downloads 177
24 Virtual Reference Service as a Space for Communication and Interaction: Providing Infrastructure for Learning in Times of Crisis at Uppsala University

Authors: Nadja Ylvestedt

Abstract:

Uppsala University Library is a geographically dispersed research library consisting of nine subject libraries located in different campus areas throughout the city of Uppsala. Despite the geographical dispersion, it is the library's ambition to be perceived as a cohesive library with consistently high service and quality. A key factor in being one cohesive library is the library's online services, especially the virtual reference service (VRS). E-mail, chat and phone are answered by a team of specially trained staff under the supervision of a team leader. When COVID-19 hit, well-established routines and processes to provide an infrastructure for students and researchers at the university changed radically. The strong connection between services provided at the library locations and at the VRS has been one of the key components of the library’s success in providing patrons with the help they need. With radically reduced availability at the physical locations, the infrastructure was at risk of collapsing. Objectives: The objective of this project has been to evaluate the consequences of the sudden change in the organization of the library. The focus of this evaluation is the library’s VRS as an important space for learning, interaction and communication between the library and the community when other, traditional spaces were not available. The goal of this evaluation is to capture the lessons learned from providing infrastructure for learning and research in times of crisis, both on a practical, user-centered level and by stressing the importance of leadership in ever-changing environments that supports and creates agile, flexible services and teams instead of rigid processes adhering to obsolete goals. Results: Reduced availability at the physical library locations was one of the strategies to prevent the spread of the COVID-19 virus. 
The library staff was encouraged to work from home, so student workers staffed the library’s physical locations during that time, leaving the VRS as the only place where patrons could get expert help. The VRS saw a 65% increase in questions asked between spring term 2019 and spring term 2020. The VRS team had to navigate often complicated and fast-changing new routines depending on national guidelines. The team has a strong emphasis on agility in its approach to these challenges and opportunities, with methods to evaluate decisions regularly with the user experience in mind. Fast decision-making, collecting feedback, an open-minded approach to reviewing rules and processes with both a short-term and a long-term focus, and providing a healthy work environment have been key factors in managing this crisis and learning from it. This rested on a strong sense of ownership of the VRS, well-working communication tools, and agile, active communication between team members, as well as between the team and the rest of the organization, which served as a second-line support system to aid the VRS team. Moving forward, the VRS has become an important space for communication and interaction and a provider of infrastructure, implementing new routines and more extensive availability due to the lessons learned during the crisis. The evaluation shows that the virtual environment has become an important addition to the physical spaces, existing in its own right but always in connection and relationship with the library structure as a whole. This shows that the basis of human interaction stays the same while its form morphs and adapts to change, leaving the virtual environment as a space of communication and infrastructure with unique opportunities for outreach and the potential to become a staple in patrons' education and learning.

Keywords: virtual reference service, leadership, digital infrastructure, research library

Procedia PDF Downloads 147
23 Translating the Australian National Health and Medical Research Council Obesity Guidelines into Practice into a Rural/Regional Setting in Tasmania, Australia

Authors: Giuliana Murfet, Heidi Behrens

Abstract:

Chronic disease is Australia’s biggest health concern, and obesity is the leading risk factor for many such diseases. Obesity and chronic disease have a higher representation in rural Tasmania, where levels of socio-economic disadvantage are also higher. People living outside major cities have less access to health services and poorer health outcomes. To help primary healthcare professionals manage obesity, the Australian NHMRC evidence-based clinical practice guidelines for the management of overweight and obesity in adults were developed. They include recommendations for practice and models for obesity management. To our knowledge, no research has investigated the translation of these guidelines into practice in rural-regional areas, where implementation can be complicated by limited financial and staffing resources. Moreover, the systematic review that informed the guidelines revealed a lack of evidence for chronic disease models of obesity care. The aim was to establish and evaluate a multidisciplinary model for obesity management in a group of adults with type 2 diabetes in a dispersed rural population in Australia. Extensive stakeholder engagement was undertaken both to garner support for an obesity clinic and to develop a sustainable model of care. A comprehensive nurse practitioner-led outpatient model for obesity care was designed. Multidisciplinary obesity clinics for adults with type 2 diabetes, including a dietitian, psychologist, physiotherapist and nurse practitioner, were set up in two geographically rural towns in the north-west of Tasmania. Implementation was underpinned by the NHMRC guidelines, and recommendations focused on: assessment approaches; promotion of the health benefits of weight loss; identification of relevant programs for individualising care; medication and bariatric surgery options for obesity management; and the importance of long-term weight management. 
A clinical pathway for adult weight management is delivered by the multidisciplinary team, with recognition of the impact of, and adjustments needed for, other comorbidities. The model allowed for intensification of intervention, such as bariatric surgery, according to recommendations, patient desires and suitability. A randomised controlled trial is ongoing, with the aim of evaluating standard care (diabetes-focused management) compared with an obesity-related approach with additional dietetic, physiotherapy, psychology and lifestyle advice. Key barriers and enablers to guideline implementation were identified under the following themes: 1) health care delivery changes and the project framework development; 2) capacity and team-building; 3) stakeholder engagement; and 4) the research project and partnerships. Engagement of not only the local hospital but also state-wide health executives and the surgical services committee was paramount to the success of the project. Staff training and the collective development of the framework allowed for shared understanding. Staff capacity was increased, with most taking on other activities (e.g., surgery coordination). Barriers were often related to differences of opinion on the focus of the project, such as a desire to remain evidence-based (e.g., in exercise prescription) without adjusting the model to allow for the consideration of comorbidities. While barriers existed and challenges had to be overcome, the development of critical partnerships enabled a potential model of obesity care for rural-regional areas. Importantly, the findings contribute to the evidence base for models of diabetes and obesity care that coordinate limited resources.

Keywords: diabetes, interdisciplinary, model of care, obesity, rural regional

Procedia PDF Downloads 207
22 Post Liberal Perspective on Minorities Visibility in Contemporary Visual Culture: The Case of Mizrahi Jews

Authors: Merav Alush Levron, Sivan Rajuan Shtang

Abstract:

From as early as their emergence in Europe and the US, the postmodern and post-colonial paradigms have formed the backbone of the visual culture field of study. The self-representation project of political minorities is studied, described and explained within the premises and perspectives drawn from these paradigms, addressing the key issue they raised: modernism’s crisis of representation. The struggle for self-representation, agency and multicultural visibility sought to challenge the liberal pretense of universality and equality, hitting at its different blind spots on issues such as class, gender, race, sex, and nationality. This struggle yielded subversive identity and hybrid performances, including reclaiming, mimicry and masquerading. These performances sought to defy the uniform, universal self, which forms the basis for the liberal, rational, enlightened subject. This research argues that this politics of representation is itself confined within liberal thought. Alongside post-colonialism and multiculturalism’s contribution to undermining oppressive structures of power, generating diversity in cultural visibility, and exposing the failure of liberal colorblindness, this subversion is constituted in the visual field by way of confrontation, flying in the face of the universal law and relying on its ongoing comparison and attribution to this law. Relying on Deleuze and Guattari, this research set out to draw theoretical and empirical attention to an alternative, post-liberal occurrence that has been taking place in the visual field in parallel to the contra-hegemonic phase and as a product of political reality in the aftermath of the crisis of representation. It is no longer a counter-representation; rather, it is a motion of organic minor desire, progressing in the form of flows and generating what Deleuze and Guattari termed deterritorialization of social structures. 
This discussion focuses on current post-liberal performances of ‘Mizrahim’ (Jewish Israelis of Arab and Muslim extraction) in the visual field in Israel. In television, video art and photography, these performances challenge the issue of representation and generate concrete peripheral Mizrahiness, realized in the visual organization of the photographic frame. Mizrahiness then transforms from ‘confrontational’ representation into a 'presence', flooding the visual sphere in our plain sight, in a process of 'becoming'. The Mizrahi desire is exerted on the planes of sound, spoken language, the body and the space where they appear. It removes from these planes the coding and stratification engendered by European dominance and rational, liberal enlightenment. This stratification, adhering to the hegemonic surface, is flooded not by way of resisting false consciousness or employing hybridity, but by way of the Mizrahi identity’s own productive, material, immanent yearning. The Mizrahi desire reverberates with Mizrahi peripheral 'worlds of meaning', in which post-colonial interpretation almost invariably identifies a product of internalized oppression, and a recurrence thereof, rather than a source in itself: an ‘offshoot, never a wellspring’, as Nissim Mizrachi clarifies in his recent pioneering work. The peripheral Mizrahi performance ‘unhooks itself’, in Deleuze and Guattari's words, from the point of subjectification and interpretation and does not correspond with the partialness, absence, and split that mark post-colonial identities.

Keywords: desire, minority, Mizrahi Jews, post-colonialism, post-liberalism, visibility, Deleuze and Guattari

Procedia PDF Downloads 300
21 Anajaa-Visual Substitution System: A Navigation Assistive Device for the Visually Impaired

Authors: Juan Pablo Botero Torres, Alba Avila, Luis Felipe Giraldo

Abstract:

Independent navigation and mobility through unknown spaces pose a challenge for the autonomy of visually impaired people (VIP), who have relied on traditional assistive tools like the white cane and trained dogs. However, emerging visually assistive technologies (VAT) have proposed several human-machine interfaces (HMIs) that could improve VIP’s capacity for self-guidance. Here, we introduce the design and implementation of a visually assistive device, Anajaa – Visual Substitution System (AVSS). This system integrates ultrasonic sensors with custom electronics and computer vision models (convolutional neural networks) to achieve a robust system that acquires information about the surrounding space and transmits it to the user in an intuitive and efficient manner. AVSS consists of two modules, the sensing module and the actuation module, which are fitted to a chest mount and belt, respectively, and communicate via Bluetooth. The sensing module was designed for the acquisition and processing of proximity signals provided by an array of ultrasonic sensors. The distribution of these sensors within the chest mount allows an accurate representation of the surrounding space, discretized into three different levels of proximity over a range of 0 to 6 meters. Additionally, this module is fitted with an RGB-D camera used to detect potentially threatening obstacles, such as staircases, using a convolutional neural network specifically trained for this purpose. The depth data is subsequently used to estimate the distance between the stairs and the user. The information gathered by this module is then sent to the actuation module, which creates an HMI by means of a 3x2 array of vibration motors that makes up the tactile display and allows the system to deliver haptic feedback. The actuation module uses vibrational messages (tactones), varying in both amplitude and frequency, to deliver different awareness levels according to the proximity of the obstacle. 
This enables the system to deliver an intuitive interface. Both modules were tested under lab conditions, and the HMI was additionally tested with a focus group of VIP. The lab testing was conducted to establish the processing speed of the computer vision algorithms. This experimentation determined that the model can process 0.59 frames per second (FPS); this is considered an adequate processing speed given that the walking speed of VIP is 1.439 m/s. To test the HMI, we assembled a focus group composed of two females and two males between the ages of 35 and 65 years. The subject selection was aided by the Colombian Cooperative of Work and Services for the Sightless (COOTRASIN). We analyzed the learning process of the haptic messages throughout five experimentation sessions using two metrics: message discrimination and localization success. These correspond to the ability of the subjects to recognize different tactones and to locate them within the tactile display; both were calculated as the mean across all subjects. Results show that the focus group achieved a message discrimination of 70% and a localization success of 80%, demonstrating how the proposed HMI leads to the appropriation and understanding of the feedback messages, enabling the user's awareness of their surrounding space.
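The adequacy of the measured frame rate can be checked with simple arithmetic: at 0.59 FPS and a walking speed of 1.439 m/s, the distance a user covers between processed frames sits well inside the 0-6 m sensing range. A minimal sketch of this check (the constant names are ours, not the authors'):

```python
# Back-of-the-envelope check of the processing-speed claim:
# how far does a user walk between two consecutively processed frames?

FPS = 0.59           # measured processing speed of the vision model
WALK_SPEED = 1.439   # reported VIP walking speed, in m/s
SENSING_RANGE = 6.0  # upper bound of the ultrasonic proximity range, in m

def metres_per_frame(fps: float, speed: float) -> float:
    """Distance covered by the user between consecutive processed frames."""
    return speed / fps

d = metres_per_frame(FPS, WALK_SPEED)
print(f"{d:.2f} m walked per processed frame")  # about 2.44 m
assert d < SENSING_RANGE  # an obstacle is detected well before it is reached
```

Since roughly 2.4 m per frame is well under the 6 m range, a detected obstacle still leaves at least one further frame of margin before the user reaches it.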

Keywords: computer vision on embedded systems, electronic travel aids, human-machine interface, haptic feedback, visual assistive technologies, vision substitution systems

Procedia PDF Downloads 53
20 Electromyographic Analysis of Biceps Brachii during Golf Swing and Review of Its Impact on Return to Play Following Tendon Surgery

Authors: Amin Masoumiganjgah, Luke Salmon, Julianne Burnton, Fahimeh Bagheri, Gavin Lenton, S. L. Ezekial Tan

Abstract:

Introduction: The incidence of proximal biceps tenodesis and acute distal biceps repair is increasing, and rehabilitation protocols following both are variable. Golf is a popular sport within Australia, and the Gold Coast has become a mecca for golfers, with more courses per capita than anywhere else in the world. Currently, there are no clear guidelines regarding return to golf play following biceps procedures. The aim of this study was to determine biceps brachii activation during the golf swing through electromyographic analysis and, subsequently, to aid rehabilitation guidelines and return to golf following tenodesis and repair. Methods: Subjects were amateur golfers with no previous upper limb surgery. Surface electromyography (EMG) and high-speed video recording were used to analyse activation of the left and right biceps brachii and the anterior deltoid during the golf swing. Each participant's maximum voluntary contraction (MVC) was recorded, and they were then required to hit a golf ball aiming for specific distances of 2, 50, 100 and 150 metres at a driving range. Noraxon myoResearch and Matlab were used for data analysis. Mean %MVC was calculated for leading and trailing arms during the full swing and its four phases: back-swing, acceleration, early follow-through and late follow-through. Results: 12 golfers (2 female and 10 male) participated in the study. Median age was 27 years (range 25–38), and all were right-handed. Over all distances, the mean activation of the short and long head of biceps brachii was < 10% through the full swing. When breaking down the 50, 100 and 150 m swings into phases, mean MVC activation was lowest in backswing (5.1%), followed by acceleration (9.7%), early follow-through (9.2%), and late follow-through (21.4%).
There was more variation and slightly higher activation in the right biceps (trailing arm) in backswing, acceleration, and early follow-through, with higher activation in the leading arm in late follow-through (25.4% leading, 17.3% trailing). 2 m putts resulted in low MVC values (3.1%) with little variation across swing phases. There was considerable individual variation in results: one tense subject averaged 11.0% biceps MVC through the 2 m putting stroke, and others recorded peak mean MVC biceps activations of 68.9% at 50 m, 101.3% at 100 m, and 111.3% at 150 m. Discussion: Previous studies have investigated the role of rotator cuff, spine, and hip muscles during the golf swing; however, to our knowledge, this is the first study to investigate the activation of biceps brachii. Many rehabilitation programs following a biceps tenodesis or repair allow active range against gravity and restrict strengthening exercises until 6 weeks, and this does not appear to be associated with any adverse outcome. Previous studies demonstrate that a range of < 10% MVC is similar to the unloaded biceps brachii during walking (1), that active elbow flexion with the hand positioned in either pronation or supination produces MVC < 20% throughout range (2), and that elbow flexion with a 4 kg dumbbell can produce mean MVCs of around 40% (3). Our study demonstrates that increasing activation is associated with the leading arm, increasing shot distance, and the late follow-through phase. Although the cohort mean MVC of the biceps brachii is < 10% through the full swing, variability is high, and biceps activation reached peak mean MVCs of over 100% in different swing phases for some individuals. Given these EMG values, caution is advised when advising patients post biceps procedures on returning to long-distance golf shots, particularly with the leading arm.
Even though it would appear that putting would be as safe as having an unloaded hand out of a sling following biceps procedures, the variability of activation patterns across different golfers would lead us to caution against accelerated golf rehabilitation in those who may be particularly tense golfers. The 50m short iron shot was too long to consider as a chip shot and more work can be done in this area to determine the safety of chipping.
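The %MVC normalization described in the Methods can be sketched as follows; the rectified-mean approach and the function names are our assumptions, since the abstract does not specify the exact EMG processing pipeline:

```python
def mean_percent_mvc(phase_samples, mvc_samples):
    """Mean rectified EMG amplitude of one swing phase, expressed as a
    percentage of the peak rectified amplitude from the MVC trial.
    (The study's exact smoothing/normalization is not stated; this is a
    common, simplified approach.)"""
    phase_mean = sum(abs(s) for s in phase_samples) / len(phase_samples)
    mvc_peak = max(abs(s) for s in mvc_samples)
    return 100.0 * phase_mean / mvc_peak

# Illustrative values only: a phase averaging half the MVC peak reads as 50 %MVC.
print(mean_percent_mvc([1.0, -1.0], [2.0, -2.0]))  # → 50.0
```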

Keywords: electromyographic analysis, biceps brachii rupture, golf swing, tendon surgery

Procedia PDF Downloads 57
19 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study

Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre

Abstract:

Precise landslide monitoring with differential global navigation satellite system (GNSS) techniques is well known, but technical or economic reasons limit its application by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers, plus one reference station for differential GNSS. A dedicated radio link connects all the modules to a base station, where an embedded computer automatically computes all the relative positions (L1 phase, open-source RTKLIB software) and populates an Internet server. Each measurement also contains information from an internal inclinometer, the battery level, and position quality indices. Contrary to standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and permits a real overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, ensuring autonomous long-term recordings. In this study, we tested the system on several sites in the Swiss mountains, setting up to 7 rovers per site, for an 18-month survey. The aim was to assess the robustness and accuracy of the system under different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes). The other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimeter accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component being half that of the vertical one. The performance (percentage of aborted solutions, outliers) was reduced with sessions shorter than 30 minutes.
The environment also had a strong influence on the percentage of aborted solutions (ambiguity search problem), due to multiple reflections or satellites obstructed by trees and mountains. The length of the baseline (reference-rover distance, single-baseline processing) reduced the accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system's robustness was limited: snow, avalanches, and frost covered some rovers, including the antenna and vertically oriented solar panels, leading to data interruptions; and strong wind damaged a reference station. The possibility of changing the session parameters remotely was very useful. In conclusion, the rover network tested provided the foreseen sub-centimeter accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey were time-saving. Performance strongly depends on surrounding conditions, but short preliminary measurements should allow moving a rover to a better final placement. The system offers a promising hazard mitigation technique. Improvements could include data post-processing for alerts and automatic modification of the duration and number of sessions based on battery level and rover displacement velocity.
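The optimal session parameters reported above lend themselves to a simple acceptance filter when post-processing solutions. A minimal sketch, where the dictionary field names are our assumptions and the quality flag follows the RTKLIB convention (1 = fixed ambiguity):

```python
def keep_session(sol, max_baseline_km=1.0, min_duration_min=60):
    """Accept a GNSS session only if it matches the parameters the study
    found optimal: baseline of 1 km or less, 60-minute duration, and a
    fixed-ambiguity solution (quality flag 1 in the RTKLIB convention)."""
    return (sol["baseline_km"] <= max_baseline_km
            and sol["duration_min"] >= min_duration_min
            and sol["quality"] == 1)

# Hypothetical sessions, for illustration:
sessions = [
    {"baseline_km": 0.8, "duration_min": 60, "quality": 1},  # kept
    {"baseline_km": 1.5, "duration_min": 60, "quality": 1},  # baseline too long
    {"baseline_km": 0.5, "duration_min": 20, "quality": 2},  # short, float solution
]
accepted = [s for s in sessions if keep_session(s)]
print(len(accepted))  # → 1
```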

Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter

Procedia PDF Downloads 93
18 Saving Lives from a Laptop: How to Produce a Live Virtual Media Briefing That Will Inform, Educate, and Protect Communities in Crisis

Authors: Cory B. Portner, Julie A. Grauert, Lisa M. Stromme, Shelby D. Anderson, Franji H. Mayes

Abstract:

Introduction: Washington state, in the Pacific Northwest of the United States, is internationally known for its technology industry, fisheries, agriculture, and vistas. On January 21, 2020, Washington also became known as the first state with a confirmed COVID-19 case in the United States, thrusting the state into the international spotlight as the world came to grips with the global threat this disease presented. Tourism is Washington state's fourth-largest industry: it generates over 1.8 billion dollars (USD) in local and state tax revenue and employs over 180,000 people. Communicating with residents, stakeholders, and visitors on the status of disease activity, prevention measures, and response updates was vital to stopping the pandemic and increasing compliance and awareness. Significance: In order to communicate vital public health updates, guidance implementation, and safety measures to the public, the Washington State Department of Health established routine live virtual media briefings to reach audiences via social media, internet television, and broadcast television. Through close partnership with regional broadcast news stations and the state public affairs news network, the Washington State Department of Health hosted 95 media briefings from January 2020 through September 2022 and continues to regularly host live virtual media briefings to accommodate the needs of the public and the media. Methods: Our methods quickly evolved from hosting briefings in the cement closet of a military base to producing and streaming the briefings live from any home-office location. The content was tailored to the hot topic of the day and to reporters' questions and needs. Virtual media briefings hosted through inexpensive or free online platforms are extremely cost-effective: the only mandatory components are WiFi, a laptop, and a monitor.
There is no longer a need for a fancy studio or expensive production software to achieve the goal of communicating credible, reliable information promptly. With minimal investment and a small learning curve, facilitators and panelists are able to host highly produced and engaging media availabilities from their living rooms. Results: The briefings quickly developed a reputation as the best source for local and national journalists to get the latest and most factually accurate information about the pandemic. At the height of the COVID-19 response, 135 unique media outlets logged on to participate in a briefing. The briefings typically featured 4-5 panelists, with as many as 9 experts in attendance to provide information and respond to media questions. Preparation was always a priority: Public Affairs staff for the Washington State Department of Health produced over 170 sets of presenter remarks, including talking-point guidance for 63 expert guest panelists. Implications for Practice: Information is today's most valuable currency. The ability to disseminate correct information urgently and on a wide scale is the most effective tool in crisis communication. As the first state with a confirmed COVID-19 case, we were forced to develop the most accurate and effective way to get life-saving information to the public. The cost-effective, web-based methods we developed can be applied in any crisis to educate and protect communities under threat, ultimately saving lives from a laptop.

Keywords: crisis communications, public relations, media management, news media

Procedia PDF Downloads 154
17 Effects of Applying Low-Dye Taping in Performing Double-Leg Squat on Electromyographic Activity of Lower Extremity Muscles for Collegiate Basketball Players with Excessive Foot Pronation

Authors: I. M. K. Ho, S. K. Y. Chan, K. H. P. Lam, G. M. W. Tong, N. C. Y. Yeung, J. T. C. Luk

Abstract:

Low-dye taping (LDT) is commonly used for treating foot problems, such as plantar fasciitis, and for supporting the foot arch in runners and non-athlete patients with pes planus. The potential negative impact of pronated feet leading to tibial and femoral internal rotation via the entire kinetic chain reaction has been postulated and identified. The changed lower limb biomechanics, potentially leading to poor activation of hip and knee stabilizers such as gluteus maximus and medius, may be associated with a higher risk of knee injuries, including patellofemoral pain syndrome and ligamentous sprain, in many team sports players. It is therefore speculated that foot arch correction with LDT might enhance the use of the gluteal muscles. The purpose of this study was to investigate the effect of applying LDT on the surface electromyographic (sEMG) activity of superior gluteus maximus (SGMax), inferior gluteus maximus (IGMax), gluteus medius (GMed) and tibialis anterior (TA) during the double-leg squat. 12 male collegiate basketball players (age: 21.7 ± 2.5 years; body fat: 12.4 ± 3.6%; navicular drop: 13.7 ± 2.7 mm) with at least three years of regular basketball training experience participated in this study. Participants were excluded if they had a recent history of lower limb injuries, over 16.6% body fat, or less than a 10 mm drop in the navicular drop (ND) test. Recruited subjects visited the laboratory once for the within-subject crossover study. Maximum voluntary isometric contraction (MVIC) tests on all selected muscles were performed in randomized order, followed by sEMG tests on the double-leg squat under LDT and non-LDT conditions in counterbalanced order. SGMax, IGMax, GMed and TA activities during the entire 2-second concentric and 2-second eccentric phases were normalized and reported as %MVIC. The magnitude of the difference between taped and non-taped conditions for each muscle was further assessed via the standardized effect ± 90% confidence interval (CI) with non-clinical magnitude-based inference.
A paired-samples t-test showed a significant decrease (4.7 ± 1.4 mm) in ND (95% CI: 3.8, 5.6; p < 0.05), while no significant difference was observed between taped and non-taped conditions in the sEMG tests for all muscles and contractions (p > 0.05). Beyond traditional significance testing, magnitude-based inference showed a possible increase in IGMax activity (small standardized effect: 0.27 ± 0.44), a likely increase in GMed activity (small standardized effect: 0.34 ± 0.34), and a possible increase in TA activity (small standardized effect: 0.22 ± 0.29) during the eccentric phase. It is speculated that the decrease in navicular drop supported by the LDT application could potentially enhance the use of the inferior gluteus maximus and gluteus medius, especially during the eccentric phase. As the eccentric phase of the double-leg squat is an important component of landing activities in basketball, further studies on the onset and amount of gluteal activation during jumping and landing activities with LDT are recommended. Since neither hip nor knee kinematics were measured in this study, the underlying cause of the observed increase in gluteal activation during the squat after LDT is inconclusive. In this regard, the investigation of relationships between LDT application, ND, hip and knee kinematics, and gluteal muscle activity during sport-specific jumping and landing tasks should be the focus of future work.
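The "standardized effect ± 90% CI" values above follow the magnitude-based-inference convention. A minimal sketch of how such a value can be computed from paired taped-vs-untaped differences; the normal critical value is used for simplicity, and the study's exact estimator is not stated, so all numbers below are illustrative only:

```python
from math import sqrt
from statistics import mean, stdev

def standardized_effect(paired_diffs, sd_between):
    """Cohen-type standardized effect: mean paired (taped - untaped)
    difference divided by a between-subject standard deviation."""
    return mean(paired_diffs) / sd_between

def ci90_halfwidth(paired_diffs, sd_between):
    """Approximate 90% CI half-width of the standardized effect, using
    the normal critical value 1.645 (a t-based interval would be wider)."""
    n = len(paired_diffs)
    se = stdev(paired_diffs) / (sqrt(n) * sd_between)
    return 1.645 * se

# Hypothetical per-subject %MVIC differences and between-subject SD:
diffs = [0.5, 1.0, 1.5]
sd = 2.0
print(f"{standardized_effect(diffs, sd):.2f} ± {ci90_halfwidth(diffs, sd):.2f}")
# → 0.50 ± 0.24
```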

Keywords: flat foot, gluteus maximus, gluteus medius, injury prevention

Procedia PDF Downloads 135
16 Thematic Analysis of Ramayana Narrative Scroll Paintings: A Need for Knowledge Preservation

Authors: Shatarupa Thakurta Roy

Abstract:

Alongside the limelight of mainstream academic practice in Indian art exists a significant body of habitual art practices that are mutually susceptible in their contemporary forms. The narrative folk paintings of regional India have successfully conveyed social messages to their audiences through pulsating pictures and orations. The paper presents images from narrative scroll paintings on the ‘Ramayana’ theme from various neighboring states and districts of India, describing their subtle differences in style of execution, method, and use of material. Despite sharing a common choice of subject matter, habitual and ceremonial Indian folk art in its formative phase thrived within isolated locations, yielding a remarkable variety of art styles. The differences in style took place district-wise, caste-wise, and even gender-wise. An open flow is evident only in contemporary expressions, as a result of substantial changes in social structures, modes of communication, cross-cultural exposure, and multimedia interactivity. To decipher the complex nature of the popular cultural taste of contemporary India, it is important to identify categorically its root in vernacular symbolism. The realization of modernity through European primitivism was elevated as a perplexed identity in the Indian cultural margin in the light of nationalist and postcolonial ideology. To trace the guiding factor that has still managed to retain ‘Indianness’ in today’s Indian art, researchers need evidence from the past that in most instances is yet to be catalogued. These works are commonly created on ephemeral foundations. The artworks are also often found in an endangered state and hence are not amenable to frequent handling. The museums lack proper technological guidelines to preserve them. Even though restoration activities are emerging in the country, the existing withered and damaged artworks are in danger of perishing.
An immediacy of digital archiving is therefore envisioned as an alternative means of saving this cultural legacy. The method of this study is twofold. It primarily justifies the richness of the evidence by conducting a categorical aesthetic analysis. The study is supported by comments on the stylistic variants, thematic aspects, and iconographic identities, alongside their anthropological and anthropomorphic significance. Further, it explores possible ways of cultural preservation to ensure cultural sustainability, including technological intervention in the form of digital transformation as an altered paradigm for better accessibility to the available resources. The study emphasizes visual description in order to culturally interpret and judge the rare visual evidence, following Feldman’s four-step method of formal analysis combined with thematic explanation. A habitual design that emerges and thrives within complex social circumstances may experience change, placing its guiding philosophy at risk through shuffling and alteration over time. A tradition that respires in the modern setup struggles to maintain the timeless values that drive its creative flow. Thus, the paper hypothesizes the survival and further growth of this practice within the dynamics of time and concludes by recognizing the urgency of transforming the implicitness of this knowledge into explicit records.

Keywords: aesthetic, identity, implicitness, paradigm

Procedia PDF Downloads 344