Search results for: gauge repeatability and reproducibility
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 438

48 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Difference Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue displacement in response to mechanical waves. The estimated metrics of tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of these groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and results in outcomes with patch artifacts. In this study, simulated wave images generated by Finite-Difference Time-Domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training offers the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (ranging from 1 kPa to 15 kPa) containing randomly positioned objects was simulated, and their corresponding wave images were generated.
The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method was compared against DI and LFE using the relative error (root mean square error, or RMSE, divided by the averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than the 78.20%±1.11% achieved by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method shows much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
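The relative-error metric described above (RMSE between the true and estimated shear-modulus maps, divided by the average true modulus) can be sketched in a few lines. The 4-pixel "maps" below are illustrative toy values, not data from the study:

```python
import math

def relative_error(true_map, est_map):
    """Relative error: RMSE between true and estimated shear-modulus
    maps, divided by the mean true modulus. Maps are flat lists of
    per-pixel values in kPa."""
    n = len(true_map)
    rmse = math.sqrt(sum((t - e) ** 2 for t, e in zip(true_map, est_map)) / n)
    return rmse / (sum(true_map) / n)

# Toy example: a 4-pixel "map" with small estimation errors.
true_mu = [5.0, 10.0, 15.0, 10.0]
est_mu = [5.2, 9.8, 14.5, 10.1]
print(round(relative_error(true_mu, est_mu) * 100, 2))  # relative error in %
```

A perfect estimator gives 0%; the 4.91% reported for the proposed method corresponds to an RMSE of roughly 5% of the mean modulus over the 1000-image test set.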

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 32
47 On-Site Coaching of Freshly Graduated Nurses to Improve Quality of Clinical Handover and to Avoid Clinical Error

Authors: Sau Kam Adeline Chan

Abstract:

The World Health Organization has listed 'Communication during Patient Care Handovers' as one of its top five patient safety initiatives. Clinical handover means the transfer of accountability and responsibility for clinical information from one health professional to another. The main goal of clinical handover is to convey the patient's current condition and treatment plan accurately. Ineffective communication at the point of care is globally regarded as the main cause of sentinel events. Situation, Background, Assessment and Recommendation (SBAR) is widely regarded as an effective communication tool in healthcare settings. Nonetheless, scenario-based programs in nursing school or workshops on SBAR alone are not enough for freshly graduated nurses to apply it competently in complex clinical practice. How much information, and in what depth, should be conveyed during the handover process is not easy to learn. As such, on-site coaching is essential to upgrade their expertise in the use of SBAR and, ultimately, to avoid clinical error. On-site coaching of all freshly graduated nurses on the use of SBAR in clinical handover commenced in August 2014. During the preceptorship period, freshly graduated nurses were coached by a preceptor. After that, they were gradually assigned to take care of a group of patients independently. Nurse leaders would join their shift handover process at the patient's bedside, and feedback and support were given accordingly. Discrepancies in their clinical handover process were shared with them and documented for further improvement work. Owing to constraints on nurse-leader manpower, about 30 coaching sessions were provided to each nurse in a year. A staff satisfaction survey was conducted to gauge their feelings about the coaching and to identify areas for further improvement. The number of clinical errors avoided was documented as well.
The nurses reported a significant improvement, particularly in their confidence and knowledge of the clinical handover process. In addition, a sense of empowerment developed when liaising with senior and experienced nurses. Their proficiency in applying SBAR was enhanced, and they became more alert to the critical criteria of an effective clinical handover. Most importantly, the accuracy of conveying the patient's condition improved and repetition of information was avoided. Clinical errors were prevented and quality patient care was ensured. Using SBAR as a communication tool looks simple, as the tool only provides a framework to guide the handover process. Nevertheless, without on-site training, loopholes in clinical handover persist, patient safety is affected, and clinical errors still happen.

Keywords: freshly graduated nurse, competency of clinical handover, quality, clinical error

Procedia PDF Downloads 124
46 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators

Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín

Abstract:

Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates in the air or accumulate in water supplies, soil, or vegetation. They may occur naturally in the environment as products of forest fires or volcanic eruptions. Since the industrial revolution, however, it has become necessary to closely monitor their generation as byproducts of manufacturing and combustion processes, in an effort to mitigate widespread contamination events. The environmental concentrations of these toxins are expected to be extremely low, so highly sensitive and accurate methods are required for their determination. Since ionization of non-polar compounds through electrospray and APCI is difficult and inefficient, we evaluated the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid mass spectrometry workflows. Air, soil and biota (vegetable matter) samples were collected monthly for one year from various locations in the vicinity of an industrial incinerator in Spain. Analytes were extracted by Soxhlet extraction in toluene and concentrated by rotary evaporation and nitrogen flow. Various ionization methods, such as electrospray (ESI) and atmospheric pressure chemical ionization (APCI), were evaluated; however, only the low-flow APPI source was capable of providing the sensitivity required for detecting all targeted analytes. In total, 10 analytes including 2,3,7,8-tetrachlorodibenzodioxin (TCDD) were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and addition of an oxygen, yielding [M-Cl+O]- ions.
MRM methods were created to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple quadrupole mass spectrometry was chosen because of its potential for high sensitivity and selectivity. The mass spectrometer used was a Sciex QTRAP 3200 operating in negative Multiple Reaction Monitoring (MRM) mode. Typical mass detection limits were determined to be near the 1-pg level. The APPI-MS2 technology applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can easily be improved using a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI source developed by SEADM allows ionization of polar and non-polar compounds with high efficiency and repeatability.

Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator

Procedia PDF Downloads 182
45 Deep Mill Level Zone (DMLZ) of the Ertsberg East Skarn System, Papua: Correlation between Structure and Mineralization to Determine the Orebody Characteristics of the DMLZ Mine

Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo

Abstract:

The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a mining zone in the lower part of the EESS that produces copper and gold. The DMLZ deposit is located below the Deep Ore Zone deposit, between the 3125 m and 2590 m elevations; it measures roughly 1,200 m in length and is between 350 and 500 m in width. Mining of the DMLZ was planned to start in Q2 2015, at an ore extraction rate of about 60,000 tpd using the block-cave mining method (the block cave contains 516 Mt). Mineralization and associated hydrothermal alteration in the DMLZ are hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominated by vein-style mineralization. The percentages of material mined at the DMLZ, compared with current reserves, are: diorite 46% (0.46% Cu, 0.56 ppm Au, 0.83% EqCu); skarn 39% (1.4% Cu, 0.95 ppm Au, 2.05% EqCu); hornfels 8% (0.84% Cu, 0.82 ppm Au, 1.39% EqCu); and marble 7%, possibly mined as waste. Correlating the Ertsberg intrusion, major structures, and vein-style mineralization is important for determining the characteristics of the orebody in the DMLZ Mine. In general, the Deep Mill Level Zone has two types of vein-filling mineralization across its two hosts (diorite and skarn): in the diorite host, the vein system is filled by chalcopyrite-bornite-quartz and pyrite; in the skarn host, the veins are filled by chalcopyrite-bornite-pyrite and magnetite, without quartz.
Based on orientation, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The Deep Mill Level Zone is controlled by two main major faults; geologists have found and verified local structures between the major structures, trending NW-SE and NE-SW, with characteristic slickensides, shearing, gouge, and water-gas channels, some of which have been re-healed.

Keywords: copper-gold, DMLZ, skarn, structure

Procedia PDF Downloads 475
44 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly the prediction of football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, followed by data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like SelectKBest and Recursive Feature Elimination (RFE) to pinpoint the attributes most pertinent to predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. Each model's performance will be evaluated using metrics such as Mean Squared Error (MSE) and R-squared to gauge its efficacy in predicting player performance. Furthermore, this investigation will encompass a top-player analysis to recognize the top-performing players based on the anticipated overall performance scores. The nationality analysis will scrutinize the player distribution by nationality and investigate potential correlations between nationality and player performance. The positional analysis will examine the player distribution across positions and assess the average performance of players in each position. The age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups.
The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
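The two evaluation metrics named above, MSE and R-squared, can be computed directly from predicted and true ratings. The sketch below uses hypothetical overall ratings, not data from the study:

```python
def mse(y_true, y_pred):
    """Mean Squared Error: average squared difference between
    true and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus the ratio of residual
    to total sum of squares; 1.0 means a perfect fit."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical overall ratings for five players.
ratings_true = [78, 85, 91, 70, 88]
ratings_pred = [80, 83, 90, 72, 86]
print(mse(ratings_true, ratings_pred), r_squared(ratings_true, ratings_pred))
```

In practice these would be computed on a held-out test split for each candidate model (Random Forest, SVR, etc.) to compare their efficacy.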

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 13
43 The Impact of Information and Communications Technology (ICT)-Enabled Service Adaptation on Quality of Life: Insights from Taiwan

Authors: Chiahsu Yang, Peiling Wu, Ted Ho

Abstract:

From emphasizing economic development to stressing public happiness, the international community mainly hopes to understand whether the public's quality of life is becoming better. The Better Life Index (BLI) constructed by the OECD uses living conditions and quality of life as starting points to cover 11 areas of life and convey the state of the general public's well-being. In light of the BLI framework, the Directorate General of Budget, Accounting and Statistics (DGBAS) of the Executive Yuan instituted the Gross National Happiness Index to understand the needs of the general public and to measure progress in the aforementioned conditions for residents across the island. Living conditions consist of income and wealth, jobs and earnings, and housing conditions, while quality of life covers health status, work-life balance, education and skills, social connections, civic engagement and governance, environmental quality, and personal security. The ICT area consists of health care, living environment, ICT-enabled communication, transportation, government, education, pleasure, purchasing, and jobs and employment. In the wake of further science and technology development, the rapid formation of information societies, and closer integration between lifestyles and information societies, the public's well-being within information societies has become a noteworthy topic. The Board of Science and Technology of the Executive Yuan used the OECD's BLI as a reference in establishing the Taiwan-specific ICT-Enabled Better Life Index. Using this index, the government plans to examine whether the public's quality of life is improving and to measure the public's satisfaction with current digital quality of life. This understanding will enable the government to gauge the degree of influence and impact that each dimension of digital services has on digital life happiness, while also serving as an important reference for promoting digital service development.
This paper presents the content of the ICT-Enabled Better Life Index. Information and communications technology (ICT) has been affecting people's lifestyles and, further, their quality of life (QoL). Even though studies have shown that ICT access and usage have both positive and negative impacts on life satisfaction and well-being, many governments continue to invest in e-government programs to initiate their path to the information society. This research is one of the few attempts to link e-government benchmarks to subjective well-being perception, to address the gap between users' perceptions and existing hard-data assessments, and to propose a model that traces measurement results back to the original public policy so that policy makers can justify their future proposals.

Keywords: information and communications technology, quality of life, satisfaction, well-being

Procedia PDF Downloads 324
42 Operation System for Aluminium-Air Cell: A Strategy to Harvest the Energy from Secondary Aluminium

Authors: Binbin Chen, Dennis Y. C. Leung

Abstract:

The aluminium (Al)-air cell holds a high volumetric capacity density of 8.05 Ah cm-3, benefiting from the trivalence of Al ions. Additional benefits of the Al-air cell are its low price and environmental friendliness. Furthermore, the Al energy conversion process is characterized by 100% recyclability in theory. Along with a large reserve of raw material, Al has attracted considerable attention as a promising material to be integrated into the global energy system. However, despite early successful applications in military services, several problems prevent Al-air cells from widespread civilian use. The most serious issue is the parasitic corrosion of Al when it contacts the electrolyte. To overcome this problem, super-pure Al alloyed with various trace metal elements is used to increase corrosion resistance. Nevertheless, high-purity Al alloys are costly and require high energy consumption during production. An alternative approach is to add inexpensive inhibitors directly into the electrolyte. However, such additives increase the internal ohmic resistance and hamper cell performance. So far, these methods have not provided satisfactory solutions to the problems of Al-air cells. For the operation of an alkaline Al-air cell, there are still other minor problems. One is the formation of aluminium hydroxide in the electrolyte, which decreases the ionic conductivity of the electrolyte. Another is the carbonation process within the gas diffusion layer of the cathode, which blocks the porosity for gas diffusion. Both of these hinder cell performance. The present work addresses the above problems by building an Al-air cell operation system consisting of four components. A top electrolyte tank containing fresh electrolyte is located at a high level, so that it can drive the electrolyte flow by gravity.
A mechanically rechargeable Al-air cell was fabricated with low-cost materials including low-grade Al, carbon paper, and PMMA plates. An electrolyte waste tank with an elaborate channel was designed to separate the hydrogen generated by corrosion, which was collected by a gas collection device. In the first part of the work, we investigated the performance of the mechanically rechargeable Al-air cell with a constant flow rate of electrolyte, to ensure the repeatability of the experiments. Then the whole system was assembled and the feasibility of its operation was demonstrated. During the experiments, pure hydrogen was collected by the collection device, which holds potential for various applications. By collecting this by-product, high utilization efficiency of the aluminium is achieved. Considering both the electricity and the hydrogen generated, an overall utilization efficiency of around 90% or even higher is achieved under different working voltages. The fluidic electrolyte removes aluminium hydroxide precipitate and solves the electrolyte deterioration problem. This operation system provides a low-cost strategy for harvesting energy from abundant secondary Al. The system could also be applied to other metal-air cells and is suitable for emergency power supplies, power plants, and other applications. The low-cost feature implies great potential for commercialization. Further optimization, such as scaling up and refinement of fabrication, will help turn the technology into practical market offerings.
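The overall utilization figure (electricity delivered plus hydrogen collected, relative to the aluminium's theoretical capacity) can be illustrated with a back-of-envelope Faraday's-law calculation. This is only a sketch of the accounting, and all input numbers below are hypothetical, not measurements from the study:

```python
F = 96485.0    # Faraday constant, C/mol
M_AL = 26.98   # molar mass of aluminium, g/mol

def overall_utilization(m_al_g, charge_out_c, h2_litres_stp):
    """Fraction of the aluminium's theoretical 3-electron capacity
    recovered as external charge plus collected hydrogen.
    Each mol of Al can release 3 electrons; each mol of H2 from
    parasitic corrosion accounts for 2 electrons."""
    q_theory = (m_al_g / M_AL) * 3 * F            # total available charge
    q_h2 = (h2_litres_stp / 22.414) * 2 * F       # ideal gas at STP
    return (charge_out_c + q_h2) / q_theory

# Hypothetical run: 1 g Al consumed, 8000 C delivered, 0.25 L H2 collected.
print(round(overall_utilization(1.0, 8000.0, 0.25), 3))
```

Counting the hydrogen's charge equivalent alongside the delivered electricity is what lifts the overall efficiency toward the ~90% range quoted in the abstract, even when parasitic corrosion is significant.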

Keywords: aluminium-air cell, high efficiency, hydrogen, mechanical recharge

Procedia PDF Downloads 251
41 Cost Efficient Receiver Tube Technology for Eco-Friendly Concentrated Solar Thermal Applications

Authors: M. Shiva Prasad, S. R. Atchuta, T. Vijayaraghavan, S. Sakthivel

Abstract:

The world needs efficient energy conversion technologies that are affordable, accessible, sustainable, and eco-friendly. Solar energy is one of the cornerstones of the world's economic growth because of its abundance and zero carbon pollution. Among the various solar energy conversion technologies, solar thermal technology has attracted substantial renewed interest due to its diversity and compatibility with various applications. Solar thermal systems employ concentrators, tracking systems, and heat engines for electricity generation, which leads to higher cost and complexity in comparison with photovoltaics; however, their inherent thermal energy storage capability and dispatchable electricity are tremendously attractive. Moreover, employing a cost-effective solar-selective receiver tube in a concentrating solar thermal (CST) system improves the energy conversion efficiency and directly reduces the cost of the technology. In addition, developing solar receiver tubes by low-cost methods that offer high optical performance and corrosion resistance in an open-air atmosphere would be beneficial for low- and medium-temperature applications. In this regard, our work opens up an approach with the potential to achieve cost-effective energy conversion. We have developed a highly selective tandem absorber coating through a facile wet-chemical route combining chemical oxidation, sol-gel, and nanoparticle coating methods. The developed tandem absorber coating has a gradient-refractive-index structure on stainless steel (SS 304) and exhibited high optical performance (α ≥ 0.95 and ε ≤ 0.14). The first absorber layer (Cr-Mn-Fe oxides) was developed by controlled oxidation of SS 304 in a chemical bath reactor. A second composite layer of ZrO2-SiO2 was applied on the chemically oxidized substrate by a sol-gel dip-coating method to serve as an optical-enhancement and corrosion-resistant layer.
Finally, an antireflective layer (MgF2) was deposited on the second layer to achieve absorption above 95%. The developed tandem layer exhibited good thermal stability up to 250 °C under open-air atmospheric conditions and superior corrosion resistance (withstanding > 200 h in the salt spray test, ASTM B117). After the successful development of a coating with the targeted properties at laboratory scale, a prototype 1 m tube was demonstrated with excellent uniformity and reproducibility. Moreover, it was validated under standard laboratory test conditions as well as in field conditions, in comparison with a commercial receiver tube. The presented strategy can be widely adapted to develop highly selective coatings for a variety of CST applications, ranging from hot water and solar desalination to industrial process heat and power generation. The high-performance, cost-effective medium-temperature receiver tube technology has attracted many industries, and the technology has recently been transferred to Indian industry.
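Why high absorptance combined with low thermal emittance matters can be seen from an idealised photothermal conversion efficiency estimate (radiative losses only, convection and conduction neglected). The coating's quoted optical values are used below, but the concentration ratio and solar flux are assumptions for illustration, not values from the abstract:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def photothermal_efficiency(alpha, eps, t_abs_k, t_amb_k, conc, g_wm2):
    """Idealised efficiency of a selective absorber: absorbed fraction
    of incident flux minus re-radiated losses per unit of concentrated
    flux. conc is the concentration ratio, g_wm2 the solar flux."""
    rad_loss = eps * SIGMA * (t_abs_k ** 4 - t_amb_k ** 4)
    return alpha - rad_loss / (conc * g_wm2)

# Coating values ~0.95 / 0.14; 250 C absorber, 25 C ambient,
# assumed concentration ratio 30 and 1000 W/m^2 flux.
print(round(photothermal_efficiency(0.95, 0.14, 523.0, 298.0, 30.0, 1000.0), 3))
```

At these conditions the low emittance keeps radiative losses to a few percent of the concentrated flux, so the efficiency stays close to the absorptance itself; a non-selective black coating (ε near 1) would lose several times more.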

Keywords: concentrated solar thermal system, solar selective coating, tandem absorber, ultralow refractive index

Procedia PDF Downloads 71
40 Oscillating Water Column Wave Energy Converter with Deep Water Reactance

Authors: William C. Alexander

Abstract:

The oscillating water column (OWC) wave energy converter (WEC) with deep water reactance (DWR) consists of a large hollow sphere filled with seawater at the base, referred to as the 'stabilizer'; a hollow cylinder at the top of the device, said cylinder having a bottom open to the sea and a sealed top save for an orifice which leads to an air turbine; and a long, narrow rod connecting said stabilizer with said cylinder. A small amount of ballast at the bottom of the stabilizer and a small amount of flotation in the cylinder keep the device upright in the sea. The flotation is set such that the mean water level is nominally halfway up the cylinder. The entire device is loosely moored to the seabed to keep it from drifting away. In the presence of ocean waves, seawater moves up and down within the cylinder, producing the 'oscillating water column'. This gives rise to air pressure within the cylinder alternating between positive and negative gauge pressure, which in turn causes air to alternately leave and enter the cylinder through said top-cover orifice. An air turbine situated within or immediately adjacent to said orifice converts the oscillating airflow into electric power for transport to shore or elsewhere by electric power cable. Said oscillating air pressure produces large up and down forces on the cylinder. Said large forces are reacted through the rod against the large mass of water retained within the stabilizer, which is located deep enough to be mostly free of any wave influence and which provides the deep-water reactance. The cylinder and stabilizer form a spring-mass system which has a vertical (heave) resonant frequency. The diameter of the cylinder largely determines the power rating of the device, while the size (and water mass within) of the stabilizer determines said resonant frequency.
Said frequency is chosen to be on the lower end of the wave frequency spectrum to maximize the average power output of the device over a large span of time (such as a year). The upper portion of the device (the cylinder) moves laterally (surge) with the waves. This motion is accommodated with minimal loading on said rod by having the stabilizer shaped like a sphere, allowing the entire device to rotate about the center of the stabilizer without rotating the seawater within the stabilizer. A full-scale device of this type may have the following dimensions: the cylinder may be 16 meters in diameter and 30 meters high, the stabilizer 25 meters in diameter, and the rod 55 meters long. Simulations predict that this will produce 1,400 kW in waves of 3.5-meter height and 12-second period, with a relatively flat power curve between 5- and 16-second wave periods, as is suitable for an open-ocean location. This is nominally 10 times higher power than similar-sized WEC spar buoys as reported in the literature, and the device is projected to have only 5% of the mass per unit power of other OWC converters.
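The spring-mass picture described above can be sanity-checked with a textbook heave-resonance estimate using the quoted dimensions: the hydrostatic stiffness of the cylinder's waterplane acts as the spring, and the water trapped in the spherical stabilizer as the mass. Added mass, the rod, and structural mass are neglected, so this is only a rough sketch:

```python
import math

RHO = 1025.0  # seawater density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def heave_period(cyl_diam_m, stab_diam_m):
    """Natural heave period of the cylinder-stabilizer pair:
    T = 2*pi*sqrt(m/k) with k = rho*g*A (waterplane stiffness)
    and m = mass of water filling the spherical stabilizer."""
    area = math.pi * (cyl_diam_m / 2) ** 2                       # waterplane area
    k = RHO * G * area                                           # N/m
    m = RHO * (4.0 / 3.0) * math.pi * (stab_diam_m / 2) ** 3     # kg
    return 2 * math.pi * math.sqrt(m / k)

# Dimensions quoted in the abstract: 16 m cylinder, 25 m stabilizer.
print(round(heave_period(16.0, 25.0), 1))  # seconds
```

With the quoted dimensions this simple estimate gives a natural period of roughly 12.8 s, consistent with the 12-second design wave period and the stated choice of resonance at the low-frequency end of the wave spectrum.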

Keywords: oscillating water column, wave energy converter, spar buoy, stabilizer

Procedia PDF Downloads 78
39 Numerical Modelling of the Influence of Meteorological Forcing on Water-Level in the Head Bay of Bengal

Authors: Linta Rose, Prasad K. Bhaskaran

Abstract:

Water-level information along the coast is very important for disaster management, navigation, shoreline management planning, coastal engineering and protection works, port and harbour activities, and a better understanding of near-shore ocean dynamics. The water-level variation along a coast derives from various factors such as astronomical tides and meteorological and hydrological forcing. The study area is the Head Bay of Bengal, which is highly vulnerable to flooding events caused by monsoons, cyclones and sea-level rise. The study aims to explore the extent to which wind and surface pressure can influence water-level elevation, in view of the low-lying topography of the coastal zones in the region. The ADCIRC hydrodynamic model was customized for the Head Bay of Bengal, discretized using flexible finite elements and validated against tide gauge observations. Monthly mean climatological wind and mean sea level pressure fields from the ERA-Interim reanalysis were used as input forcing to simulate water-level variation in the Head Bay of Bengal, in addition to tidal forcing. The output water level was compared against that produced using tidal forcing alone, so as to quantify the contribution of meteorological forcing to the water level. The average contribution of the meteorological fields to the water level in January is 5.5% at a deep-water location and 13.3% at a coastal location. During July, when the monsoon winds are strongest in this region, this increases to 10.7% and 43.1%, respectively, at the deep-water and coastal locations. The model was also run with varied input meteorological fields in an attempt to quantify the relative significance of wind speed and wind direction on the water level. Under uniform wind conditions, the results showed a higher contribution of the meteorological fields for south-west winds than for north-east winds at higher wind speeds.
A comparison of the spectral characteristics of the output water level with that generated by tidal forcing alone showed additional modes with seasonal and annual signatures. Moreover, the non-linear monthly mode was found to be weaker than in the tidal-only simulation. Both findings indicate that the meteorological fields have little effect on the water level at periods of less than a day, and that they induce non-linear interactions between existing modes of oscillation. The study highlights the role of meteorological forcing under fair-weather conditions and points out that a combination of multiple forcing fields, including tides, wind, atmospheric pressure, waves, precipitation and river discharge, is essential for efficient and effective forecast modelling, especially during extreme weather events.
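The percentage contributions quoted above come, in principle, from differencing a fully forced run against a tide-only run at each output point. A minimal sketch of that bookkeeping, with hypothetical elevations rather than model output:

```python
def met_contribution_pct(wl_full, wl_tide_only):
    """Percent of the simulated water level attributable to
    meteorological forcing, taken as the residual between the full
    (tide + wind + pressure) run and a tide-only run."""
    return abs(wl_full - wl_tide_only) / abs(wl_full) * 100.0

# Hypothetical elevations (m) at a coastal node during the monsoon.
print(round(met_contribution_pct(1.25, 0.90), 1))  # percent
```

Averaging this residual over a month at a given node would yield figures comparable to the 13.3% (January, coastal) and 43.1% (July, coastal) contributions reported above.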

Keywords: ADCIRC, Head Bay of Bengal, mean sea level pressure, meteorological forcing, water-level, wind

Procedia PDF Downloads 192
38 Modeling of Tsunami Propagation and Impact on West Vancouver Island, Canada

Authors: S. Chowdhury, A. Corlett

Abstract:

Large tsunamis strike the British Columbia coast every few hundred years. The Cascadia Subduction Zone, which extends along the Pacific coast from Vancouver Island to Northern California, is one of the most seismically active regions in Canada. Significant earthquakes have occurred in this region, including the 1700 Cascadia earthquake with an estimated magnitude of 9.2. Based on geological records, experts have predicted that a 'great earthquake' of a similar magnitude within this region may happen at any time. This earthquake is expected to generate a large tsunami that could impact the coastal communities on Vancouver Island. Since many of these communities are in remote locations, they are more likely to be vulnerable, as post-earthquake relief efforts would be impacted by damage to critical road infrastructure. To assess the coastal vulnerability within these communities, a hydrodynamic model has been developed using MIKE-21 software. We have considered a 500-year probabilistic earthquake design criterion, including subsidence, in this model. The bathymetry information was collected from the Canadian Hydrographic Service (CHS) and the National Oceanic and Atmospheric Administration (NOAA). An aerial survey of the communities was conducted using a Cessna-172 aircraft, and the information was then converted to generate a topographic digital elevation map. Both sets of survey information were incorporated into the model, and the domain size of the model was about 1000 km x 1300 km. The model was calibrated against the tsunami that occurred off the west coast of Moresby Island on October 28, 2012. The water levels from the model were compared with two tide gauge stations close to Vancouver Island, and the model output indicates satisfactory agreement. For this study, the design water level was taken as the High Water Level plus the projected sea level rise to the year 2100.
The hourly wind speeds from eight directions were collected from different wind stations, and a 200-year return period wind speed was used in the model for storm events. The regional model was set for a 12-hour simulation period, which takes more than 16 hours per simulation on a dual Xeon E7 CPU computer with a K80 GPU. The boundary information for the local model was generated from the regional model. The local model was developed using a high-resolution mesh to estimate the coastal flooding for the communities. It was observed from this study that many communities will be affected by the Cascadia tsunami, and inundation maps were developed for the communities. The infrastructure inside the coastal inundation area was identified. Coastal vulnerability planning and resilient design solutions will be implemented to significantly reduce the risk.

Keywords: tsunami, coastal flooding, coastal vulnerability, earthquake, Vancouver, wave propagation

Procedia PDF Downloads 107
37 Artificial Intelligence Based Method in Identifying Tumour Infiltrating Lymphocytes of Triple Negative Breast Cancer

Authors: Nurkhairul Bariyah Baharun, Afzan Adam, Reena Rahayu Md Zin

Abstract:

The tumor microenvironment (TME) in breast cancer is mainly composed of cancer cells, immune cells, and stromal cells. The interaction between cancer cells and their microenvironment plays an important role in tumor development, progression, and treatment response. The TME in breast cancer includes tumor-infiltrating lymphocytes (TILs) that are implicated in killing tumor cells. TILs can be found in the tumor stroma (sTILs) and within the tumor (iTILs). TILs in triple negative breast cancer (TNBC) have been demonstrated to have prognostic and potentially predictive value. The International Immuno-Oncology Biomarker Working Group (TIL-WG) has developed a guideline focused on the assessment of sTILs using hematoxylin and eosin (H&E)-stained slides. According to the guideline, pathologists use an 'eyeballing' method on the H&E-stained slide for sTILs assessment. This method has low precision and poor interobserver reproducibility, is time-consuming for a comprehensive evaluation, and counts only sTILs. The TIL-WG has therefore recommended that any algorithm for computational assessment of TILs utilize the guidelines provided, to overcome the limitations of manual assessment and thus provide highly accurate and reliable TILs detection and classification for reproducible and quantitative measurement. This study was carried out to develop a TNBC digital whole slide image (WSI) dataset from H&E-stained slides and IHC (CD4+ and CD8+) stained slides. TNBC cases were retrieved from the database of the Department of Pathology, Hospital Canselor Tuanku Muhriz (HCTM). TNBC cases diagnosed between the years 2010 and 2021 with no history of other cancer and with available tissue blocks were included in the study (n=58). Tissue blocks were sectioned at approximately 4 µm for H&E and IHC staining. The H&E staining was performed according to a well-established protocol.
Indirect IHC staining was also performed on the tissue sections using the protocol from the Diagnostic BioSystems PolyVue™ Plus Kit, USA. The slides were stained with a rabbit monoclonal CD8 antibody (SP16) and a rabbit monoclonal CD4 antibody (EP204). The selected and quality-checked slides were then scanned using a high-resolution whole slide scanner (Pannoramic DESK II DW slide scanner) to digitalize the tissue image at 20x magnification. A manual TILs (sTILs and iTILs) assessment was then carried out by two appointed pathologists, who scored TILs from the digital WSIs following the guideline developed by the TIL-WG in 2014, with the result expressed as the percentage of sTILs and iTILs per mm² of stromal and tumour area on the tissue. Following this, we aimed to develop an automated digital image scoring framework that incorporates key elements of the manual guidelines (including both sTILs and iTILs), using manually annotated data for robust and objective quantification of TILs in TNBC. From the study, we have developed a digital dataset of TNBC H&E and IHC (CD4+ and CD8+) stained slides. We hope that an automated scoring method can provide quantitative and interpretable TILs scoring that correlates with the manual pathologist-derived sTILs and iTILs scoring and thus has potential prognostic implications.

Keywords: automated quantification, digital pathology, triple negative breast cancer, tumour infiltrating lymphocytes

Procedia PDF Downloads 80
36 Strategies for Public Space Utilization

Authors: Ben Levenger

Abstract:

Social life revolves around a central meeting place or gathering space. It is where the community integrates, earns social skills, and ultimately becomes part of the community. Following this premise, public spaces are one of the most important spaces that downtowns offer, providing locations for people to be witnessed, heard, and most importantly, seamlessly integrate into the downtown as part of the community. To facilitate this, these local spaces must be envisioned and designed to meet the changing needs of a downtown, offering a space and purpose for everyone. This paper will dive deep into analyzing, designing, and implementing public space design for small plazas or gathering spaces. These spaces often require a detailed level of study, followed by a broad stroke of design implementation, allowing for adaptability. This paper will highlight how to assess needs, define needed types of spaces, outline a program for spaces, detail elements of design to meet the needs, assess your new space, and plan for change. This study will provide participants with the necessary framework for conducting a grass-roots-level assessment of public space and programming, including short-term and long-term improvements. Participants will also receive assessment tools, sheets, and visual representation diagrams. Urbanism, for the sake of urbanism, is an exercise in aesthetic beauty. An economic improvement or benefit must be attained to solidify these efforts' purpose further and justify the infrastructure or construction costs. We will deep dive into case studies highlighting economic impacts to ground this work in quantitative impacts. 
These case studies will highlight the financial impact on an area, measuring the following metrics: rental rates (per sq meter), tax revenue generation (sales and property), foot traffic generation, increased property valuations, currency expenditure by tenure, clustered development improvements, cost/valuation benefits of increased density in housing. The economic impact results will be targeted by community size, measuring in three tiers: Sub 10,000 in population, 10,001 to 75,000 in population, and 75,000+ in population. Through this classification breakdown, the participants can gauge the impact in communities similar to their work or for which they are responsible. Finally, a detailed analysis of specific urbanism enhancements, such as plazas, on-street dining, pedestrian malls, etc., will be discussed. Metrics that document the economic impact of each enhancement will be presented, aiding in the prioritization of improvements for each community. All materials, documents, and information will be available to participants via Google Drive. They are welcome to download the data and use it for their purposes.

Keywords: downtown, economic development, planning, strategic

Procedia PDF Downloads 41
35 Building Community through Discussion Forums in an Online Accelerated MLIS Program: Perspectives of Instructors and Students

Authors: Mary H Moen, Lauren H. Mandel

Abstract:

Creating a sense of community in online learning is important for student engagement and success. The integration of discussion forums within online learning environments presents an opportunity to explore how this computer-mediated communications format can cultivate a sense of community among students in accelerated master's degree programs. This research has two aims: to delve into the ways instructors utilize this communications technology to create community, and to understand the feelings and experiences of graduate students participating in these forums with regard to their effectiveness in community building. This study takes a two-phase approach encompassing qualitative and quantitative methodologies. The data will be collected at an online accelerated Master of Library and Information Studies program at a public university in the northeast of the United States. Phase 1 is a content analysis of the syllabi from all courses taught in the 2023 calendar year, which explores the format and rules governing discussion forum assignments. Four to six individual interviews of department faculty and part-time faculty will also be conducted to illuminate their perceptions of the successes and challenges of their discussion forum activities. Phase 2 will be an online survey administered to students in the program during the 2023 calendar year. Quantitative data will be collected for statistical analysis, and short-answer responses will be analyzed for themes. The survey is adapted from the Classroom Community Scale Short-Form (CCS-SF), which measures students' self-reported feelings of connectedness and learning. The prompts will contextualize the items in terms of students' experience of discussion forums during the program. Short-answer responses on the challenges and successes of using discussion forums will be analyzed to gauge student perceptions and experiences of using this type of communication technology in education. This research study is in progress.
The authors anticipate that the findings will provide a comprehensive understanding of the varied approaches instructors use in discussion forums for community-building purposes in an accelerated MLIS program. They predict that the more varied, flexible, and consistent the uses of discussion forums are, the greater the sense of community students will report. Additionally, students' and instructors' perceptions and experiences within these forums will shed light on the successes and challenges faced, thereby offering valuable recommendations for enhancing online learning environments. The findings are significant because they can contribute actionable insights for instructors, educational institutions, and curriculum designers aiming to optimize the use of discussion forums in online accelerated graduate programs, ultimately fostering a richer and more engaging learning experience for students.

Keywords: accelerated online learning, discussion forums, LIS programs, sense of community, g

Procedia PDF Downloads 33
34 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes

Authors: Nadarajah I. Ramesh

Abstract:

Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained, by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip-times to rainfall depths prior to fitting the models. One advantage of this approach was that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when they were fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
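As a toy illustration of the DSPP idea above, bucket-tip times can be simulated from a two-state Markov-modulated Poisson process, where the unobserved state X(t) switches the tip rate between light and heavy rainfall. The switching and tip rates below are arbitrary assumptions for the sketch, not fitted parameters from the models described.

```python
# Minimal sketch of a doubly stochastic (Markov-modulated) Poisson process:
# a hidden two-state Markov chain X(t) switches the bucket-tip arrival rate
# between a "light" and a "heavy" rainfall state. All rates are illustrative.
import random

def simulate_mmpp(t_end, switch_rates=(0.2, 0.5), tip_rates=(0.1, 4.0), seed=1):
    """Return simulated bucket-tip times on [0, t_end] (hours)."""
    rng = random.Random(seed)
    t, state, tips = 0.0, 0, []
    while t < t_end:
        # exponential holding time until the hidden state next switches
        t_switch = t + rng.expovariate(switch_rates[state])
        # Poisson arrivals at the current state's rate until the switch;
        # discarding the overshoot arrival is valid by memorylessness
        while True:
            t += rng.expovariate(tip_rates[state])
            if t >= min(t_switch, t_end):
                break
            tips.append(t)
        t = min(t_switch, t_end)
        state = 1 - state
    return tips

tips = simulate_mmpp(1000.0)
print(len(tips), "tips in 1000 h")
```

A fitted model would instead maximise the likelihood of observed tip times over the switching and tip rates, as described in the abstract.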

Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model

Procedia PDF Downloads 251
33 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues

Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos

Abstract:

Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be the key to reducing cancer-related mortality. In cases where surgical interventions are required for cancer treatment, the accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, on a microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancerous state. Numerous published studies have demonstrated the potential of the technique as a tool for the discrimination between healthy and malignant tissues/cells either ex vivo or in vivo. However, the handling of the excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique's overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 x 5 mm were collected from the 5 patients enrolled so far, who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation.
Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw inVia spectrometer from successive random 2-micrometer spots upon excitation at 785 nm to decrease the fluorescent background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and consequent focus drift during measurement. A broad spectral range, 500-3200 cm-1, was covered with five consecutive full scans lasting 20 minutes in total. The averaged spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm-1 and 1628 cm-1 Raman modes, which correspond to v(C=C) vibrations in porphyrins, as well as in the range of 2800-3000 cm-1 due to CH2 stretching of lipids and CH3 stretching of proteins. Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.

Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints

Procedia PDF Downloads 68
32 Significant Growth in Expected Muslim Inbound Tourists in Japan Towards 2020 Tokyo Olympic and Still Incipient Stage of Current Halal Implementations in Hiroshima

Authors: Kyoko Monden

Abstract:

Tourism has moved to the forefront of national attention in Japan since September 2013, when Tokyo won its bid to host the 2020 Summer Olympics. The number of foreign tourists has continued to break records, reaching 13.4 million in 2014, and is now expected to hit 20 million sooner than the initially targeted 2020 due to government stimulus promotions, an increase in low-cost carriers, the weakening of the Japanese yen, and strong economic growth in Asia. The tourism industry can be an effective trigger for Japan's economic recovery, as foreign tourists spent two trillion yen ($16.6 billion) in Japan in 2014. In addition, 81% of them were from Asian countries, and it is essential to know that 68.9% of the world's Muslims, about a billion people, live in South and Southeast Asia. An important question is 'Do Muslim tourists feel comfortable traveling in Japan?' This research was initiated by an encounter with Muslim visitors in Hiroshima, a popular international tourist destination, who said they had found very few suitable restaurants in Hiroshima. The purpose of this research is to examine halal implementation in Hiroshima and suggest the next steps to be taken to improve current efforts. The goal will be to provide anyone, Muslims included, with first-class hospitality in the near future in preparation for the massive influx of foreign tourists in 2020. The methods of this research were questionnaires, face-to-face interviews, phone interviews, and internet research. First, this research addresses the significance of growing inbound tourism in Japan, especially the expected growth in Muslim tourists, as well as the strong popularity of Japanese food in Asian Muslim countries; eating Japanese food is ranked the no. 1 thing foreign tourists want to do in Japan.
Secondly, the current incipient stage of Hiroshima's halal implementation at hotels, restaurants, and major public places is exposed, and the existing action plans of the Hiroshima Prefectural Government are presented. Furthermore, two surveys were conducted: one to clarify the basic halal awareness of local residents in Hiroshima, and one to gauge the inconveniences faced by Muslims living in Hiroshima. Thirdly, the reasons for this lapse are examined and, by comparison with benchmarking data from other major tourist sites, halal implementation plans for Hiroshima are proposed. The conclusion is that, despite increasing demand for and interest in halal-friendly businesses, overall halal actions have barely been applied in Hiroshima; 76% of Hiroshima residents had no idea what halal or halaal meant. It is essential to increase halal awareness and its perceived importance to the economy, and to launch further actions to make Muslim tourists feel welcome in Hiroshima and the entire country.

Keywords: halaal, halal implementation, Hiroshima, inbound tourists in Japan

Procedia PDF Downloads 190
31 Progress Towards Optimizing and Standardizing Fiducial Placement Geometry in Prostate, Renal, and Pancreatic Cancer

Authors: Shiva Naidoo, Kristena Yossef, Grimm Jimm, Mirza Wasique, Eric Kemmerer, Joshua Obuch, Anand Mahadevan

Abstract:

Background: Fiducial markers effectively enhance tumor target visibility prior to stereotactic body radiation therapy or proton therapy. To streamline clinical practice, fiducial placement guidelines from a robotic radiosurgery vendor were examined with the goals of optimizing and standardizing feasible geometries for each treatment indication. Clinical examples of prostate, renal, and pancreatic cases are presented. Methods: Vendor guidelines (Accuray, Sunnyvale, CA) suggest implantation of 4-6 fiducials at least 20 mm apart, with at least a 15-degree angular difference between fiducials, within 50 mm or less of the target centroid, to ensure that any potential fiducial motion (e.g., from respiration or abdominal/pelvic pressure) will mimic target motion. It is also recommended that all fiducials be visible in 45-degree oblique views with no overlap, to coincide with the robotic radiosurgery imaging planes. For the prostate, a standardized geometry that meets all these objectives is a 2 cm-by-2 cm square in the coronal plane. Transperineal implantation of two pairs of preloaded tandem fiducials makes the 2 cm-by-2 cm square geometry clinically feasible. This technique may be applied for renal cancer, except repositioned in a sagittal plane, with retroperitoneal placement of the fiducials into the tumor. Pancreatic fiducial placement via endoscopic ultrasound (EUS) is technically more challenging, as fiducial placement is operator-dependent, and lesion access may be limited by adjacent vasculature, tumor location, or restricted mobility of the EUS probe in the duodenum. Fluoroscopically assisted fiducial placement during EUS can help ensure fiducial markers are deployed with optimal geometry and visualization. Results: Among the first 22 fiducial cases on a newly installed robotic radiosurgery system, live x-ray images for all nine prostate cases had excellent fiducial visualization at the treatment console.
Renal and pancreatic fiducials were not as clearly visible, due to difficult target access and the use of smaller-caliber insertion needles and fiducials. The geometry of the first prostate case was used to ensure accurate geometric marker placement for the remaining 8 cases. Initially, some of the renal and pancreatic fiducials were closer together than the recommended 20 mm, and interactive feedback with the proceduralists led to subsequent fiducials being placed too far toward the edge of the tumor. Further feedback and discussion of all cases are being used to help guide standardized geometries and achieve ideal fiducial placement. Conclusion: The ideal tradeoff between fiducial visibility and the thinnest possible gauge needle to avoid complications needs to be systematically optimized across all patients, particularly with regard to body habitus. Multidisciplinary collaboration among proceduralists and radiation oncologists can lead to improved outcomes.
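The spacing, angle, and radius rules quoted above lend themselves to a simple automated pre-check of a proposed fiducial set. Below is a hedged sketch (coordinates in mm relative to the target centroid; the function and threshold names are our own, not a vendor API), which assumes no fiducial coincides with the centroid itself.

```python
# Sketch: flag violations of the vendor-style geometry rules quoted above
# (>= 20 mm pairwise spacing, >= 15 deg angular separation as seen from the
# target centroid, <= 50 mm from the centroid). Illustrative only.
import math
from itertools import combinations

def check_fiducials(points, centroid=(0.0, 0.0, 0.0),
                    min_sep=20.0, min_angle_deg=15.0, max_radius=50.0):
    """Return a list of rule violations (an empty list means acceptable)."""
    violations = []
    for p in points:
        if math.dist(p, centroid) > max_radius:
            violations.append(f"{p} is >{max_radius} mm from centroid")
    for a, b in combinations(points, 2):
        if math.dist(a, b) < min_sep:
            violations.append(f"{a} and {b} are <{min_sep} mm apart")
        va = [x - c for x, c in zip(a, centroid)]
        vb = [x - c for x, c in zip(b, centroid)]
        cosang = sum(x * y for x, y in zip(va, vb)) / (
            math.dist(a, centroid) * math.dist(b, centroid))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
        if angle < min_angle_deg:
            violations.append(f"{a} and {b} subtend only {angle:.1f} deg")
    return violations

# the 2 cm-by-2 cm coronal square for prostate, centred on the target
square = [(-10, 0, -10), (-10, 0, 10), (10, 0, -10), (10, 0, 10)]
print(check_fiducials(square))  # → [] (the square satisfies all three rules)
```

The empty result confirms why the 2 cm-by-2 cm square is a convenient standard: adjacent fiducials sit exactly at the 20 mm spacing limit while remaining well within 50 mm of the centroid.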

Keywords: fiducial, prostate cancer, renal cancer, pancreatic cancer, radiotherapy

Procedia PDF Downloads 66
30 Stability Assessment of Underground Power House Encountering Shear Zone: Sunni Dam Hydroelectric Project (382 MW), India

Authors: Sanjeev Gupta, Ankit Prabhakar, K. Rajkumar Singh

Abstract:

The Sunni Dam Hydroelectric Project (382 MW) is a run-of-river development with an underground powerhouse, proposed to harness the hydel potential of the river Satluj in Himachal Pradesh, India. The project is located in the inner Lesser Himalaya, between the Dhauladhar Range in the south and the Higher Himalaya in the north. The project comprises two large underground caverns, a powerhouse cavern (171 m long, 22.5 m wide and 51.2 m high) and a transformer hall cavern (175 m long, 18.7 m wide and 27 m high), with a 50 m rock pillar between the two caverns. The highly jointed, fractured, anisotropic rock mass of Himalayan geology is a key challenge for an underground structure. The concern for the stability of the rock mass increases when weak/shear zones are encountered in the underground structure. In the Sunni Dam project, a 1.7 m to 2 m thick weak/shear zone comprising deformed, weak material with gouge has been encountered in the powerhouse cavern at 70 m, with a dip direction of 325 degrees and a dip amount of 38 degrees, which also intersects the transformer hall in its initial reach. The rock encountered in the powerhouse area is moderately to highly jointed pink quartz arenite belonging to the Khaira Formation, a transition zone comprising an alternating grey, pink and white quartz arenite and shale sequence, and dolomite at higher reaches. The rock mass is intersected by mainly 3 joint sets, excluding bedding joints and a few random joints. The rock class in the powerhouse mainly varies from poor (class IV) to lower-order fair (class III), and in some reaches very poor rock mass has also been encountered. To study the stability of the underground structure in the weak/shear rock mass, a 3D numerical model analysis has been carried out using RS3 software. Field studies have been interpreted and analysed to derive Bieniawski's RMR, Barton's 'Q' class and the Geological Strength Index (GSI).
The various material parameters and in-situ characteristics have been determined based on tests conducted by the Central Soil and Materials Research Station, New Delhi. The behaviour of the cavern has been studied by assessing the displacement contours, major and minor principal stresses, and plastic zones for different stage-excavation sequences. For optimisation of the support system, the stability of the powerhouse cavern with different powerhouse orientations has also been studied. The numerical modelling results indicate that, with the support system applied to the crown and side walls, the cavern is unlikely to face stress-governed or structural instability.
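As a back-of-envelope illustration of the rock-mass classification mentioned above, Bieniawski's RMR (1989) sums five component ratings and applies a joint-orientation adjustment; the class boundaries below follow the standard scheme, but the sample ratings are invented for illustration and are not values from this project.

```python
# Sketch of Bieniawski RMR (1989) bookkeeping: five basic ratings summed,
# then adjusted for joint orientation. Sample inputs are illustrative only.
def rmr89(strength, rqd, spacing, condition, groundwater, orientation_adj=0):
    """Rock Mass Rating: sum of the five component ratings plus adjustment."""
    total = strength + rqd + spacing + condition + groundwater + orientation_adj
    if total >= 81:
        cls = "I (very good)"
    elif total >= 61:
        cls = "II (good)"
    elif total >= 41:
        cls = "III (fair)"
    elif total >= 21:
        cls = "IV (poor)"
    else:
        cls = "V (very poor)"
    return total, cls

# invented ratings for a jointed quartz arenite with an unfavourable joint set
print(rmr89(7, 8, 10, 12, 7, orientation_adj=-5))  # → (39, 'IV (poor)')
```

A total of 39 falls in class IV (poor), consistent with the poor-to-fair range reported for the powerhouse.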

Keywords: 3D analysis, Himalayan geology, shear zone, underground power house

Procedia PDF Downloads 58
29 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer's, Parkinson's, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends highly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640 voxels at 0.4 mm resolution), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review.
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human head is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations in areas of interest directly and more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
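To make the synthetic-training idea above concrete, the random assignment of literature-derived tissue properties to segmented regions could be sketched as below. The region names, property names, and distribution parameters are placeholders for illustration only, not the values used in the study.

```python
# Illustrative sketch: each segmented region of a digital head phantom is
# assigned property values drawn from literature-derived distributions, and
# many random realizations form a training set. All numbers are placeholders.
import random

TISSUE_PRIORS = {  # region -> {property: (mean, std)}
    "grey_matter":  {"R2*": (20.0, 3.0), "iron_mg_per_g": (0.04, 0.01)},
    "white_matter": {"R2*": (16.0, 2.5), "iron_mg_per_g": (0.03, 0.008)},
    "csf":          {"R2*": (1.5, 0.3),  "iron_mg_per_g": (0.0, 0.0)},
}

def sample_head_realization(rng):
    """Draw one random head model: a property value per segmented region."""
    return {
        region: {prop: max(0.0, rng.gauss(mu, sd)) for prop, (mu, sd) in props.items()}
        for region, props in TISSUE_PRIORS.items()
    }

rng = random.Random(0)
training_set = [sample_head_realization(rng) for _ in range(1000)]
print(len(training_set), "synthetic heads")
```

In the study itself, each realization would additionally be passed through an MR signal model (with morphed geometry and noise) before being paired with its ground-truth iron map for U-Net training.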

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 94
28 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems concerned with determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it’s worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have largely become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately started to shape NLP approaches and language models (LMs). This gave a sudden rise to the usage of pretrained language models (PTMs), which contain language representations obtained by training on large datasets with self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, Named-Entity Recognition (NER), Question Answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). 
The MUSE model is a multilingual model supporting 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed their contribution to model performance to be insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM: the BERT model achieved around 94% prediction accuracy, the MUSE model around 88%, and the SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model.
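As a contrast to the fine-tuned PTMs, the classical bag-of-n-grams baseline can be sketched compactly. The minimal Naïve Bayes classifier below (one of the traditional algorithms the abstract names, shown here instead of SVM to stay dependency-free) is trained on a tiny English toy corpus that is purely illustrative, not the authors' 76,000-comment Turkish dataset.

```python
import math
from collections import Counter, defaultdict

def ngrams(text, n_max=2):
    """Bag of word uni- and bi-grams."""
    words = text.lower().split()
    return [" ".join(words[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(words) - n + 1)]

class NaiveBayes:
    """Multinomial Naive Bayes over bag-of-n-grams, add-one smoothing."""
    def fit(self, texts, labels):
        self.counts = defaultdict(Counter)   # label -> n-gram counts
        self.priors = Counter(labels)        # label -> document count
        for text, y in zip(texts, labels):
            self.counts[y].update(ngrams(text))
        self.vocab = {g for c in self.counts.values() for g in c}
        return self

    def predict(self, text):
        def log_score(y):
            total = sum(self.counts[y].values())
            s = math.log(self.priors[y] / sum(self.priors.values()))
            for g in ngrams(text):
                s += math.log((self.counts[y][g] + 1)
                              / (total + len(self.vocab)))
            return s
        return max(self.priors, key=log_score)

# Illustrative toy corpus (1 = positive, 0 = negative)
texts = ["great service very satisfied", "terrible support never again",
         "fast delivery and helpful staff", "awful experience broken product",
         "excellent quality will buy again", "worst purchase of my life"]
labels = [1, 0, 1, 0, 1, 0]
nb = NaiveBayes().fit(texts, labels)
print(nb.predict("helpful staff and great quality"))  # -> 1 (positive)
```

A real baseline would, as in the paper, use an SVM over TF-IDF-weighted n-grams, but the feature representation is the same idea.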

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 112
27 Basic Characteristics of Synchronized Stir Welding and Its Prospects

Authors: Ipei Sato, Naonori Shibata, Shoji Matsumoto, Naruhito Matsumoto

Abstract:

Friction stir welding (FSW) has been widely used in the automotive, aerospace, and high-tech industries due to the superior mechanical properties of its joints. To achieve a good-quality joint by FSW, it is necessary to secure a tilt angle (usually 3 to 5 degrees) and to weld on a highly rigid, dedicated FSW machine. Recently, combined machines that add an FSW function to the cutting function of a conventional machining center have appeared on the market, but their joining process window is small, so joining defects occur easily and reproducibility is poor, which limits their application to the automotive industry, where control accuracy is required. Moreover, FSW-only machines and hybrid equipment that combines FSW and cutting functions require high capital investment, which is one of the reasons FSW itself has not penetrated the market. Synchronized stir welding (SSW), a next-generation joining technology developed by our company, is a very cost-effective welding method: it requires no tilt angle and no complicated spindle mechanism, and it minimizes the load and vibration on the spindle, the temperature during joining, and the shoulder diameter, thereby enabling a wide range of joining conditions and high-strength, high-speed joining with no joining defects. In synchronized stir welding, the tip of the joining tool is "driven by microwaves" in both the rotational and vertical directions of the tool. The tool is synchronized and stirred in the direction and at the speed required by the material to be welded, enabling welding that exceeds conventional concepts. 
Conventional FSW is passively stirred by an external driving force, resulting in low joining speeds and high heat input due to the need for a large shoulder diameter. In contrast, in SSW the material is actively stirred in synchronization with the direction and speed in which it needs to be stirred, giving a high joining speed and a small shoulder diameter, so that joining is completed with low heat input. The advantages of synchronized stir welding in terms of basic mechanical properties are described here; the superiority of SSW over FSW was evaluated by comparing the strength of the joint cross sections. Tensile strength: 217 MPa after FSW versus a base metal of 242 MPa (89%), and 225 MPa after SSW (93%). Vickers hardness at the weld center: 57.5 HV after FSW versus a base metal of 75.0 HV (76%), and 66.0 HV after SSW (88%), showing excellent results for SSW. In the tensile tests, the material was 5 mm thick aluminum (A5052-H112) plate, and the specimens were dumbbell-shaped, 2 mm thick, 4 mm wide, and 60 mm long, tested at a loading speed of 20%/min (in accordance with JIS Z 2241:2022) on an INSTRON 5982 tensile testing machine (INSTRON Japan). Vickers hardness was measured on a 5 mm thick specimen of A5052-H112, 15 mm wide, at 0.3 mm pitch (in accordance with JIS Z 2244:2020) on a FUTURE-TECH FM-300 Vickers tester.
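The joint-efficiency figures quoted above are simple ratios of welded-joint to base-metal values; a quick sketch reproduces them (the abstract rounds the percentages to whole numbers):

```python
def joint_efficiency(weld_value, base_value):
    """Welded-joint property as a percentage of the base-metal value."""
    return 100.0 * weld_value / base_value

for label, weld, base in [
    ("FSW tensile strength (MPa)", 217, 242),    # ~89.7 %
    ("SSW tensile strength (MPa)", 225, 242),    # ~93.0 %
    ("FSW Vickers hardness (HV)", 57.5, 75.0),   # ~76.7 %
    ("SSW Vickers hardness (HV)", 66.0, 75.0),   # 88.0 %
]:
    print(f"{label}: {joint_efficiency(weld, base):.1f} % of base metal")
```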

Keywords: FSW, SSW, synchronized stir welding, no tilt angle required, running peak temperature less than 100 °C

Procedia PDF Downloads 22
26 Transforming Gender Norms through Play: Qualitative Findings from Primary Schools in Rwanda, Ghana, and Mozambique

Authors: Geetanjali Gill

Abstract:

International non-governmental organizations (INGOs) and development assistance donors have been implementing education projects in Sub-Saharan Africa and the global South that respond to gender-based inequities and that attempt to transform socio-cultural norms for greater gender equality in schools and communities. These efforts are in line with the United Nations Sustainable Development Goal number four, quality education, and goal number five, gender equality. Some INGOs and donors have also championed the use of play-based pedagogies for improved and more gender-equal education outcomes. The study used the qualitative methods of life history interviews and focus groups to gauge social norm change amongst male and female adolescents, families, and teachers in primary schools that have been using gender-responsive play-based pedagogies in Rwanda, Ghana, and Mozambique. School administrators and project managers from the INGO Right to Play International were consulted in the selection of two primary schools per country (in both rural and urban contexts), as well as the selection of ten male and ten female students in grades four to six in each school, using specific parameters of social norm adherence. The parents (or guardians) and grandparents of four male and four female students in each school who were determined to be ‘outliers’ in their adherence to social norms were also interviewed. Additionally, sex-specific focus groups were held with thirty-six teachers. The study found that gender-responsive play-based pedagogies positively impacted socio-cultural norms that shape gender relations amongst adolescents, their families, and teachers. Female and male students who spoke about their beliefs about gender equality in the roles and educational and career aspirations of men/boys and women/girls made linkages to the play-based pedagogies and approaches used by their teachers. 
Also, the parents and grandparents of these students were aware of generational differences in gender norms, and many were accepting of changed gender norms. Teachers who were effectively implementing gender-responsive play-based pedagogies in their classrooms spoke about changes to their own gender norms and how they were able to influence the gender norms of parents and community members. Life history interviews were found to be well-suited for the examination of changes to socio-cultural norms and gender relations. However, an appropriate framing of questions and topics specific to each target group was instrumental for the collection of data on socio-cultural norms and gender. The findings of this study can spur further academic inquiry of linkages between gender norms and education outcomes. The findings are also relevant for the work of INGOs and donors in the global South and for the development of gender-responsive education policies and programs.

Keywords: education, gender equality, Ghana, international development, life histories, Mozambique, Rwanda, socio-cultural norms, Sub-Saharan Africa, qualitative research

Procedia PDF Downloads 155
25 Short and Long Crack Growth Behavior in Ferrite Bainite Dual Phase Steels

Authors: Ashok Kumar, Shiv Brat Singh, Kalyan Kumar Ray

Abstract:

There is growing awareness of the need to design steels against fatigue damage. Ferrite-martensite dual-phase steels are known to exhibit favourable mechanical properties like good strength, ductility, toughness, continuous yielding, and a high work hardening rate. However, dual-phase steels containing bainite as the second phase are potential alternatives to ferrite-martensite steels for certain applications where good fatigue properties are required. Fatigue properties of dual-phase steels are popularly assessed by the nature of the variation of crack growth rate (da/dN) with stress intensity factor range (∆K), and by the magnitude of the fatigue threshold (∆Kth) for long cracks. There is an increased emphasis on understanding not only the long crack fatigue behavior but also the short crack growth behavior of ferrite-bainite dual-phase steels. The major objective of this report is to examine the influence of microstructure on the short and long crack growth behavior of a series of developed dual-phase steels with varying amounts of bainite. Three low-carbon steels containing Nb, Cr, and Mo as microalloying elements were selected for making ferrite-bainite dual-phase microstructures by suitable heat treatments. The heat treatment consisted of austenitizing the steel at 1100°C for 20 min, cooling at different rates in air prior to soaking in a salt bath at 500°C for one hour, and finally quenching in water. Tensile tests were carried out on 25 mm gauge length specimens with 5 mm diameter at a nominal strain rate of 0.6×10⁻³ s⁻¹ at room temperature. Fatigue crack growth studies were made on a recently developed specimen configuration using a rotating bending machine. The crack growth was monitored by interrupting the test and observing the specimens under an optical microscope connected to an image analyzer. The estimated crack lengths (a) at varying numbers of cycles (N) in different fatigue experiments were analyzed to obtain log(da/dN) vs. log(∆K) curves for determining ∆Kthsc. 
The microstructural features of these steels have been characterized, and their influence on near-threshold crack growth has been examined. This investigation, in brief, involves (i) the estimation of ∆Kthsc and (ii) the examination of the influence of microstructure on the short and long crack fatigue thresholds. The maximum fatigue threshold values obtained from short crack growth experiments on various specimens of dual-phase steels containing different amounts of bainite are found to increase with increasing bainite content in all the investigated steels. The variations in fatigue behavior of the selected steel samples have been explained by considering the varying amounts of the constituent phases and their interactions with the generated microstructures during cyclic loading. Quantitative estimation of the different types of fatigue crack paths indicates that the propensity of a crack to pass through the interfaces depends on the relative amounts of the microstructural constituents. The fatigue crack path is found to be predominantly intra-granular, except for the specimens containing > 70% bainite, in which it is predominantly inter-granular.
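Above the threshold regime, log(da/dN) vs. log(∆K) data of the kind described here are commonly summarized by a Paris-law fit, da/dN = C·(∆K)^m, i.e., a straight line in log-log coordinates. A minimal sketch on synthetic data (the C and m values are assumptions for illustration, not fits to the paper's steels):

```python
import numpy as np

# Illustrative synthetic crack-growth data obeying the Paris law
# da/dN = C * (dK)^m with assumed C = 1e-11, m = 3 (not the paper's values).
rng = np.random.default_rng(1)
dK = np.linspace(8.0, 30.0, 20)                              # MPa*sqrt(m)
dadN = 1e-11 * dK**3 * np.exp(rng.normal(0, 0.05, dK.size))  # m/cycle

# Linear least-squares fit in log-log space:
#   log(da/dN) = log C + m * log(dK)
m_fit, logC_fit = np.polyfit(np.log(dK), np.log(dadN), 1)
print(f"fitted m = {m_fit:.2f}, C = {np.exp(logC_fit):.2e}")
```

The same log-log regression, restricted to the low-growth-rate tail, is one way to locate the threshold ∆Kth from interrupted-test a-N data.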

Keywords: bainite, dual phase steel, fatigue crack growth rate, long crack fatigue threshold, short crack fatigue threshold

Procedia PDF Downloads 184
24 Enhancement to Green Building Rating Systems for Industrial Facilities by Including the Assessment of Impact on the Landscape

Authors: Lia Marchi, Ernesto Antonini

Abstract:

The impact of industrial sites on people’s living environment involves both detrimental effects on the ecosystem and perceptual-aesthetic interference with the scenery. These, in turn, affect the economic and social value of the landscape, as well as the wellbeing of workers and local communities. Given the diffusion of the phenomenon and the relevance of its effects, the need emerges for a joint approach to assess, and thus mitigate, the impact of factories on the landscape, the latter being assumed as the result of the action and interaction of natural and human factors. However, the impact assessment tools suitable for this purpose are quite heterogeneous and mostly monodisciplinary. On the one hand, green building rating systems (GBRSs) are increasingly used to evaluate the performance of manufacturing sites, mainly through quantitative indicators focused on environmental issues. On the other hand, methods to detect the visual and social impact of factories on the landscape are gradually emerging in the literature, but they generally adopt only qualitative gauges. The research addresses the integration of environmental impact assessment with the perceptual-aesthetic interference of factories on the landscape. The GBRS model is assumed as a reference since it is adequate for simultaneously investigating the different topics which affect sustainability, returning a global score. A critical analysis of GBRSs relevant to industrial facilities led to the selection of the U.S. GBC LEED protocol as the most suitable for the scope. A revision of LEED v4 Building Design+Construction has then been provided by including specific indicators to measure the interference of manufacturing sites with the perceptual-aesthetic and social aspects of the territory. 
To this end, a new impact category was defined, namely ‘PA - Perceptual-aesthetic aspects’, comprising eight new credits specifically designed to assess how much the buildings are in harmony with their surroundings: these investigate, for example, the morphological and chromatic harmonization of the facility with the scenery, or the site's receptiveness and attractiveness. The credits weighting table was consequently revised, according to the LEED points allocation system. Like all LEED credits, each new PA credit is thoroughly described in a sheet setting out its aim, its requirements, and the available options to gauge the interference and earn a score. Lastly, each credit is related to mitigation tactics, which are drawn from a catalogue of exemplary case studies, also developed within the research. The result is a modified LEED scheme which includes compatibility with the landscape within the sustainability assessment of industrial sites. The whole system consists of 10 evaluation categories, which contain 62 credits in total. Finally, a test of the tool on an Italian factory was performed, allowing the comparison of three mitigation scenarios with increasing compatibility levels. The study proposes a holistic and viable approach to the environmental impact assessment of factories through a tool which integrates the multiple aspects involved within a worldwide recognized rating protocol.
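To illustrate how a revised scheme of this kind could be tallied, the sketch below sums earned credits per category, caps each category at its maximum, and maps the total onto LEED-style certification thresholds. The category names, point caps, and earned values are hypothetical placeholders, not the paper's actual reweighted table.

```python
# Hypothetical tally for a LEED-style checklist extended with a
# 'PA - Perceptual-aesthetic aspects' category. Point caps and earned
# credits below are illustrative assumptions.
MAX_POINTS = {
    "Location and Transportation": 16, "Sustainable Sites": 10,
    "Water Efficiency": 11, "Energy and Atmosphere": 30,
    "Materials and Resources": 13, "Indoor Environmental Quality": 14,
    "PA - Perceptual-aesthetic aspects": 8, "Innovation": 6,
    "Regional Priority": 2,
}

def total_score(earned):
    """Sum earned credits, clipping each category at its cap."""
    return sum(min(pts, MAX_POINTS[cat]) for cat, pts in earned.items())

def certification(score):
    """Map a total score onto LEED-style certification levels."""
    for level, threshold in [("Platinum", 80), ("Gold", 60),
                             ("Silver", 50), ("Certified", 40)]:
        if score >= threshold:
            return level
    return "Not certified"

earned = {"Energy and Atmosphere": 20, "Water Efficiency": 8,
          "PA - Perceptual-aesthetic aspects": 6,
          "Indoor Environmental Quality": 10, "Sustainable Sites": 7}
score = total_score(earned)
print(score, certification(score))  # -> 51 Silver
```

In such a tally, the eight PA credits contribute to the global score on the same footing as the environmental categories, which is the integration the paper proposes.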

Keywords: environmental impact, GBRS, landscape, LEED, sustainable factory

Procedia PDF Downloads 92
23 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence

Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai

Abstract:

Traditional finance theory neglects the role of the sentiment factor in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment more strongly affects stocks that are vulnerable to speculation, hard to value, and risky to arbitrage. These include small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks, and non-dividend-paying stocks. Since the introduction of the Chicago Board Options Exchange (CBOE) volatility index (VIX) in 1993, it has been used as a measure of future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the ‘investors’ fear gauge’ by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety, and pessimistic expectations of investors about the stock market. On the contrary, low levels of the volatility index reflect a confident and optimistic attitude of investors. Based on the above discussion, we investigate whether the market-wide fear level, measured by the volatility index, is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama and French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for fear-based market sentiment, affects the cross section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period that extends from January 2008 to March 2017. 
To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, changes in the India VIX are included as an explanatory variable in the Fama-French three-factor model as well as in the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variables in the asset pricing regressions. The first portfolio set is the 4x4 sort on size and B/M ratio. The second portfolio set is the 4x4 sort on size and the sensitivity beta to changes in IVIX. The third portfolio set is the 2x3x2 independent triple sort on size, B/M, and the sensitivity beta to changes in IVIX. We find evidence that size, value, and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation of the findings of the study.
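The augmented asset pricing regression amounts to an OLS of portfolio excess returns on the four factors plus the change in the India VIX. A minimal sketch on simulated monthly data (the factor loadings, sample length, and noise level are assumptions for illustration, not the paper's estimates):

```python
import numpy as np

# Four-factor regression augmented with the change in the volatility index:
#   R_p - R_f = a + b*MKT + s*SMB + h*HML + w*WML + v*dIVIX + e,
# estimated by OLS on simulated monthly data.
rng = np.random.default_rng(2)
T = 120                                    # months
X = rng.normal(0, 1, (T, 5))               # MKT, SMB, HML, WML, dIVIX
true_betas = np.array([1.0, 0.4, 0.3, -0.2, 0.0])  # dIVIX not priced here
y = 0.002 + X @ true_betas + rng.normal(0, 0.02, T)

A = np.column_stack([np.ones(T), X])       # add intercept column
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("alpha:", round(coefs[0], 4), "loadings:", np.round(coefs[1:], 2))
```

A near-zero and insignificant estimate on dIVIX, as simulated here, is the pattern consistent with the paper's conclusion that the VIX is not a priced factor.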

Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing

Procedia PDF Downloads 224
22 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming

Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero

Abstract:

Tumor growth, from a single transformed cancer cell up to a clinically apparent mass, spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made, without making patients undergo annoying medical examinations or painful invasive procedures, if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies applicable to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These models aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the specific literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic or stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, is of much interest to explore up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improving the performance of these models by guaranteeing efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees, …) that holds crucial data in simulations. 
In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques that speed up simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary to speed up tumor growth simulations implemented in Java and C++ programming environments. The speedup provided by specific parallel constructs, such as executors (thread pools) in Java, is studied. The new parallel tumor growth model is tested using implementations in Java and C++ on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Polesczuk and Enderling model (commonly used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and the overall parallelization technique presented here to solid tumors of specific affiliation, such as prostate, breast, or colon tumors. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, the growth inhibition induced by chemotaxis, and the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
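The key synchronization issue identified above, that every cell of a new generation must be computed from the previous generation only, can be illustrated with a double-buffered update: each step reads the old grid and writes a fresh one, so concurrent (or vectorized) cell updates cannot race. The toy growth rule below is an illustrative stand-in, not the Polesczuk and Enderling model; in Java, the same per-cell loop would be partitioned across an executor's thread pool.

```python
import numpy as np

# Toy tumor-growth rule on a 2D periodic grid: an empty site becomes
# occupied with probability proportional to its occupied neighbors.
def step(grid, p_div, rng):
    """One synchronous update: all cells read the old grid, write a new one."""
    # count occupied Moore neighbors via shifted copies of the old grid
    nbrs = sum(np.roll(np.roll(grid, di, 0), dj, 1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0))
    birth = (grid == 0) & (rng.random(grid.shape) < p_div * nbrs / 8.0)
    return np.where(birth, 1, grid)   # old grid untouched: no data races

rng = np.random.default_rng(0)
grid = np.zeros((64, 64), dtype=np.int8)
grid[32, 32] = 1                      # seed tumor cell
for _ in range(50):
    grid = step(grid, p_div=0.5, rng=rng)
print("occupied cells:", int(grid.sum()))
```

The double buffer (returning a new array instead of mutating in place) is exactly what makes the update order irrelevant, which is the property a thread pool or GPU kernel needs to partition the grid safely.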

Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up

Procedia PDF Downloads 212
21 MusicTherapy for Actors: An Exploratory Study Applied to Students from University Theatre Faculty

Authors: Adriana De Serio, Adrian Korek

Abstract:

Aims: This experiential research work presents a Group-MusicTherapy-Theatre-Plan (MusThePlan) that the authors have carried out to support actors. MusicTherapy gives rise to individual psychophysical feedback and influences the emotional centres of the brain and the subconscious. Therefore, the authors underline the effectiveness of the preventive, educational, and training goals of the MusThePlan in leading theatre students and actors to deal with anxiety and to overcome psychophysical weaknesses, shyness, and emotional stress in stage performances; to increase flexibility, awareness of one's identity, and resources for positive self-development and psychophysical health; and to develop and strengthen social bonds, increasing a network of subjects working for social inclusion and the reduction of stigma. Materials-Methods: Thirty students from the University Theatre Faculty participated in weekly music therapy sessions for two months; each session lasted 120 minutes. MusThePlan: Each session began with a free group rhythmic-sonorous-musical production by body percussion, voice-canto, and instruments, to stimulate communication. Then, a synchronized, structured bodily rhythmic-sonorous-musical production also involved acting, dances, movements of the hands and arms, hearing and further sensorial perceptions, and speech, to balance motor skills and muscular tone. Each student could be the director-leader of the group, indicating a story to inspire the group's musical production. The third step involved the students in rhythmic speech and singing drills and in vocal exercises focusing on musical pitch, to improve intonation, and on diction, to improve articulation and lead to increased intelligibility. 
At the end of each musictherapy session and of the two months, the Musictherapy Assessment Document was drawn up through the analysis of observation protocols and two indices devised by the authors: the Patient-Environment-Music-Index (time t0 to tn), to estimate the evolution of behavior, and the Somatic Pattern Index, to monitor the subject’s eye, mouth, and limb motility and perspiration before, during, and after the musictherapy sessions. Results: After the first month, the students (non-musicians) had learned to play percussion instruments and formed a musical band that played classical and modern music on percussion instruments with the musictherapist/pianist/conductor in a public concert. At the end of the second month, the students performed a public musical theatre show, acting, dancing, singing, and playing percussion instruments. The students highlighted the importance of the playful aspects of the group musical production in achieving emotional contact and harmony within the group. The students said they had improved kinetic and vocal skills and all the skills useful for acting activity and for nourishing bodily and emotional balance. Conclusions: The MusThePlan makes use of specific MusicTherapy methodological models, techniques, and strategies useful for actors. The MusThePlan can destroy the individual "mask" and can be useful when verbal language is unable to undermine the defense mechanisms of the subject. The MusThePlan improves the actor’s psychophysical activation, motivation, gratification, knowledge of one's own possibilities, and quality of life. Therefore, the MusThePlan could be useful for carrying out targeted interventions for actors with characteristics of repeatability, objectivity, and predictability of results. Furthermore, it would be useful to plan a University course/master in “MusicTherapy for the Theatre”.

Keywords: musictherapy, sonorous-musical energy, quality of life, theatre

Procedia PDF Downloads 39
20 Moral Decision-Making in the Criminal Justice System: The Influence of Gruesome Descriptions

Authors: Michel Patiño-Sáenz, Martín Haissiner, Jorge Martínez-Cotrina, Daniel Pastor, Hernando Santamaría-García, Maria-Alejandra Tangarife, Agustin Ibáñez, Sandra Baez

Abstract:

It has been shown that gruesome descriptions of harm can increase the punishment given to a transgressor. This biasing effect is mediated by negative emotions, which are elicited upon the presentation of gruesome descriptions. However, there is a lack of studies examining the influence of such descriptions on moral decision-making in people involved in the criminal justice system. Such populations are of special interest since they have experience dealing with gruesome evidence, but also formal education on how to assess evidence and gauge the appropriate punishment according to the law. Likewise, they are expected to be objective and rational when performing their duty, because their decisions can profoundly impact people's lives. Considering these antecedents, the objective of this study was to explore the influence of gruesome written descriptions on moral decision-making in this group of people. To that end, we recruited attorneys, judges, and public prosecutors (criminal justice group, CJ, n=30) whose field of specialty is criminal law. In addition, we included a control group of people who did not have a formal education in law (n=30) but who were matched in age and years of education with the CJ group. All participants completed an online, Spanish-adapted version of a moral decision-making task, previously reported in the literature and also standardized and validated in the Latin-American context. A series of text-based stories describing two characters, one inflicting harm on the other, were presented to participants. The transgressor's intentionality (accidental vs. intentional harm) and the language (gruesome vs. plain) used to describe the harm were manipulated employing a within-subjects and a between-subjects design, respectively. After reading each story, participants were asked to rate (a) the moral adequacy of the harmful action, (b) the amount of punishment the transgressor deserved, and (c) how damaging the behavior was. 
Results showed main effects of group, intentionality and type of language on all dependent measures. In both groups, intentional harmful actions were rated as significantly less morally adequate, were punished more severely and were deemed as more damaging. Moreover, control subjects deemed more damaging and punished more severely any type of action than the CJ group. In addition, there was an interaction between intentionality and group. People in the control group rated harmful actions as less morally adequate than the CJ group, but only when the action was accidental. Also, there was an interaction between intentionality and language on punishment ratings. Controls punished more when harm was described using gruesome language. However, that was not the case of people in the CJ group, who assigned the same amount of punishment in both conditions. In conclusion, participants with job experience in the criminal justice system or criminal law differ in the way they make moral decisions. Particularly, it seems that they are less sensitive to the biasing effect of gruesome evidence, which is probably explained by their formal education or their experience in dealing with such evidence. Nonetheless, more studies are needed to determine the impact this phenomenon has on the fulfillment of their duty.

Keywords: criminal justice system, emotions, gruesome descriptions, intentionality, moral decision-making

Procedia PDF Downloads 157
19 Red Dawn in the Desert: A World-Systems Analysis of the Maritime Silk Road Initiative

Authors: Toufic Sarieddine

Abstract:

The current debate on the hegemonic impact of China’s Belt and Road Initiative (BRI) is split between two opposing strands: resilient and absolute US hegemony on the one hand, and various models of multipolar hegemony, such as bifurcation, on the other. Bifurcation theories describe an unprecedented division of hegemonic functions between China and the US, whereby Beijing becomes the world’s economic hegemon, leaving Washington the world’s military hegemon and security guarantor. While consensus points to China being the main driver of unipolarity’s rupturing, the debate among bifurcationists concerns the location of the first rupture. In this regard, the Middle East and North Africa (MENA) region has seen increasing Chinese foreign direct investment in recent years while that to other regions has declined, ranking it second in 2018 as part of the financing for the Maritime Silk Road Initiative (MSRI). China has also become the top trade partner of 11 states in the MENA region, as well as its top source of machine imports, surpassing the US and achieving an overall trade surplus almost double Washington’s. These are among other features outlined in the world-systems analysis (WSA) literature that correspond with the emergence of a new hegemon. WSA is further utilized to gauge other facets of China’s increasing involvement in MENA and to assess whether bifurcation is unfolding therein. These features of hegemony include the adoption of China’s modus operandi, economic dominance in production, trade, and finance, military capacity, cultural hegemony in ideology, education, and language, and the promotion of a general interest around which to rally potential peripheries (MENA states in this case). China’s modus operandi has seen some adoption with regard to support against the United Nations Convention on the Law of the Sea, oil bonds denominated in the yuan, and financial institutions such as the Shanghai Gold Exchange enjoying increasing Arab patronage. 
However, recent elections in Qatar, as well as liberal reforms in Saudi Arabia, demonstrate Washington’s stronger normative influence. Meanwhile, Washington’s economic dominance is challenged by China’s sizable machine exports, increasing overall imports, and widening trade surplus, but retains some clout via dominant arms and transport exports, as well as free-trade deals across the region. Militarily, Washington bests Beijing in arms exports, has a dominant and well-established presence in the region, and successfully blocked Beijing’s attempt to penetrate the region through the UAE. Culturally, Beijing enjoys higher favorability in Arab public opinion, and its broadcast networks have found some resonance with Arab audiences. In education, the West remains MENA students’ preferred destination. Further, while Mandarin has become increasingly available in schools across MENA, its usage and availability still lag far behind those of English. Finally, Beijing’s general interest in infrastructure provision and in prioritizing economic development over social justice and democracy provides an avenue for increased incorporation between Beijing and the MENA region. The overall analysis shows solid progress towards bifurcation in MENA.

Keywords: belt and road initiative, hegemony, Middle East and North Africa, world-systems analysis

Procedia PDF Downloads 79