Search results for: interference mitigation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1329


249 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to combine the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The conflation is defined as a single distribution resulting from the normalized product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation is equivalent to the weighted least squares method or best linear unbiased estimation. Combining severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
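
The core computation is compact enough to sketch numerically. A minimal example (hypothetical Gaussian recovery-time inputs, not the study's exponential data) shows the conflated density as the normalized product of the input PDFs, pulled toward the lower-variance input:

```python
import numpy as np

# Minimal numerical sketch of conflation (not the study's code): the conflated
# density is the normalized product of the input PDFs, so the result is pulled
# toward the input with the smaller variance.

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10, 30, 40001)

# Hypothetical recovery-time distributions (days): severe event vs. nuisance
p_severe = normal_pdf(x, 10.0, 4.0)
p_nuisance = normal_pdf(x, 4.0, 1.0)

product = p_severe * p_nuisance
conflated = product / np.trapz(product, x)      # normalize the product PDF

mean_conflated = np.trapz(x * conflated, x)

# For Gaussian inputs the conflation mean is the precision-weighted mean:
w1, w2 = 1 / 4.0**2, 1 / 1.0**2
print(mean_conflated, (w1 * 10.0 + w2 * 4.0) / (w1 + w2))   # both ~ 4.35
```

For exponential inputs, as the abstract notes, the same product construction reproduces the weighted least squares estimate.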

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 84
248 Hydrodynamic and Water Quality Modelling to Support Alternative Fuels Maritime Operations Incident Planning & Impact Assessments

Authors: Chow Jeng Hei, Pavel Tkalich, Low Kai Sheng Bryan

Abstract:

Due to the growing demand for sustainability in the maritime industry, there has been a significant increase in focus on alternative fuels such as biofuels, liquefied natural gas (LNG), hydrogen, methanol and ammonia to reduce the carbon footprint of vessels. Alternative fuels offer efficient transportability and significantly reduce carbon dioxide emissions, a critical factor in combating global warming. In an era where the world is determined to tackle climate change, the utilization of methanol is projected to witness a consistent rise in demand, even during downturns in the oil and gas industry. Since 2022, there has been an increase in methanol loading and discharging operations for industrial use in Singapore. These operations were conducted across storage tank terminals of varying capacities at Jurong Island, which are also used to store alternative fuels for bunkering requirements. The key objective of this research is to support the green shipping industry in the transition to new fuels such as methanol and ammonia, especially by evolving the capability to inform risk assessment and management of spills. In the unlikely event of accidental spills, a highly reliable forecasting system must be in place to provide mitigation measures and advance planning. The outcomes of this research would lead to an enhanced metocean prediction capability and, together with advanced sensing, would continuously build up a robust digital twin of the bunkering operating environment. Outputs from these developments will contribute to management strategies for alternative marine fuel spills, including best practices, safety challenges and crisis management. The outputs can also benefit key port operators and the bunkering, petrochemicals, shipping, protection and indemnity, and emergency response sectors. The forecasted datasets predict the expected atmospheric and hydrodynamic conditions prior to bunkering exercises, enabling a better understanding of the metocean conditions ahead and allowing for more refined spill incident management planning.

Keywords: clean fuels, hydrodynamics, coastal engineering, impact assessments

Procedia PDF Downloads 51
247 Mild Auditory Perception and Cognitive Impairment in Mid-Trimester Pregnancy

Authors: Tahamina Begum, Wan Nor Azlen Wan Mohamad, Faruque Reza, Wan Rosilawati Wan Rosli

Abstract:

Assessing auditory perception and cognitive function during pregnancy is necessary because pregnant women need extra attentional effort, mainly for executive function, to maintain their quality of life. This study aimed to investigate the neural correlates of cognitive and behavioral processing during mid-trimester pregnancy. Event-Related Potentials (ERPs) were studied using a 128-sensor net, and PAS or COWA (Controlled Oral Word Association), WCST (Wisconsin Card Sorting Test), and RAVLT (Rey Auditory Verbal Learning Test: immediate or interference recall (RAVLT IM), delayed recall (RAVLT DR) and total score (RAVLT TS)) were used for neuropsychological assessment. In total, 18 subjects were recruited (n = 9 in each group; control and pregnant groups). All participants in the pregnant group were within 16-27 weeks of gestation (mid-trimester). Age- and education-matched healthy subjects were recruited into the control group. Participants were given a standardized test of auditory cognitive function in the form of an auditory oddball paradigm during the ERP study. In this paradigm, two different auditory stimuli (standard and target) were used; subjects silently counted only the target stimuli, attending to them while ignoring the standard stimuli. Mean differences between target and standard stimuli were compared across groups. The N100 (auditory sensory ERP component) and P300 (auditory cognitive ERP component) were recorded at the T3, T4, T5, T6, Cz and Pz electrode sites. For the N100 component, an equal number of electrode sites showed non-significantly shorter amplitudes (except significantly shorter at T3, P = 0.05) and non-significantly longer latencies (except significantly longer latency at T5, P = 0.008) in the pregnant group compared with controls. For the P300 component, most electrode sites showed non-significantly higher amplitudes, and an equal number of sites showed non-significantly shorter latencies, in the pregnant group compared with controls. Neuropsychological results revealed non-significantly higher PAS scores and lower WCST, RAVLT IM and RAVLT DR scores in the pregnant group compared with controls. The N100 results and RAVLT scores indicate that auditory perception is mildly impaired, and the P300 component points to very mild cognitive dysfunction with preserved executive function, in the second trimester of pregnancy.

Keywords: auditory perception, pregnancy, stimuli, trimester

Procedia PDF Downloads 359
246 One-Year Follow-up of Head and Neck Paragangliomas: A Single Center Experience

Authors: Cecilia Moreira, Rita Paiva, Daniela Macedo, Leonor Ribeiro, Isabel Fernandes, Luis Costa

Abstract:

Background: Head and neck paragangliomas are a rare group of tumors with a large spectrum of clinical manifestations. The approach to evaluating and treating these lesions has evolved over recent years. Surgery was the standard approach for these patients, but new imaging and radiation therapy techniques have changed that paradigm. Despite advances in treatment, the growth potential and clinical outcome of individual cases remain largely unpredictable. Objectives: Characterization of our institutional experience with the clinical management of these tumors. Methods: This was a cross-sectional study of patients with paragangliomas of the head, neck and cranial base followed at our institution between 01 January and 31 December 2017. Data on tumor location, catecholamine levels, specific imaging modalities employed in the diagnostic workup, treatment modality, tumor control and recurrence, complications of treatment and hereditary status were collected and summarized. Results: A total of four female patients were followed between 01 January and 31 December 2017 at our institution. The mean age of our cohort was 53 (± 16.1) years. The primary locations were the jugulotympanic region (n=2, 50%) and the carotid body (n=2, 50%); only one of the carotid body tumors presented with pulmonary metastasis at the time of diagnosis. None of the lesions were catecholamine-secreting. Two patients underwent genetic testing, with no mutations identified. The initial clinical presentation was variable, with decreased visual acuity and headache present in all patients. In one case, loss of all teeth of the lower jaw was the presenting symptom. Observation with serial imaging, surgical extirpation, radiation, and stereotactic radiosurgery were employed as treatment approaches according to the anatomical location and resectability of the lesions. Post-therapeutic sequelae included persistent tinnitus and disabling pain, with one patient presenting with glossopharyngeal neuralgia. Currently, all patients are under regular surveillance, with a median follow-up of 10 months. Conclusion: Ultimately, the clinical management of these tumors remains challenging owing to heterogeneity in clinical presentation, the existence of multiple treatment alternatives, and the potential to cause serious detriment to critical functions and, consequently, to patients' quality of life.

Keywords: clinical outcomes, head and neck, management, paragangliomas

Procedia PDF Downloads 127
245 Study on Runoff Allocation Responsibilities of Different Land Uses in a Single Catchment Area

Authors: Chuan-Ming Tung, Jin-Cheng Fu, Chia-En Feng

Abstract:

In recent years, the rapid development of urban land in Taiwan has led to a constant increase in impervious surface area, which has increased the risk of waterlogging during heavy rainfall. Promoting runoff allocation responsibilities has therefore often been used as a means of reducing regional flooding. In this study, a single catchment area covering both urban and rural land is taken as the study area. Based on the Storm Water Management Model (SWMM), runoff allocation responsibilities for urban and rural land in a single catchment were developed according to the respective control regulations on land use. The impacts of runoff increment and reduction in each sub-catchment were studied to understand the effect of highly developed urban land on the flood risk of the rural land at the back end. The analysis considered short-duration (1-hour) rainfall events with 2-year, 5-year, 10-year, and 25-year return periods. If the study area were fully developed, the peak discharge at the outlet would increase by 22.97%-24.46% without runoff allocation responsibilities, and the front-end urban land would increase the runoff reaching the back-end rural land by 46.51%-76.19%. However, if runoff allocation responsibilities were implemented in the study area, the peak discharge could be reduced by 58.38%-63.08%, allowing the front end to cut the peak flow delivered to the back end by 23.81%-54.05%. In addition, from the perspective of runoff allocation responsibility per unit area, residential areas on urban land benefit from the relevant laws and regulations of the urban system and achieve a better flood reduction effect than residential land on rural land. For rural land, the development scale of residential land is generally small, which makes its flood reduction effect better than that of industrial land. Agricultural land requires a large area, resulting in the lowest share of the flow per unit area. From a planning perspective, this study suggests that rural land around the city should also be assigned responsibility to share the runoff, by setting up rainwater storage facilities in the same way as urban land, and that agricultural land resources can be mobilized by raising field ridges for flood storage, in order to improve regional disaster reduction capacity and resilience.
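
To make the percentage comparisons concrete, a small sketch (hypothetical triangular hydrographs, not the study's SWMM output) shows how peak-discharge changes between scenarios are computed:

```python
import numpy as np

# Illustrative sketch (hypothetical triangular hydrographs, not SWMM output):
# how peak-discharge changes between development scenarios are computed.

t = np.arange(0.0, 125.0, 5.0)               # minutes

def hydrograph(t, t_peak, q_peak, t_base):
    """Triangular hydrograph rising to q_peak at t_peak, receding by t_base."""
    rising = np.clip(t / t_peak, 0.0, 1.0)
    falling = np.clip((t_base - t) / (t_base - t_peak), 0.0, 1.0)
    return q_peak * np.minimum(rising, falling)

q_existing = hydrograph(t, 40, 10.0, 110)    # m^3/s, assumed baseline
q_developed = hydrograph(t, 35, 12.4, 100)   # full development, no allocation
q_mitigated = hydrograph(t, 45, 5.2, 115)    # with runoff allocation duties

def peak_change(q_scenario, q_reference):
    return 100.0 * (q_scenario.max() - q_reference.max()) / q_reference.max()

print(f"development, no allocation: {peak_change(q_developed, q_existing):+.1f}%")
print(f"allocation vs. developed:   {peak_change(q_mitigated, q_developed):+.1f}%")
```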

Keywords: runoff allocation responsibilities, land use, flood mitigation, SWMM

Procedia PDF Downloads 86
244 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS

Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan

Abstract:

Stress analysis plays an important role in the optimization of structures, and prior stress estimation helps in better product design. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio. In the aircraft industry especially, composites are used extensively because of their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions, and they have the drawback of delamination and debonding because the bonding materials are weaker than the parent materials. Proper analysis should therefore be carried out on composite joints before they are used in practice. In the present work, a composite plate with an elastic pin is analyzed using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered element SOLID46 for the composite plate and the solid element SOLID45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter of the geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on joint strength: the deflection and the load share of the pin increase, while other parameters such as overall stress, pin stress and contact pressure decrease due to the lower load on the plate material. The material effect shows that a higher Young's modulus gives smaller deflection but increases the other parameters. Interference analysis shows increases in overall stress, pin stress and contact stress along with pin bearing load; this increase should be understood properly when raising the load-carrying capacity of the joint. Generally, a structure is preloaded to increase the compressive stress in the joint and so increase its load-carrying capacity, but for composites the stress increase should be analysed carefully because of delamination and debonding caused by failure of the bonding materials. When the results for an isotropic combination are compared with the composite joint, the isotropic joint shows more uniform results with lower values for all parameters, mainly due to the applied layer angle combinations. All results are presented with the necessary pictorial plots.
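
As a rough companion to the finite element results, a back-of-envelope sketch (assumed load and dimensions, not the paper's model values) illustrates how pin diameter enters the average bearing stress on the projected pin area:

```python
# Back-of-envelope sketch (assumed load and dimensions, not the paper's model):
# average bearing stress on the projected pin area for candidate pin diameters.

def bearing_stress(load_n, pin_diameter_mm, plate_thickness_mm):
    """Average bearing stress in MPa (N/mm^2) on the projected area d * t."""
    return load_n / (pin_diameter_mm * plate_thickness_mm)

load = 5000.0       # N, uniaxial load transferred through the joint (assumed)
thickness = 4.0     # mm, laminate thickness (assumed)

for d in (4.0, 6.0, 8.0):   # mm, candidate pin diameters
    print(f"d = {d:.0f} mm -> bearing stress = {bearing_stress(load, d, thickness):.0f} MPa")
```

A larger pin spreads the same load over more projected area, echoing the reported trend of reduced plate stresses with increasing pin diameter.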

Keywords: bearing force, frictional force, finite element analysis, ANSYS

Procedia PDF Downloads 320
243 Mitigation Effects of Cash Transfers in the Face of Socioeconomic External Shocks: Evidence from Egypt

Authors: Basma Yassa

Abstract:

Evidence on the effectiveness of cash transfers in mitigating the impacts of macro and idiosyncratic shocks has been mixed and is mostly concentrated in Latin America, Sub-Saharan Africa, and South Asia, with very limited evidence from the MENA region. Yet conditional cash transfer schemes have been used continually, especially in Egypt, as the main social protection tool in response to the recent socioeconomic crises and macro shocks. We use two panel datasets and one cross-sectional dataset to estimate the effectiveness of cash transfers as a shock-mitigation mechanism in the Egyptian context. The results from the different models (a panel fixed effects model and a regression discontinuity design (RDD) model) confirm that micro and macro shocks lead to a significant decline in several household-level welfare outcomes, and that Takaful cash transfers have a significant positive impact in mitigating the negative shock impacts, especially on households' debt incidence, debt levels, and asset ownership, but not necessarily on food and non-food expenditure levels. The results indicate large positive significant effects, decreasing household debt incidence by up to 12.4 percent and debt size by approximately 18 percent among Takaful beneficiaries compared to non-beneficiaries. Similar evidence is found for asset ownership: the RDD model shows significant positive effects on total and productive asset ownership, but fails to detect positive impacts on per capita food and non-food expenditures. Further extensions are in progress to compare these results with difference-in-differences (DID) estimates using the nationally representative ELMPS panel (2018/2024 rounds). Finally, our initial analysis suggests that conditional cash transfers are effective in buffering negative shock impacts on certain welfare indicators even after the successive macroeconomic shocks of 2022 and 2023 in the Egyptian context.
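
A schematic of the RDD estimator on simulated data (illustrative only; the Takaful microdata and actual cutoff are not reproduced here) shows how the treatment effect is read off as the jump at the eligibility cutoff:

```python
import numpy as np
import statsmodels.api as sm

# Schematic RDD sketch on simulated data (not the Takaful microdata): treatment
# is assigned below a proxy-means-test cutoff, and the effect is the jump in
# the outcome at the cutoff from a local linear regression with interaction.

rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(-1.0, 1.0, n)         # eligibility score, centered at cutoff
treated = (score < 0).astype(float)       # households below cutoff get transfers
debt = 0.5 - 0.124 * treated + 0.3 * score + rng.normal(0.0, 0.2, n)

bandwidth = 0.5
mask = np.abs(score) < bandwidth          # local sample around the cutoff
X = sm.add_constant(np.column_stack([treated[mask], score[mask],
                                     treated[mask] * score[mask]]))
fit = sm.OLS(debt[mask], X).fit()
print(f"estimated effect on debt incidence: {fit.params[1]:+.3f}")   # ~ -0.124
```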

Keywords: cash transfers, fixed effects, household welfare, household debt, micro shocks, regression discontinuity design

Procedia PDF Downloads 31
242 A Hebbian Neural Network Model of the Stroop Effect

Authors: Vadim Kulikov

Abstract:

The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea, developed by the author, that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is meant to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors into spoken color names. After training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNNs). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and its architecture is simpler and biologically more plausible.
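
A toy sketch of the Hebbian mechanism (not the author's CTRNN implementation) illustrates how unequal practice between the word and color pathways yields the Stroop asymmetry:

```python
import numpy as np

# Toy sketch of the Hebbian mechanism (not the author's CTRNN): two input
# modalities (ink color, written word) feed a spoken color-name layer, and the
# word pathway gets more training, mimicking greater reading practice.

rng = np.random.default_rng(1)
n_colors = 4
W_color = np.zeros((n_colors, n_colors))    # color modality -> spoken name
W_word = np.zeros((n_colors, n_colors))     # text modality  -> spoken name

def hebbian_train(W, n_examples, lr=0.01):
    for _ in range(n_examples):
        k = rng.integers(n_colors)
        x = np.zeros(n_colors)
        x[k] = 1.0                          # active input unit for color/word k
        y = x                               # correct spoken color name
        W += lr * np.outer(y, x)            # Hebbian update: dW = lr * y * x^T
    return W

W_color = hebbian_train(W_color, 1_000)     # color naming: less practiced
W_word = hebbian_train(W_word, 10_000)      # word reading: heavily practiced

# Incongruent trial: ink color 0, printed word 1; output-layer evidence
ink, word = np.eye(n_colors)[0], np.eye(n_colors)[1]
evidence = W_color @ ink + W_word @ word
print(evidence[:2])   # word-driven activation dominates -> color naming slowed
```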

Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop

Procedia PDF Downloads 249
241 A Microwave and Millimeter-Wave Transmit/Receive Switch Subsystem for Communication Systems

Authors: Donghyun Lee, Cam Nguyen

Abstract:

Multi-band systems offer a great deal of benefit in modern communication and radar systems. In particular, multi-band antenna-array radar systems, with their extended frequency diversity, provide numerous advantages in detecting, identifying, locating and tracking a wide range of targets, including enhanced detection coverage, accurate target location, reduced survey time and cost, increased resolution, improved reliability and richer target information. Accurate calibration is a critical issue in antenna array systems: amplitude and phase errors in multi-band and multi-polarization antenna array transceivers result in inaccurate target detection, deteriorated resolution and reduced reliability. Furthermore, a digital beamformer without RF-domain phase shifting is less immune to unfiltered interference signals, which can lead to receiver saturation in array systems. Implementing an integrated front-end architecture that supports a calibration function with low insertion loss and a filtering function at the farthest end of an array transceiver is therefore of great interest. We report a dual K/Ka-band T/R/Calibration switch module with a quasi-elliptic dual-bandpass filtering function implemented with a Q-enhanced metamaterial transmission line. A unique dual-band frequency response is incorporated in the reception and calibration paths of the proposed switch module, utilizing a composite right/left-handed metamaterial transmission line coupled with a Colpitts-style negative-resistance circuit. The fully integrated T/R/Calibration switch module, fabricated in 0.18-μm BiCMOS technology, exhibits an insertion loss of 4.9-12.3 dB and isolation of more than 45 dB in the reception, transmission and calibration modes of operation. In the reception and calibration modes, the dual-band frequency response centered at 24.5 and 35 GHz exhibits out-of-band rejection of more than 30 dB below 10.5 GHz and above 59.5 GHz relative to the passbands. The rejection between the passbands reaches more than 50 dB. In all modes of operation, the input 1-dB compression point is between 4 and 11 dBm. Acknowledgement: This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Keywords: microwaves, millimeter waves, T/R switch, wireless communications

Procedia PDF Downloads 148
240 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey shows that reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ, measurable with acceptable accuracy and precision for the analyte, was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control concentrations of aminophylline in plasma were within acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80°C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating drug concentration in rat plasma. The method described in this article uses a simple protein precipitation extraction technique with ultraviolet detection for quantification. It is simple and robust for fast, high-throughput sample analysis at low cost. No interfering peaks were observed at the elution times of aminophylline and the internal standard, and the method had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
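
Two of the validation computations above are easy to sketch (with illustrative numbers, not the study's raw data): calibration linearity over 0.5-40.0 µg/mL and extraction recovery at one QC level:

```python
import numpy as np
from scipy import stats

# Sketch of two validation computations (illustrative numbers, not the study's
# raw data): calibration linearity over 0.5-40.0 ug/mL and extraction recovery.

conc = np.array([0.5, 1.0, 5.0, 10.0, 20.0, 40.0])               # ug/mL calibrators
response = np.array([0.031, 0.060, 0.305, 0.610, 1.220, 2.400])  # analyte/IS ratio

fit = stats.linregress(conc, response)
print(f"slope={fit.slope:.4f} intercept={fit.intercept:.4f} r^2={fit.rvalue**2:.4f}")

# Extraction recovery: extracted response vs. post-extraction spiked response
extracted = np.array([0.298, 0.301, 0.295])      # processed QC replicates
unextracted = np.array([0.318, 0.322, 0.319])    # spiked after extraction
recovery = 100.0 * extracted.mean() / unextracted.mean()
print(f"recovery = {recovery:.1f}%")             # consistency expected at all QC levels
```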

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 203
239 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: they cannot easily expand or revise their memory, they cannot straightforwardly provide insight into their predictions, and they may produce "hallucinations." Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure, which pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights drawn from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG's dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI's emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
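
A minimal RAG skeleton (schematic only; `embed` and `llm_generate` are hypothetical stand-ins for the dense encoder and seq2seq generator described above) makes the dual parametric/non-parametric structure concrete:

```python
import numpy as np

# Minimal RAG skeleton (schematic, not the paper's implementation). `embed` and
# `llm_generate` are hypothetical stand-ins for the dense encoder and seq2seq
# generator; retrieval is nearest-neighbor over a small compliance corpus.

corpus = [
    "GDPR Article 33: notify the supervisory authority of a breach within 72 hours.",
    "SOC 2 requires documented incident response procedures.",
    "NIST CSF functions: identify, protect, detect, respond, recover.",
]

def embed(text):
    # Random stand-in encoder; a real system would call a trained embedding
    # model, so retrieval here is structural, not semantic.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

index = np.stack([embed(doc) for doc in corpus])      # dense vector index

def retrieve(query, k=2):
    scores = index @ embed(query)                     # cosine similarity
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def llm_generate(prompt):
    return "[compliance narrative conditioned on]\n" + prompt   # stand-in

query = "What is our breach notification obligation?"
context = "\n".join(retrieve(query))
print(llm_generate(f"Context:\n{context}\n\nQuestion: {query}"))
```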

Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies

Procedia PDF Downloads 75
238 Regulatory Measures on Effective Nuclear Security and Safeguards System in Nigeria

Authors: Nnodi Chinweikpe Akelachi, Adebayo Oladini Kachollom Ifeoma

Abstract:

Insecurity and the possession of nuclear weapons for non-peaceful purposes constitute a major threat to global peace and security and undermine the capacity for sustainable development. In Nigeria, the threat of terrorism is a challenge to national stability. For over a decade, Nigeria has faced insecurity ranging from the Boko Haram terrorist group to kidnapping and banditry. The threat posed by these non-state actors is a huge challenge for high-risk nuclear and radiological facilities in Nigeria. This challenge has led the regulatory authority and international stakeholders to formulate policies for a sound mitigation strategy. This strategy is enshrined in laws, regulations and guides such as the repealed Nuclear Safety and Radiation Protection Act 19 of 1995 (Nuclear Safety, Physical Security and Safeguards Bill), the Nigerian Physical Protection of Nuclear Material and Nuclear Facilities Regulations, and the Nigerian Nuclear Safeguards Regulations of 2021. All of this supports Nigeria's effort to meet its national nuclear security and safeguards obligations. To further enhance the implementation of its nuclear security and safeguards system, Nigeria signed the Non-Proliferation Treaty (NPT) in 1970, the Comprehensive Safeguards Agreement (INFCIRC/358) in 1988, the Additional Protocol in 2007, as well as the Convention on the Physical Protection of Nuclear Material and its amendment in 2005. In view of the evolving threats from non-state actors in Nigeria, physical protection security upgrades are being implemented in nuclear and all high-risk radiological facilities with the support of the United States Department of Energy (US-DOE). The IAEA has also helped strengthen nuclear security and safeguards systems through technical assistance and capacity development. Efforts are being made to address some of the challenges identified in the course of implementing the measures for effective nuclear security and safeguards systems in Nigeria; however, notable challenges remain in their implementation, and these need to be addressed for an effective security and safeguards regime. This paper addresses the challenges encountered in implementing the regulatory and stakeholder measures for an effective security and safeguards regime in Nigeria, amongst others.

Keywords: nuclear regulatory body, nuclear facilities and activities, international stakeholders, security and safeguards measures

Procedia PDF Downloads 99
237 A Corpus-Based Study of Evaluative Language in Leading Articles in British Broadsheet and Tabloid Newspapers

Authors: Fatimah AlSaiari

Abstract:

In recent years, newspapers in the United Kingdom have no longer been just a means of sharing news about what happens in the world; they are also used to influence target readers by making them more up-to-date, well-informed, entertained, exasperated, delighted, or infuriated. To achieve these objectives and maintain influence on public opinion, journalists use a particular kind of language through which they convey emotions and opinions, organize their discourse, and establish solidarity with their audience. This type of language has been widely analyzed under different labels, such as evaluation, appraisal, and stance. There is a considerable amount of linguistic and non-linguistic research devoted to analyzing this interpersonal language in journalistic discourse, and most of these studies were carried out to challenge traditional assumptions about the objectivity and impartiality of news reporting. However, very little research has been undertaken on evaluative language in newspaper institutional editorials, and there is hardly any systematic or exhaustive analysis of this type of language in British tabloid and broadsheet newspapers. This study attempts to provide new insights into the nature of authorial and non-authorial evaluation in leading articles in popular and quality British newspapers, along with their targets, sources, and discourse functions. The study also attempts to develop a framework of evaluation that can be applied to evaluative lexical items in newspaper opinion texts. The framework is both theory-driven (i.e., it builds on and modifies previous frameworks of evaluation, such as appraisal theory and the parameter-based approach) and data-driven (i.e., it elicits the evaluative categories from the analysis of the corpus, which informs the development of the framework). To this end, a corpus of 140 leading articles was compiled. The findings revealed that the tabloids tended to express their stance through explicitness, dramatization, frequent reference to social actors' emotions and beliefs, and exaggerated negativity, while the broadsheets preferred to express their stance through mitigation, ambiguity and implicitness. Conceptual themes and propositions were the preferred targets for expressing stance in the broadsheets, while human behavior and character were the preferred targets for the tabloids.

Keywords: appraisal theory, evaluative language, British newspapers, broadsheets & tabloids, evaluative adjectives

Procedia PDF Downloads 274
236 Opportunities and Challenges in Midwifery Education: A Literature Review

Authors: Abeer M. Orabi

Abstract:

Midwives are seen as a key factor in returning birth care to a normal, woman-centered physiologic process. On the other hand, more needs to be done to give every woman access to professional midwifery care. Because of the nature of the midwifery specialty, mistakes in care arising from a lack of knowledge have the potential to affect a large proportion of the birthing population. The development, running, and management of midwifery educational programs should therefore follow international standards and come after a thorough community needs assessment. At the same time, the number of accredited midwifery educational programs needs to be increased so that larger numbers of midwives are educated and qualified and access to skilled midwifery care increases. Indeed, the selection of promising midwives is important for the successful completion of an educational program, the achievement of program goals, and the retention of graduates in the field. Further, the number of schooled midwives in midwifery education programs, their background, and their experience are concerns in the higher education sector. Preceptors and clinical sites are major contributors to the midwifery education process, as educational programs rely on them to provide clinical practice opportunities. In this regard, the selection of clinical training sites should be based on criteria that ensure their readiness for the intended training experiences, and communication, collaboration, and liaison between teaching faculty and field staff should be maintained thereafter. However, the shortage of clinical preceptors and the massive reduction in the number of practicing midwives, in addition to unmanageable workloads, act as significant barriers to midwifery education. Moreover, the medicalized approach inherent in the hospital setting makes it difficult to practice the midwifery model of care, including watchful waiting, non-interference in normal processes, and judicious use of interventions. Furthermore, creating a motivating study environment is crucial for avoiding unnecessary withdrawal from, and improving retention in, any educational program. It is well understood that research is an essential component of any profession for achieving its optimal goals and providing a foundation and evidence for its practices, and midwifery is no exception. Midwives have been playing an important role in generating their own research. However, the selection of novel, researchable, and sustainable topics that consider community health needs is also a challenge. In conclusion, ongoing education and research are the lifeblood of the midwifery profession, offering a highly competent and qualified workforce. However, many challenges are being faced, and barriers are hindering improvement.

Keywords: barriers, challenges, midwifery education, educational programs

Procedia PDF Downloads 99
235 Structural Analysis and Modelling in an Evolving Iron Ore Operation

Authors: Sameh Shahin, Nannang Arrys

Abstract:

Optimizing pit slope stability and reducing the strip ratio of a mining operation are two key tasks in geotechnical engineering. With growing demand for minerals and increasing extraction costs, companies are constantly re-evaluating the viability of mineral deposits and challenging their geological understanding. Within Rio Tinto Iron Ore, the Structural Geology (SG) team investigates and collects critical data, such as point-based orientations, mapping, and geological inferences from adjacent pits, to re-model deposits where previous interpretations have failed to account for structurally controlled slope failures. Utilizing innovative data collection methods and data-driven investigation, SG aims to address the root causes of slope instability. Committing to a resource grid drill campaign as the primary source of data collection will often bias data collection toward a specific orientation and significantly reduce the capability to identify and qualify complexity. Consequently, these limitations make it difficult to construct a realistic and coherent structural model that identifies adverse structural domains. Without consideration of complexity and the capability to capture these structural domains, mining operations run the risk of inadequately designed slopes that may fail and potentially harm people. Regional structural trends have been considered in conjunction with surface and in-pit mapping data to model multi-batter fold structures that were absent from previous iterations of the structural model. The risk is evident in the newly identified dip-slope and rock-mass-controlled sectors of the geotechnical design, rather than a ubiquitous dip-slope sector across the pit. The reward is two-fold: 1) sectors of rock-mass-controlled design in previously interpreted structurally controlled domains, and 2) the opportunity to optimize the slope angle for mineral recovery and a reduced strip ratio. The result is a high-confidence model with structures and geometries that can account for historic slope instabilities in structurally controlled domains where design assumptions failed.

Keywords: structural geology, geotechnical design, optimization, slope stability, risk mitigation

Procedia PDF Downloads 22
234 Gear Fault Diagnosis Based on Optimal Morlet Wavelet Filter and Autocorrelation Enhancement

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Condition monitoring is used to increase machinery availability and performance, while reducing consequential damage, increasing machine life, reducing spare parts inventories, and reducing breakdown maintenance. An efficient condition monitoring system provides early warning of faults by predicting them at an early stage. When a localized fault occurs in gears, the vibration signals always exhibit non-stationary behavior: the periodic impulsive feature of the vibration signal appears in the time domain, and the corresponding gear mesh frequency (GMF) emerges in the frequency domain. However, one limitation of frequency-domain analysis is its inability to handle non-stationary waveform signals, which are very common when machinery faults occur. Particularly at the early stage of gear failure, the GMF contains very little energy and is often overwhelmed by noise and higher-level macro-structural vibrations. An effective signal processing method is therefore necessary to remove such corrupting noise and interference. In this paper, a new hybrid method based on an optimal Morlet wavelet filter and autocorrelation enhancement is presented. First, to eliminate the frequencies associated with interferential vibrations, the vibration signal is filtered with a band-pass filter determined by a Morlet wavelet whose parameters are selected or optimized based on maximum kurtosis. Then, to further reduce the residual in-band noise and highlight the periodic impulsive feature, an autocorrelation enhancement algorithm is applied to the filtered signal. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, while the output dynamometers induce a load on the output joint shaft flanges. A pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. The gearbox used for the experimental measurements is of the type most commonly used in modern small to mid-sized passenger cars with transversely mounted powertrain and front-wheel drive: a five-speed gearbox with final drive gear and front-wheel differential. The results obtained from practical experiments prove that the proposed method is very effective for gear fault diagnosis.
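
The two-stage procedure can be sketched on a synthetic signal (illustrative parameters, not the gearbox measurements): band-pass with a Morlet wavelet chosen by maximum kurtosis, then autocorrelate to enhance the periodic impulses:

```python
import numpy as np
from scipy.stats import kurtosis

# Simplified sketch of the two-stage method on a synthetic signal (illustrative
# parameters, not the gearbox data): Morlet band-pass chosen by maximum
# kurtosis, then autocorrelation to enhance the periodic impulses.

fs = 10_000
t = np.arange(0.0, 0.5, 1.0 / fs)
impulses = (np.sin(2 * np.pi * 50 * t) > 0.999).astype(float)   # ~50 Hz fault hits
signal = impulses * np.sin(2 * np.pi * 3000 * t) + 0.5 * np.random.randn(t.size)

def morlet_filter(x, fc, bw):
    """Band-pass by convolution with a Morlet wavelet (center fc, envelope bw s)."""
    tw = np.arange(-0.01, 0.01, 1.0 / fs)
    wavelet = np.exp(-tw**2 / (2 * bw**2)) * np.cos(2 * np.pi * fc * tw)
    return np.convolve(x, wavelet, mode="same")

# Grid-search the wavelet parameters for maximum kurtosis (impulsiveness index)
grid = [(fc, bw) for fc in (1000, 2000, 3000, 4000) for bw in (5e-4, 1e-3)]
best = max(grid, key=lambda p: kurtosis(morlet_filter(signal, *p)))
filtered = morlet_filter(signal, *best)

# Autocorrelation enhancement: periodic impulses survive, broadband noise decays
ac = np.correlate(filtered, filtered, mode="full")[filtered.size - 1:]
print("selected (fc, bw):", best, "| autocorrelation peaks at the fault period")
```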

Keywords: wavelet analysis, pitted gear, autocorrelation, gear fault diagnosis

Procedia PDF Downloads 373
233 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe

Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou

Abstract:

In recent years, the diversification and industrialization of drug-related crimes have posed significant threats to public health and safety globally. The widespread use of drugs, the increasingly younger demographics of drug users, and the persistence of drug-impaired driving incidents underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes. It relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses: they require extensive, complex laboratory-based preprocessing, expensive equipment, and specialized personnel, and are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and Aggregation-Induced Emission (AIE) technology. Nucleic acid aptamers, selected artificially for their specific binding to target molecules and stable spatial structures, represent a new generation of biosensors following antibodies. Rapid advancements in AIE technology, particularly in tetraphenylethene-based luminogens, offer simplicity in synthesis and versatility in modification, making them ideal for fluorescence analysis. This work successfully synthesized, isolated, and purified an AIE molecule and constructed a probe comprising the AIE molecule, a nucleic acid aptamer, and an exonuclease for cocaine detection. The probe demonstrated significant changes in relative fluorescence intensity and selectivity towards cocaine over other drugs. Using 4-butoxytriethylammonium bromide tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit, and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 minutes in aqueous solution and urine, with detection limits of 1.0 and 5.0 µmol/L, respectively. The probe maintained stability and interference resistance in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, offers a basis for rapid on-site cocaine detection, and holds promise for miniaturized testing setups.
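
For orientation, the way a detection limit like 1.0 µmol/L is typically derived from such calibration data can be sketched (made-up points and blank spread, not the study's measurements):

```python
import numpy as np
from scipy import stats

# Illustrative calculation (made-up calibration points and blank spread, not
# the study's data): a detection limit from the fluorescence calibration slope,
# using the common LOD = 3.3 * sd(blank) / slope convention.

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])          # umol/L cocaine
delta_f = np.array([0.00, 0.09, 0.19, 0.48, 0.95, 1.90])   # relative intensity change

fit = stats.linregress(conc, delta_f)
sd_blank = 0.028          # assumed std. dev. of replicate blank readings
lod = 3.3 * sd_blank / fit.slope
print(f"slope = {fit.slope:.3f} per umol/L, LOD ~ {lod:.2f} umol/L")
```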

Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine

Procedia PDF Downloads 47
232 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address this problem, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code is derived from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance overall network performance. For system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler for broadcasting the transmitted optical signals to the input port of each optical receiver; the parameter N is the length of the PMS code. The proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders matching the assigned PMS codewords. Each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the analysis, some assumptions were made: first, the unpolarized SLD has a flat power spectral density (PSD); second, the received optical power at the input port of each optical receiver is the same; third, all photodiodes in the proposed network have the same electrical properties; fourth, bits '1' and '0' are transmitted with equal probability. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network improves performance by about 25% over networks using other codes at a BER of 10⁻⁹. This is because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a promising candidate for next-generation optical networks.
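
The M-sequence origin of the code family can be sketched (an illustrative period-15 construction, not the exact PMS design): generate the sequence with a linear-feedback shift register and inspect the in-phase cross-correlation between shifted codewords:

```python
import numpy as np

# Sketch of the code-family idea (an illustrative period-15 M-sequence, not the
# exact PMS construction): generate the sequence with a linear-feedback shift
# register and inspect the in-phase cross-correlation (IPCC) between codewords.

def m_sequence(length=15):
    """Period-15 M-sequence from a 4-stage LFSR (feedback polynomial x^4+x+1)."""
    state = [1, 0, 0, 0]
    seq = []
    for _ in range(length):
        seq.append(state[-1])                  # output the last stage
        feedback = state[0] ^ state[-1]        # taps at stages 1 and 4
        state = [feedback] + state[:-1]        # shift the register
    return np.array(seq)

base = m_sequence()
codes = np.stack([np.roll(base, s) for s in range(3)])   # one codeword per user

for i in range(3):
    print("  ".join(f"IPCC({i},{j})={int(codes[i] @ codes[j])}" for j in range(3)))
# In-phase autocorrelation is 8 and cross-correlation is 4 for every pair: the
# fixed-IPCC property of M-sequence codes that the PMS construction builds on.
```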

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 374
231 AIR SAFE: An Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms

Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita

Abstract:

Nowadays, people spend most of their time in closed environments, in offices, or at home. Secure and highly livable environmental conditions are therefore needed to reduce the probability of airborne viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people's well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) an automated and cost-effective monitoring network; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments by leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take corrective measures that ensure the occupants' wellness. The data are analyzed with AI algorithms able to predict future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening or closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. In this contribution, we present the results of the AI algorithm we implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
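
A minimal sketch of the prediction stage (simulated CO₂ readings and an assumed comfort threshold, not AIR SAFE data) illustrates the lag-window forecasting that drives the control actions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal sketch of the prediction stage (simulated CO2 readings and an assumed
# comfort threshold, not AIR SAFE data): forecast the next sample from a window
# of lagged samples, the kind of output that drives window/HVAC actions.

rng = np.random.default_rng(42)
t = np.arange(2000)
co2 = 600 + 150 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 10, t.size)  # ppm

lags = 12   # one hour of 5-minute samples
X = np.stack([co2[i:i + lags] for i in range(co2.size - lags)])
y = co2[lags:]

model = LinearRegression().fit(X[:1500], y[:1500])
pred = model.predict(X[1500:])
print(f"mean absolute error: {np.mean(np.abs(pred - y[1500:])):.1f} ppm")

if pred[-1] > 1000:   # assumed comfort limit in ppm
    print("action: open window / increase ventilation")
```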

Keywords: air quality, internet of things, artificial intelligence, smart home

Procedia PDF Downloads 77
230 Detection of Glyphosate Using Disposable Sensors for Fast, Inexpensive and Reliable Measurements by Electrochemical Technique

Authors: Jafar S. Noori, Jan Romano-deGea, Maria Dimaki, John Mortensen, Winnie E. Svendsen

Abstract:

Pesticides have been used intensively in agriculture to control weeds, insects, fungi, and pests. One of the most commonly used pesticides is glyphosate. Glyphosate can attach to soil colloids and be degraded by soil microorganisms. As glyphosate led to the appearance of resistant species, the pesticide came to be used even more intensively, and as a consequence of this heavy use, residues of the compound are increasingly observed in food and water. Recent studies have reported a direct link between glyphosate and chronic effects such as teratogenic, tumorigenic and hepatorenal effects, even when exposure was below the lowest regulatory limit. Today, pesticides are detected in water by complicated and costly manual procedures conducted by highly skilled personnel, and it can take up to several days to get an answer regarding the pesticide content of a water sample. An alternative to this demanding procedure is offered by electrochemical measurement techniques. Electrochemistry is an emerging technology with the potential to identify and quantify several compounds in a few minutes. It is currently not possible to detect glyphosate directly in water samples, and intensive research is underway to enable direct, selective, and quantitative detection of glyphosate in water. This study focuses on developing and modifying a sensor chip that can selectively measure glyphosate and minimize signal interference from other compounds. The sensor is a silicon-based chip with dimensions of 10×20 mm, fabricated in a cleanroom facility and comprising a three-electrode configuration. The deposited electrodes consist of a 20 nm chromium layer and 200 nm of gold, and the working electrode is 4 mm in diameter. The working electrodes are modified by creating molecularly imprinted polymers (MIPs) using an electrodeposition technique, which allows the chip to selectively measure glyphosate at low concentrations. The modification uses gold nanoparticles with a diameter of 10 nm functionalized with 4-aminothiophenol; this configuration allows the nanoparticles to bind to the working electrode surface and create the template for glyphosate. An initial potential for the identification of glyphosate was estimated to be around -0.2 V. The developed sensor was tested on six different concentrations and was able to detect glyphosate down to 0.5 mg L⁻¹. This value is below the accepted pesticide limit of 0.7 mg L⁻¹ set by US regulation. The current focus is on optimizing the functionalization procedure in order to achieve glyphosate detection at the EU regulatory limit of 0.1 µg L⁻¹. To the best of our knowledge, this is the first attempt to modify miniaturized sensor electrodes with functionalized nanoparticles for glyphosate detection.

Keywords: pesticides, glyphosate, rapid, detection, modified, sensor

Procedia PDF Downloads 164
229 A Multi-Role Oriented Collaboration Platform for Distributed Disaster Reduction in China

Authors: Linyao Qiu, Zhiqiang Du

Abstract:

With rapid urbanization, economic development, and steady population growth in China, the widespread devastation, economic damage, and loss of human life caused by numerous forms of natural disasters are becoming increasingly serious every year. Disaster management requires the available and effective cooperation of different roles and organizations across the whole process, including mitigation, preparedness, response and recovery. Due to the imbalance of regional development in China, the disaster management capabilities of the national and provincial disaster reduction centers are uneven. When an undeveloped area suffers a disaster, the local reduction department can neither independently obtain first-hand information, such as high-resolution remote sensing images from satellites and aircraft, nor directly access data resources deployed elsewhere, since no sharing mechanism is provided. Most existing disaster management systems operate in a typical passive, data-centric mode and serve a single department, so resources cannot be fully shared. This impediment keeps local departments and groups from quick emergency response and decision-making. In this paper, we introduce a collaborative platform for distributed disaster reduction. To address the imbalance in shared data sources and technology in the disaster reduction process, we propose a multi-role-oriented collaboration business mechanism, capable of scheduling and allocating multiple resources for optimum utilization, to link the various roles in collaborative reduction work across different places. The platform fully considers the differences in equipment conditions among provinces and provides several service modes to satisfy the technology needs of disaster reduction. An integrated collaboration system based on a focusing-service mechanism is designed and implemented for resource scheduling, functional integration, data processing, task management, collaborative mapping, and visualization. Actual applications illustrate that the platform supports data sharing and business collaboration between national and provincial departments well and could significantly improve disaster reduction capability in China.

Keywords: business collaboration, data sharing, distributed disaster reduction, focusing service

Procedia PDF Downloads 279
228 Seagrass Biomass Distribution in Mangrove Fringed Creeks of Gazi Bay, Kenya

Authors: Gabriel A. Juma, Adiel M. Magana, Githaiga N. Michael, James G. Kairo

Abstract:

Seagrass meadows are important carbon sinks; understanding this role and conserving these meadows provides opportunities for their application in climate change mitigation and adaptation. This study aimed at understanding the seagrass contribution to ecosystem carbon at Gazi Bay by comparing carbon stocks in the seagrass beds of two mangrove-fringed creeks of the bay. Specifically, the objectives included assessing the distribution and abundance of seagrass in the fringed creeks and estimating above- and below-ground biomass. The results would be added to the mangrove and open-bay carbon in estimating the total ecosystem carbon of Gazi Bay. A stratified random sampling strategy was applied. Transects were laid perpendicular to the waterline at intervals of 50 m, from the upper region near the mangroves to the deeper end of each creek, across the seagrass meadows. Along these transects, 0.25 m² quadrats were laid every 10 m to assess the distribution and composition of seagrasses in the creeks; a total of 80 plots were sampled. Above-ground biomass was sampled by harvesting all the seagrass material within the quadrat, while four sediment cores were obtained, one from each quarter of the quadrat, and sorted into necromass, rhizomes and roots to determine below-ground biomass. Samples were cleaned and oven-dried in the laboratory for 72 hours at 60°C. Carbon was determined by multiplying biomass by a carbon conversion factor of 0.34. In all statistical tests, the significance level was set at α = 0.05. Eight species of seagrass were encountered in the western creek (WC) and seven in the eastern creek (EC). Based on importance value, the dominant species in the WC were Cymodocea rotundata and Halodule uninervis, while Thalassodendron ciliatum and Enhalus acoroides dominated the EC. The seagrass cover was 67.97% in the EC, compared with 56.45% in the WC. There was a significant difference in the abundance of seagrass species between the two creeks (t = 1.97, D.F. = 35, p < 0.05). Similarly, there were significant differences in total seagrass biomass (t = -8.44, D.F. = 53, p < 0.05) and species composition (F(7,79) = 14.6, p < 0.05) between the two creeks. The mean seagrass carbon stock in the creeks was 7.25 ± 4.2 Mg C ha⁻¹ (range: 4.1-12.9 Mg C ha⁻¹). The findings reveal variations in the biomass stocks of the two creeks of Gazi Bay, which have differing biophysical features, and establish that habitat heterogeneity between the creeks contributes to the variation in seagrass abundance and biomass stocking. This enhances the understanding of these ecosystems and supports the establishment of seagrass carbon offset projects for livelihood improvement and increased conservation.
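
The biomass-to-carbon conversion is a one-line computation; a short sketch (illustrative quadrat weights, not the Gazi Bay measurements) shows the path from quadrat dry weight to Mg C ha⁻¹:

```python
import numpy as np

# Worked sketch of the biomass-to-carbon conversion (illustrative quadrat
# weights, not the Gazi Bay measurements).

quadrat_area_m2 = 0.25
carbon_fraction = 0.34                 # conversion factor used in the study

dry_weight_g = np.array([480.0, 620.0, 350.0, 530.0])   # total dry biomass per quadrat

biomass_g_m2 = dry_weight_g / quadrat_area_m2
carbon_g_m2 = biomass_g_m2 * carbon_fraction
carbon_mg_ha = carbon_g_m2 * 0.01      # 1 g/m^2 = 0.01 Mg/ha

print(f"mean carbon stock: {carbon_mg_ha.mean():.2f} +/- {carbon_mg_ha.std():.2f} Mg C ha^-1")
```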

Keywords: seagrass, above-ground, below-ground, creeks, Gazi bay

Procedia PDF Downloads 116
227 On-Chip Ku-Band Bandpass Filter with Compact Size and Wide Stopband

Authors: Jyh Sheen, Yang-Hung Cheng

Abstract:

This paper presents the design of a microstrip bandpass filter with a compact size and wide stopband in a 0.15-μm GaAs pHEMT process. The wide stopband is achieved by suppressing the first and second harmonic resonance frequencies. A slow-wave coupled stepped-impedance resonator with a cross-coupled structure is adopted for the bandpass filter design. A two-resonator filter with a 13.5 GHz center frequency and 11% bandwidth was fabricated; it occupies a compact area and exhibits a very low insertion loss of 2.6 dB. The devices were simulated with the ADS design software. Microstrip planar bandpass filters have been widely adopted in communication applications for their compact size and ease of fabrication, and various planar resonator structures have been suggested. To achieve a wide stopband that reduces interference outside the passband, many planar resonator designs have also been proposed to suppress the higher-order harmonics of the design center frequency. Modifications of the traditional hairpin structure have been introduced to shrink the large design area of hairpin filters; the stepped-impedance, slow-wave open-loop, and cross-coupled resonator structures have all been studied to miniaturize hairpin resonators. In this study, to suppress the spurious harmonic bands and further reduce the filter size, a modified hairpin-line bandpass filter with a cross-coupled structure is proposed that combines the stepped-impedance resonator design with the slow-wave open-loop resonator structure. In this way, a very compact circuit size and a very wide upper stopband can be realized on a Rogers 4003C substrate. Filters constructed with integrated-circuit technology, on the other hand, are attractive because they enable integration of the microwave system on a single chip (SOC). To examine the performance of this design structure as an integrated circuit, the filter was fabricated in the 0.15-μm GaAs pHEMT process, which also provides much better circuit performance at high frequencies than a PCB implementation. The design example was implemented in GaAs with the center frequency at 13.5 GHz to examine its performance at higher frequency in detail; the occupied area is only about 1.09 × 0.97 mm².
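
As a rough sanity check on the miniaturization claimed above — this back-of-the-envelope half-wavelength estimate and the assumed effective permittivity are ours, not values from the paper — a conventional (non-slow-wave) microstrip resonator at 13.5 GHz would be on the order of:

```python
import math

# Rough size estimate for a conventional half-wavelength microstrip
# resonator at 13.5 GHz. The effective permittivity is an assumed
# illustrative number for GaAs (eps_r ~ 12.9); slow-wave and
# stepped-impedance techniques shrink the length well below this.

c = 299_792_458.0        # speed of light, m/s
f0 = 13.5e9              # design center frequency, Hz
eps_eff = 7.0            # assumed effective permittivity

guided_wavelength = c / (f0 * math.sqrt(eps_eff))
resonator_length = guided_wavelength / 2
print(f"lambda_g = {guided_wavelength*1e3:.2f} mm, "
      f"lambda_g/2 = {resonator_length*1e3:.2f} mm")
# ~8.4 mm guided wavelength -> ~4.2 mm resonator, versus the reported
# 1.09 x 0.97 mm2 chip, illustrating the miniaturization achieved.
```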

Keywords: microstrip resonator, bandpass filter, harmonic suppression, GaAs

Procedia PDF Downloads 312
226 Influence of Microparticles in the Contact Region of Quartz Sand Grains: A Micro-Mechanical Experimental Study

Authors: Sathwik Sarvadevabhatla Kasyap, Kostas Senetakis

Abstract:

The mechanical behavior of geological materials is very complex, and this complexity is related to the discrete nature of soils and rocks. Grain-scale characteristics of a material, such as particle size and shape, surface roughness and morphology, and the particle contact interface, are critical for evaluating and understanding the behavior of discrete materials. This study investigates experimentally the micro-mechanical behavior of quartz sand grains, with emphasis on the influence of microparticles present in their contact region. The outputs provide fundamental insights into the contact mechanics of artificially coated grains and can supply useful input parameters for the discrete element modeling (DEM) of soils. In nature, the contact interfaces between real soil grains commonly carry microparticles. This is usually the case in sand-silt and sand-clay mixtures, where the finer particles may coat the surface of the coarser grains, altering the micro-scale, and thus the macro-scale, response of the material. In this study, the micro-mechanical behavior of Leighton Buzzard Sand (LBS) quartz grains, with different microparticles interposed at their contact interfaces, is studied in the laboratory using an advanced custom-built inter-particle loading apparatus. Special techniques were adopted to develop the coating on the quartz grain surfaces so as to establish the repeatability of the coating technique. The microstructure of the coated particle surfaces was characterized by element composition analyses, microscopic images, surface roughness measurements, and single-particle crushing strength tests. The mechanical responses studied include the normal and tangential load-displacement behavior, the tangential stiffness behavior, and the normal contact behavior under cyclic loading. The behavior of coated LBS particles is compared across the different coating classes and with pure LBS (i.e., with surfaces cleaned to remove any microparticles), and damage on the particle surfaces is analyzed from microscopic images. Extended displacements in both the normal and tangential directions were observed for coated LBS particles, owing to the plastic nature of the coating material, and varied with the amount of coating. The tangential displacement required to reach a steady state was delayed by the presence of microparticles in the contact region of grains under shearing. Increased tangential loads and friction coefficients were observed for the coated grains in comparison with the uncoated quartz grains.
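
For context on the inter-particle loading measurements described above, the classical reference model for the normal load-displacement response of two contacting elastic spheres is Hertzian theory. The sketch below is a textbook illustration with assumed quartz-like parameters, not the authors' model; their plastically coated grains deviate from the Hertzian response by design:

```python
import numpy as np

# Hertzian normal contact between two identical elastic spheres:
#   F = (4/3) * E_star * sqrt(R_eff) * delta^1.5
# Reference model only, with assumed quartz-like parameters; the coated
# grains in the study depart from this due to the plastic coating layer.

E = 76e9          # Young's modulus of quartz, Pa (typical literature value)
nu = 0.17         # Poisson's ratio of quartz
R1 = R2 = 1.0e-3  # grain radii, m (illustrative ~2 mm diameter grains)

E_star = 1.0 / (2 * (1 - nu**2) / E)   # effective modulus, identical spheres
R_eff = 1.0 / (1/R1 + 1/R2)            # effective radius

def hertz_force(delta_m):
    """Normal force (N) at normal displacement (overlap) delta (m)."""
    return (4/3) * E_star * np.sqrt(R_eff) * delta_m**1.5

for delta in (0.1e-6, 0.5e-6, 1.0e-6):  # 0.1-1.0 micron overlap
    print(f"delta = {delta*1e6:.1f} um -> F = {hertz_force(delta):.3f} N")
```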

Keywords: contact interface, microparticles, micro-mechanical behavior, quartz sand

Procedia PDF Downloads 180
225 The Language of Risk: Pregnancy and Childbirth in the COVID-19 Era

Authors: Sarah Holdren, Laura Crook, Anne Drapkin Lyerly

Abstract:

Objective: The COVID-19 pandemic has drawn new attention to long-standing bioethical questions around pregnancy, childbirth, and parenthood. Because of the increased risk of severe COVID-19 during pregnancy, pregnant individuals may experience anxiety in medical decision-making. Especially in the case of hospital births, questions about the ethics of bringing healthy pregnant individuals into a high-risk environment for viral transmission expose gaps in the American maternal and child healthcare system, yet limited research has examined the experiences of those who gave birth outside hospitals during this time. This study aims to understand how pregnant individuals conceptualized risk during the COVID-19 pandemic. Methods: Individuals who gave birth after March 2020 were recruited through advertisements on social media. Participants completed a one-hour semi-structured interview and a demographic questionnaire. Interviews were transcribed and coded by members of the research team using thematic narrative analysis. Results: A total of 18 participants were interviewed and completed the demographic questionnaire. The language of risk was used in the birth narratives in three ways, highlighting the multilevel and nuanced ways in which risk is understood and mitigated by pregnant and birthing individuals: 1. the risk of contracting COVID-19 before, during, and after birth; 2. the risk of birth complications requiring medical interventions, which depends on the chosen birthing space (home, birthing center, hospital); and 3. the overall risk of creating life in the middle of a pandemic. The risk of contracting COVID-19 and the risk of birth complications were often weighed against each other in paradoxical ways throughout each individual's pregnancy, while phrases such as "pandemic baby" and "apocalypse" recurred across the narratives and highlighted the broader implications of pregnancy and childbirth during this momentous time. Conclusions: Healthcare professionals should consider the variety of ways pregnant and birthing individuals understand risk when counseling patients on healthcare decisions, especially during healthcare crises such as COVID-19. Future work should examine how the language of risk fits into a broader understanding of the human experience of growing life in times of crisis.

Keywords: maternal and child health, thematic narrative analysis, COVID-19, risk mitigation

Procedia PDF Downloads 152
224 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado

Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha

Abstract:

Stream metabolism is an indicator of ecosystem disturbance, reflecting the influence of the catchment on the structure of the water bodies. Studying respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed to evaluate the metabolism of streams in the Brazilian savannah, the Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water over one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR), and Canchim (CA). Every two months, water temperature, pH, and conductivity are measured with a multiparameter probe; nitrogen and phosphorus forms are determined according to standard methods; canopy cover percentages are estimated in situ with a spherical densiometer; and stream flows are quantified by the conservative tracer (NaCl) method. For the metabolism study, DO (PME miniDOT) and light (Odyssey Photosynthetically Active Radiation) sensors log data every ten minutes for at least three consecutive days, and the reaeration coefficient (k2) is estimated by the tracer gas (SF6) method. Finally, we model the variations in DO concentration and calculate the rates of gross and net primary production (GPP and NPP) and respiration (R) with the one-station method described in the literature. Three sampling campaigns were carried out, in October and December 2015 and February 2016 (the next will be in April, June, and August 2016); results from the first two periods are already available. The mean water temperatures in the streams were 20.0 ± 0.8 °C (Oct) and 20.7 ± 0.5 °C (Dec). In general, electrical conductivity values were low (ES: 20.5 ± 3.5 µS/cm; BR: 5.5 ± 0.7 µS/cm; CA: 33 ± 1.4 µS/cm). The mean pH values were 5.0 (BR), 5.7 (ES), and 6.4 (CA). The mean concentrations of total phosphorus were 8.0 µg/L (BR), 66.6 µg/L (ES), and 51.5 µg/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 µg/L. The BR stream had the lowest total nitrogen concentration (0.55 mg/L) compared with CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 ± 6 L/s (ES), 11.4 ± 3 L/s (BR), and 2.4 ± 0.5 L/s (CA), and the average canopy cover percentages were 72% (ES), 75% (BR), and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, giving negative net primary production). GPP varied from 0 to 0.4 g/m²·d (Oct and Dec), and R varied from 0.9 to 22.7 g/m²·d (Oct) and from 0.9 to 7 g/m²·d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of these ecosystems to artificial inputs of oxygen-demanding organic matter. Investigating the metabolism of pristine streams can help define natural reference conditions for trophic state.
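
A minimal numerical sketch of the one-station open-water method referenced above; the function and variable names are ours, since the abstract does not give the authors' implementation details:

```python
import numpy as np

# Minimal sketch of the one-station open-water metabolism method:
#   dDO/dt = GPP(t) - R + k2 * (DOsat - DO)
# At night GPP = 0, so night-time intervals yield R; daytime intervals
# then yield GPP. Names and assumptions are illustrative only.

def one_station_metabolism(do, do_sat, k2, night, depth, dt_days):
    """do, do_sat: DO series (g O2/m3) spanning one 24-h day;
    k2: reaeration coefficient (1/day); night: boolean NumPy mask;
    depth: mean stream depth (m); dt_days: timestep in days."""
    ddo = np.diff(do) / dt_days                 # observed DO rate of change
    rea = k2 * (do_sat[:-1] - do[:-1])          # reaeration flux
    nep = ddo - rea                             # volumetric NEP = GPP - R
    r_vol = -nep[night[:-1]].mean()             # respiration, g O2/(m3*day)
    gpp_areal = (nep[~night[:-1]] + r_vol).sum() * dt_days * depth
    r_areal = r_vol * depth                     # daily areal, g O2/(m2*day)
    return gpp_areal, r_areal, gpp_areal - r_areal   # GPP, R, NPP
```

Under the predominantly heterotrophic conditions reported, the returned NPP would be negative.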

Keywords: low-order streams, metabolism, net primary production, trophic state

Procedia PDF Downloads 245
223 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm that combines the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of the traditional gradient-based optimization process. QAOA was first introduced in 2014, when it performed better than the best-known classical algorithm of the time for Max-Cut on certain graphs. While classical algorithms have since improved and returned to being faster and more efficient, that result was a milestone for quantum computing, and the original work is often used both as a benchmarking tool and as a foundation for exploring QAOA variants. Alongside other famous algorithms such as Grover's or Shor's, it shows the potential that quantum computing holds and the prospect of a real quantum advantage: if the hardware continues to improve, this could usher in a revolutionary era. Since the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when computing solutions, the barren plateaus that hinder the optimization search in the parameter space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and motivate the use of EAs in this work. First, EAs do not rely on gradient-based or linear optimization methods to search the parameter space, and this freedom from gradients should make them suffer less from barren plateaus. Second, because the algorithm searches through a population of candidate solutions, it can be parallelized to speed up the search; the evaluation of the cost function is notoriously slow here, as in many other algorithms, and the ability to parallelize it can drastically improve the competitiveness of QAOA against purely classical algorithms. Third, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with traditional QAOA using the COBYLA optimizer, a linear-approximation-based method, and in some instances can even find a better Max-Cut. While the final objective is an algorithm that consistently beats the original QAOA or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests on an array of different graphs are needed; the parallelization work commences in October 2023, and tests on real hardware are scheduled for early 2024.
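
As a concrete sketch of the hybrid approach described above: the toy example below simulates a depth-1 QAOA statevector for Max-Cut in NumPy and searches the (gamma, beta) parameters with a simple truncation-selection EA. The graph, population settings, and mutation scheme are our illustrative assumptions, not the author's implementation:

```python
import random
import numpy as np

# Toy p=1 QAOA for Max-Cut, optimized by a simple evolutionary algorithm
# instead of a gradient-based method. All settings are illustrative.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph
n = 4
dim = 2**n

# Cut value of every basis state = diagonal of the Max-Cut cost operator
cut = np.array([sum(1 for u, v in edges if (z >> u & 1) != (z >> v & 1))
                for z in range(dim)], dtype=float)

def qaoa_expectation(gamma, beta):
    state = np.full(dim, 1/np.sqrt(dim), dtype=complex)   # |+>^n
    state *= np.exp(-1j * gamma * cut)                    # cost unitary
    rx = np.array([[np.cos(beta), -1j*np.sin(beta)],      # Rx(2*beta)
                   [-1j*np.sin(beta), np.cos(beta)]])
    mixer = rx
    for _ in range(n - 1):                                # Rx on every qubit
        mixer = np.kron(mixer, rx)
    state = mixer @ state                                 # mixer unitary
    return float(np.real(np.vdot(state, cut * state)))   # <C>

def evolve(pop_size=20, generations=40, sigma=0.3):
    pop = [np.random.uniform(0, np.pi, 2) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda p: -qaoa_expectation(*p))
        parents = ranked[:pop_size // 4]                  # truncation selection
        pop = parents + [p + np.random.normal(0, sigma, 2)  # Gaussian mutation
                         for p in random.choices(parents,
                                                 k=pop_size - len(parents))]
    best = max(pop, key=lambda p: qaoa_expectation(*p))
    return best, qaoa_expectation(*best)

params, value = evolve()
print(f"best (gamma, beta) = {np.round(params, 3)}, expected cut = {value:.3f}")
```

Because each individual's fitness evaluation is independent, the generation loop parallelizes naturally, which is the speedup aspect the work plans to exploit.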

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 42
222 A Small-Scale Survey on Risk Factors of Musculoskeletal Disorders in Workers of Logistics Companies in Cyprus and on the Early Adoption of Industrial Exoskeletons as Mitigation Measure

Authors: Kyriacos Clerides, Panagiotis Herodotou, Constantina Polycarpou, Evagoras Xydas

Abstract:

Background: Musculoskeletal disorders (MSDs) are a very common workplace problem in Europe and are caused by multiple risk factors. In recent years, wearable devices and workplace exoskeletons have sought to address the risk factors associated with strenuous tasks. The logistics sector is a huge one, covering warehousing, storage, and transportation, yet its tasks are not well studied in terms of MSD risk. This study examined the MSDs affecting workers of logistics companies: it compares the prevalence of MSDs among workers and evaluates the multiple risk factors that contribute to their development. It also seeks user feedback on the adoption of exoskeletons in such a work environment. Materials and Methods: The study was conducted among workers in logistics companies in Nicosia, Cyprus, from July to September 2022. A set of standardized questionnaires was used for collecting different types of data. Results: A high proportion of logistics professionals reported MSDs in one or more body regions, with the lower back the most commonly affected area. Working in the same position for long periods, working in awkward postures, and handling excessive loads were the most commonly reported job risk factors contributing to the development of MSDs in this study. A significant number of participants consider the back the region most likely to benefit from a wearable exoskeleton device, and half of the participants would like at least a 50% reduction in their daily effort. The characteristics rated most important for the adoption of an exoskeleton device were how comfortable the device is and its weight. Conclusion: The lower back and posture were the leading risk factors among the logistics professionals assessed in this study. A larger-scale study using quantitative analytical tools may give a more accurate estimate of MSDs, which would pave the way for more precise recommendations to eliminate the risk factors and thereby prevent MSDs. A follow-up study using exoskeletons in the workplace should assess whether they assist in MSD prevention.

Keywords: musculoskeletal disorders, occupational health, safety, occupational risk, logistic companies, workers, Cyprus, industrial exoskeletons, wearable devices

Procedia PDF Downloads 85
221 Strategic Innovation of Nanotechnology: Novel Applications of Biomimetics and Microfluidics in Food Safety

Authors: Boce Zhang

Abstract:

Strategic innovation of nanotechnology to promote food safety has drawn tremendous attention among research groups, in part because of the need for research support during the implementation of the Food Safety Modernization Act (FSMA) in the United States. There are urgent demands and knowledge gaps in understanding a) the food-water-bacteria interface and how pathogens persist and transmit during food processing and storage, and b) the minimum processing requirements needed to prevent pathogen cross-contamination in the food system. These knowledge gaps are of critical importance to the food industry, but closing them has been largely hindered by the limitations of existing research tools. Our group recently developed two novel engineering systems, based on biomimetics and microfluidics, as a holistic approach to hazard analysis and risk mitigation; they provide unprecedented opportunities to study pathogen behavior, in particular contamination and cross-contamination, at the critical food-water-pathogen interface. First, biomimetically patterned surfaces (BPS) were developed to replicate the exact surface topography and chemistry of a natural food surface. We demonstrated that BPS is a superior research tool for studying, for example, a) how pathogens persist through sanitizer treatment, and b) how fluidic shear force and surface tension can be applied to increase the vulnerability of bacterial cells by detaching them from protected areas. Second, microfluidic devices were designed and fabricated to study bactericidal kinetics in the sub-second time frame (0.1-1 second). Sub-second kinetics are critical because the cross-contamination process, which includes detachment, migration, and reattachment, can occur within a very short time. With this microfluidic device, we were able to simulate and study these sub-second cross-contamination scenarios and to investigate the minimum sanitizer concentration needed to prevent pathogen cross-contamination during food processing. We anticipate that the findings from these studies will provide critical insight into bacterial behavior at the food-water-cell interface and into the kinetics of bacterial inactivation across a broad range of sanitizers and processing conditions, thus facilitating the development and implementation of science-based food safety regulations and practices that mitigate food safety risks.
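
To make the minimum-sanitizer-concentration reasoning concrete, a generic Chick-Watson disinfection model can relate concentration, contact time, and log reduction. The model form and all numbers below are our illustrative assumptions, not the study's fitted parameters:

```python
# Generic Chick-Watson disinfection model: log10(N/N0) = -k * C^n * t.
# The rate constant k, dilution exponent n, and the reduction target
# are illustrative assumptions, not fitted values from the study.

k = 8.0      # (L/mg)^n per second, assumed
n = 1.0      # concentration exponent, assumed first-order in sanitizer

def log_reduction(conc_mg_L, t_seconds):
    """Predicted log10 reduction at a given concentration and time."""
    return k * conc_mg_L**n * t_seconds

def min_concentration(target_log, t_seconds):
    """Smallest sanitizer concentration giving target_log reduction in t."""
    return (target_log / (k * t_seconds)) ** (1.0 / n)

# Sub-second cross-contamination window of 0.5 s, 5-log reduction target:
c_min = min_concentration(5.0, 0.5)
print(f"{c_min:.2f} mg/L needed in 0.5 s")       # -> 1.25 mg/L
print(f"check: {log_reduction(c_min, 0.5):.1f} log reduction")
```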

Keywords: biomimetic materials, microbial food safety, microfluidic device, nanotechnology

Procedia PDF Downloads 347
220 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector

Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar

Abstract:

Market penetration of high energy efficiency technologies has key impacts on energy consumption and GHG mitigation, and understanding it helps public and private organizations manage the policies formulated to achieve energy or environmental targets. Energy intensity in Alberta's residential sector was 148.8 GJ per household in 2012, 39% above the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Appliance energy intensity in Alberta was 15.3 GJ per household in 2012, 14% higher than the average appliance energy demand intensity of the other Canadian provinces and territories. In this research, a framework has been developed to analyze the market penetration and market share of high energy efficiency technologies in the residential sector. The overall methodology is based on data-intensive models that estimate the market penetration of appliances in the residential sector over a time period. The models are functions of a number of macroeconomic and technical parameters; their mathematical equations were developed from twenty-two years of historical data (1990-2011) and analyzed through a series of statistical tests. The market shares of high-efficiency appliances were estimated from related variables, such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives, and maximum achievable efficiency, over the period 2015 to 2050. Results show that the market penetration of refrigerators is higher than that of the other appliances: the stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 in 2030 and 1.328 in 2050. The market penetration rate of stand-alone freezers, by contrast, will decrease between 2012 and 2050, with freezer stock per household declining from 0.634 in 2012 to 0.556 in 2030 and 0.515 in 2050. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 in 2030 and 0.960 in 2050, while the stocks of clothes washers and clothes dryers rise nearly in parallel, from 0.893 and 0.979 in 2012 to 0.960 and 1.0, respectively, in 2050. The presentation will include a detailed discussion of the modeling methodology and results.
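
One standard way to turn the cost variables listed above into market shares is to annualize life-cycle costs with a capital recovery factor and allocate shares with a logit function. The abstract does not specify the authors' functional form, so the sketch below, including all numbers, is an illustrative assumption:

```python
import math

# Sketch of an annualized-cost + logit market-share model. The cost
# variables match those listed in the abstract, but this functional
# form and all numbers are illustrative assumptions.

def annualized_cost(capital, operating, rate, life, incentive=0.0):
    """Capital recovery factor converts (capital - incentive) to $/yr."""
    crf = rate * (1 + rate)**life / ((1 + rate)**life - 1)
    return (capital - incentive) * crf + operating

def logit_shares(costs, sensitivity=2.0):
    """Cheaper options get larger shares; sensitivity sets how sharply."""
    weights = [math.exp(-sensitivity * c / min(costs)) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Standard vs. high-efficiency refrigerator (hypothetical numbers):
standard = annualized_cost(capital=900, operating=60, rate=0.05, life=15)
efficient = annualized_cost(capital=1200, operating=35, rate=0.05,
                            life=15, incentive=100)
print([round(s, 3) for s in logit_shares([standard, efficient])])
# Lower annualized cost for the efficient unit -> a slightly larger share.
```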

Keywords: appliances efficiency improvement, energy star, market penetration, residential sector

Procedia PDF Downloads 268