Search results for: enhancing magnetic lines

372 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care. A better understanding of increased utilization of health services is therefore essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care, such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or extra zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Moreover, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156), which suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimation (AMLE) can be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test follows an asymptotically standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, existing tests and our proposed test all imply that H0 should be rejected with a p-value less than 0.001; that is, the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with and without measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will yield statistically more reliable and precise information.
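
For readers who want a concrete reference point, the Python sketch below (assuming numpy and scipy) implements the simplest score test for zero inflation against a plain Poisson fit, with no covariates and no measurement error; the AMLE-based test described in this abstract extends the idea to covariates measured with error and is not reproduced here. The simulated sample is hypothetical and only mirrors the magnitudes quoted above (mean around 1.33 with an excess of zeros).

import numpy as np
from scipy import stats

def zip_score_test(y):
    # Score test for zero inflation against a plain Poisson fit
    # (no covariates, no measurement error). Under H0 the statistic
    # is asymptotically chi-square with 1 degree of freedom.
    y = np.asarray(y)
    n = len(y)
    lam_hat = y.mean()                 # Poisson MLE of the mean
    p0_hat = np.exp(-lam_hat)          # expected proportion of zeros under H0
    n0 = np.sum(y == 0)                # observed number of zeros
    denom = n * p0_hat * (1.0 - p0_hat) - n * lam_hat * p0_hat ** 2
    stat = (n0 - n * p0_hat) ** 2 / denom
    return stat, stats.chi2.sf(stat, df=1)

# Hypothetical zero-inflated sample with mean about 1.33, echoing the abstract's magnitudes.
rng = np.random.default_rng(0)
y_sim = np.where(rng.random(590) < 0.1, 0, rng.poisson(1.48, size=590))
print(zip_score_test(y_sim))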

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 264
371 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by the measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bilinear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed the best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
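
As an illustration of the experimental setup, the following tf.keras sketch builds the best-performing configuration reported above (three hidden layers of 100 ReLU units each, mean squared error loss, SGD with a learning rate of 0.01 and batch size of 10). The exact preprocessing, early-stopping settings, and variable names used by the authors are not given in the abstract, so they are assumed here.

import tensorflow as tf

def build_cobb_model(hidden_layers=3, units=100, activation="relu"):
    # Dense regression network: flattened 500 x 187 coronal X-ray -> Cobb angle (degrees).
    model = tf.keras.Sequential([tf.keras.layers.Flatten(input_shape=(500, 187))])
    for _ in range(hidden_layers):
        model.add(tf.keras.layers.Dense(units, activation=activation))
    model.add(tf.keras.layers.Dense(1))                        # single-angle output
    model.compile(optimizer=tf.keras.optimizers.SGD(0.01),     # learning rate 0.01
                  loss="mse", metrics=["mae"])
    return model

model = build_cobb_model()
# model.fit(x_train, y_train, batch_size=10, validation_split=0.1, epochs=200,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=10)])  # x_train, y_train assumed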

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 105
370 Effect of Toxic Metals Exposure on Rat Behavior and Brain Morphology: Arsenic, Manganese

Authors: Tamar Bikashvili, Tamar Lordkipanidze, Ilia Lazrishvili

Abstract:

Heavy metals remain one of the most serious environmental problems due to their toxic effects. The effects of arsenic and manganese compounds on rat behavior and neuromorphology were studied. Wistar rats were assigned to four groups: rats in the control group were given regular water, while rats in the other groups drank water with a final manganese concentration of 10 mg/L (group A) or 20 mg/L (group B), or a final arsenic concentration of 68 mg/L (group C), respectively, for a month. To study exploratory and anxiety behavior, and to evaluate aggressive performance in the home cage, rats were tested in the open field; to estimate learning and memory status, a multi-branched maze was used. A statistically significant increase in motor and orienting-searching activity in the experimental groups was revealed by the open field test, expressed as an increase in the number of lines crossed, rearing, and hole reflexes. The obtained results indicated suppression of fear in rats exposed to manganese; specifically, this was estimated by the frequency of entering the central part of the open field. Experiments revealed that a 30-day exposure to 10 mg/L manganese did not stimulate aggressive behavior in rats, while after exposure to the higher dose (20 mg/L), 37% of initially non-aggressive animals manifested aggressive behavior; furthermore, 25% of rats were extremely aggressive. The obtained data support the hypothesis that excess manganese in the body is one of the immediate causes of enhanced interspecific predatory aggression and violent behavior in rats. It was also discovered that manganese intoxication produces severe, non-reversible learning disability and insignificant, reversible memory disturbances. Studies of rodents exposed to arsenic also revealed changes in the learning process. As is known, the distribution of metal ions differs across brain regions. The principal manganese accumulation was observed in the hippocampus and the neocortex, while arsenic was predominantly accumulated in the nucleus accumbens, striatum, and cortex. These brain regions play an important role in the regulation of emotional state and motor activity. Histopathological analyses of brain sections revealed two morphologically distinct altered phenotypes of neurons: (1) shrunken cells with indications of apoptosis, in which the nucleus and cytoplasm were very difficult to distinguish but the integrity of the neuronal cytoplasm was not disturbed; and (2) swollen cells with indications of necrosis. A pyknotic nucleus, plasma membrane disruption, and cytoplasmic vacuoles were observed in swollen neurons, and they were surrounded by activated gliocytes. It is worth mentioning that in the cortex the majority of damaged neurons were apoptotic, while in the subcortical nuclei neurons were mainly necrotic. Ultrastructural analyses demonstrated that all cell types in the cortex and the nucleus caudatus showed destroyed mitochondria, widened profiles of the neuronal vacuolar system, an increased number of lysosomes, and degeneration of axonal endings.

Keywords: arsenic, manganese, behavior, learning, neuron

Procedia PDF Downloads 338
369 Strategies for Incorporating Intercultural Intelligence into Higher Education

Authors: Hyoshin Kim

Abstract:

Most post-secondary educational institutions offer a wide variety of professional development programs and resources in order to advance the quality of education. Such programs are designed to support faculty members by focusing on topics such as course design, behavioral learning objectives, class discussion, and evaluation methods. These are based on good intentions and might help both new and experienced educators. However, the fundamental flaw is that these 'effective methods' are assumed to work regardless of what we teach and whom we teach. This paper focuses on intercultural intelligence and its application to education. It presents a comprehensive literature review on context and cultural diversity in terms of beliefs, values, and worldviews. What has worked well with a group of homogeneous local students may not work well with more diverse and international students, because students hold different notions of what it means to learn or know something. It is necessary for educators to move away from certain sets of generic teaching skills, which are based on a limited, particular view of teaching and learning. The main objective of the research is to expand our teaching strategies by incorporating what students bring to the course. There has been a growing number of resources and texts on teaching international students. Unfortunately, they tend to be based on a deficiency model, which treats diversity not as a strength, but as a problem to be solved. This view is evidenced by the heavy emphasis on assimilationist approaches: for example, cultural difference is negatively evaluated, either implicitly or explicitly, and therefore the pressure is on culturally diverse students. The following questions reflect the underlying assumption of deficiency: How can we make them learn better? How can we bring them into the mainstream academic culture? How can they adapt to Western educational systems? Even though these questions may be well intended, there is something fundamentally wrong here, as an assumption of cultural superiority is embedded in this kind of thinking. This paper examines how educators can incorporate intercultural intelligence into course design by utilizing a variety of tools such as pre-course activities, peer learning, and reflective learning journals. The main goal is to explore ways to engage diverse learners in all aspects of learning. This can be achieved through activities designed to understand their prior knowledge, life experiences, and relevant cultural identities. It is crucial to link course material to students' diverse interests, thereby enhancing the relevance of course content and making learning more inclusive. Internationalization of higher education can be successful only when cultural differences are respected and celebrated as essential and positive aspects of teaching and learning.

Keywords: intercultural competence, intercultural intelligence, teaching and learning, post-secondary education

Procedia PDF Downloads 191
368 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times is studied, as described by the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and there the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from such a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows a uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level that is referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper of the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. Therefore, the concept proposed in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 70
367 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission

Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan

Abstract:

As key systems of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser thrust, solar sails and other micro-thrusters. However, these systems still have shortcomings. The ion thruster requires a high voltage or magnetic field for acceleration, resulting in extra subsystems, added mass and large volume. Laser thrust is currently mostly ground-based and provides pulsed thrust, constrained by the station distribution and the capacity of the laser. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation thrust with alpha particles is expounded. Radioactive materials with different released energies, such as 210Po with 5.4 MeV and 238Pu with 5.29 MeV, attached to a metal film provide thrusts in the range of 0.02-5 µN/cm². With this repulsive force, radiation can serve as a power source. With the advantages of low system mass, high accuracy and long active time, the radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays and deep space exploration. For further study, a formula relating the amplitude and direction of the thrust to the released energy and the decay constant is set up. With this initial formula, alpha-emitting nuclides with half-lives longer than one hundred days are calculated and listed. As the alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With the residual charge or an extra electromagnetic field, the emission of alpha particles behaves differently and is analyzed in this paper. Furthermore, three more complex situations are discussed and studied respectively: a radiation element generating alpha particles with several energies at different intensities, a mixture of various radiation elements, and cascaded alpha decay. In a combined configuration, it is more efficient and flexible to adjust the thrust amplitude. The propulsion model for spontaneous fission is similar to that for alpha decay, but with a more complex angular distribution. A new quasi-sphere space propulsion system based on the radiation thrust is introduced, as well as a collecting and processing system for the excess charge and reaction heat. The energy and angular distributions of the alpha particles emitted from a unit area and from a given propulsion system have been studied. As the alpha particles easily lose energy and are self-absorbed, the distribution is not a simple stacking of each nuclide. With changes in the amplitude and angle of the radiation thrust, an orbital variation strategy for space debris removal is shown and optimized.
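
To make the order of magnitude of the quoted thrust levels concrete, the short Python sketch below estimates the recoil thrust per unit area from one-sided alpha emission, assuming isotropic emission into the outward hemisphere and neglecting self-absorption in the film; the areal escape rate is an assumed input, not a value taken from the paper.

import math

def alpha_recoil_thrust(escape_rate_per_cm2, alpha_energy_mev):
    # Rough thrust per unit area (N/cm^2) from one-sided alpha emission.
    # Average axial momentum per escaping alpha is p/2 for isotropic emission
    # into the outward hemisphere; self-absorption in the film is neglected.
    m_alpha = 6.644e-27                          # kg
    e_joule = alpha_energy_mev * 1.602e-13       # MeV -> J
    p = math.sqrt(2.0 * m_alpha * e_joule)       # non-relativistic momentum, kg*m/s
    return escape_rate_per_cm2 * p / 2.0

# Example: 210Po alphas (5.4 MeV) with an assumed escape rate of 1e12 alphas/s/cm^2
print(alpha_recoil_thrust(1e12, 5.4))            # about 5e-8 N/cm^2, i.e. ~0.05 uN/cm^2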

Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster

Procedia PDF Downloads 176
366 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance

Authors: Xiaoyong He

Abstract:

The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Designed by tailoring the characteristics of unit cells (meta-molecules), the advance of metamaterials (MMs) may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices suffers from dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Different from the symmetric Lorentz spectral curve, a Fano resonance shows a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curves. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (a bright mode and a narrowband dark mode) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from a practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or the magnetic field biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it is still difficult to tailor a Fano resonance conveniently with fixed structural parameters. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called "trapped modes," with the merits of a simple structure and high-quality resonances in thin structures. By depositing a graphene circular DR on a SiO2/Si/polymer substrate, the tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, the structural parameters and the operation frequency. The results manifest that the obvious Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% if the Fermi level changes in the range 0.1-1.0 eV. The optimum gap distance between the DR is about 8-12 μm, where the figure of merit shows a peak. As the graphene ribbon width increases, the Fano spectral curves become broader, and the resonant peak shows a blue shift. The results are very helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
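
For orientation, the following Python sketch evaluates the generic Fano line-shape formula, which produces the asymmetric profile described above; it is purely illustrative and does not reproduce the simulated graphene double-ring response, so the chosen resonance frequency, linewidth and asymmetry parameter are arbitrary.

import numpy as np

def fano_lineshape(omega, omega0, gamma, q):
    # Generic Fano profile: interference of a narrow discrete (dark) mode with a
    # broad continuum (bright) mode; q is the asymmetry parameter, gamma the linewidth.
    eps = 2.0 * (omega - omega0) / gamma         # reduced detuning
    return (q + eps) ** 2 / (1.0 + eps ** 2)

freqs = np.linspace(1.0, 2.0, 500)               # THz (arbitrary window)
spectrum = fano_lineshape(freqs, omega0=1.5, gamma=0.05, q=2.0)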

Keywords: graphene, metamaterials, terahertz, tunable

Procedia PDF Downloads 326
365 Corrosion Analysis of Brazed Copper-Based Conducts in Particle Accelerator Water Cooling Circuits

Authors: A. T. Perez Fontenla, S. Sgobba, A. Bartkowska, Y. Askar, M. Dalemir Celuch, A. Newborough, M. Karppinen, H. Haalien, S. Deleval, S. Larcher, C. Charvet, L. Bruno, R. Trant

Abstract:

The present study investigates the corrosion behavior of copper (Cu) based conducts, predominantly brazed with Sil-Fos (a self-fluxing copper-based filler with silver and phosphorus), within various demineralized-water cooling circuits of different particle accelerator components at CERN. The study covers a range of sample service times, from a few months to fifty years, and includes various accelerator components such as quadrupoles, dipoles, and bending magnets. The investigation comprises the established sample extraction procedure, the examination methodology including non-destructive testing, the evaluation of the corrosion phenomena, the identification of commonalities across the studied components, and the analysis of environmental influence. The systematic analysis included computed microtomography (CT) of the joints, which revealed distributed defects across all brazing interfaces. Some defects appeared to result from areas not wetted by the filler during the brazing operation, displaying round shapes, while others exhibited irregular contours and radial alignment, indicative of a network or interconnection. The subsequent dry cutting facilitated access to the conducts' inner surfaces and the brazed joints for further inspection through light and electron microscopy (SEM) and chemical analysis via energy dispersive X-ray spectroscopy (EDS). Analysis of the brazing away from the affected areas identified the expected phases for a Sil-Fos alloy. In contrast, the affected locations displayed micrometric cavities propagating into the material, along with selective corrosion of the bulk Cu initiated at the conductor-braze interface. Corrosion product analysis highlighted the consistent presence of sulfur (up to 6% by weight), whose origin and role in corrosion initiation and propagation are being further investigated. The importance of this study is paramount, as it plays a crucial role in comprehending the underlying factors contributing to recently identified water leaks and in evaluating the extent of the issue. Its primary objective is to provide essential insights for the repair of impacted brazed joints where accessibility permits. Moreover, the study seeks to contribute to the improvement of design and manufacturing practices for future components, ultimately enhancing the overall reliability and performance of magnet systems within CERN accelerator facilities.

Keywords: accelerator facilities, brazed copper conducts, demineralized water, magnets

Procedia PDF Downloads 26
364 Effect of Human Use, Season and Habitat on Ungulate Densities in Kanha Tiger Reserve

Authors: Neha Awasthi, Ujjwal Kumar

Abstract:

The density of large carnivores is primarily dictated by the density of their prey. Therefore, optimal management of ungulate populations permits harbouring viable large carnivore populations within protected areas. Ungulate density is likely to respond to regimes of protection and vegetation types. This has generated the need among conservation practitioners to obtain stratum-specific seasonal species densities for habitat management. Kanha Tiger Reserve (KTR), with an area of 2074 km2, comprises two distinct management strata: the core (940 km2), devoid of human settlements, and the buffer (1134 km2), which is a multiple-use area. In general, four habitat strata (grassland, sal forest, bamboo-mixed forest and miscellaneous forest) are present in the reserve. A stratified sampling approach was used to assess (a) the impact of human use and (b) the effect of habitat and season on ungulate densities. From 2013 to 2016, ungulates were surveyed in the winter and summer of each year with an effort of 1200 km walked along 200 spatial transects distributed throughout Kanha Tiger Reserve. We used a single detection function for each species within each habitat stratum for each season to estimate species-specific seasonal density, using program DISTANCE. Our key results state that the core area had 4.8 times higher wild ungulate biomass compared with the buffer zone, highlighting the importance of undisturbed areas. Chital was found to be the most abundant species, having a density of 30.1 (SE 4.34)/km2 and contributing 33% of the biomass, with a habitat preference for grassland. Unlike other ungulates, gaur, being a mega-herbivore, showed a major seasonal shift in density from bamboo-mixed and sal forest in summer to miscellaneous forest in winter. Maximum diversity and ungulate biomass were supported by grassland, followed by bamboo-mixed habitat. Our study stresses the importance of inviolate core areas for achieving high wild ungulate densities and for maintaining populations of endangered and rare species. Grasslands account for 9% of the core area of KTR and are maintained in an arrested stage of succession; therefore, enhancing this habitat would maintain ungulate diversity and density and cater to the needs of the only surviving population of the endangered barasingha and the grassland specialist, the blackbuck. We show the relevance of different habitat types for differential seasonal use by ungulates and attempt to interpret this in the context of the nutrition and cover needs of wild ungulates. Management for an optimal habitat mosaic that maintains ungulate diversity and maximizes ungulate biomass is recommended.
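
As a simplified stand-in for program DISTANCE, the Python sketch below shows the core line-transect calculation with a half-normal detection function and no truncation: the effective strip width (ESW) is estimated from the perpendicular sighting distances and density follows as D = n / (2 L ESW). The input values are hypothetical; the actual analysis used stratum- and season-specific detection functions.

import numpy as np

def halfnormal_line_transect_density(perp_distances_m, effort_km, cluster_size=1.0):
    # Line-transect density with a half-normal detection function, no truncation:
    # D = n / (2 * L * ESW), with ESW = sqrt(pi/2) * sigma and sigma the MLE
    # of the half-normal scale from the perpendicular sighting distances.
    x = np.asarray(perp_distances_m, dtype=float)
    n = len(x)
    sigma_m = np.sqrt(np.mean(x ** 2))
    esw_km = np.sqrt(np.pi / 2.0) * sigma_m / 1000.0
    return n * cluster_size / (2.0 * effort_km * esw_km)     # animals per km^2

# Hypothetical detections: 120 chital groups of mean size 5 seen along 300 km of transects
rng = np.random.default_rng(1)
distances = np.abs(rng.normal(0.0, 40.0, size=120))          # metres from the line
print(halfnormal_line_transect_density(distances, effort_km=300.0, cluster_size=5.0))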

Keywords: distance sampling, habitat management, ungulate biomass, diversity

Procedia PDF Downloads 283
363 The Impact of Sensory Overload on Students on the Autism Spectrum in Italian Inclusive Classrooms: Teachers' Perspectives and Training Needs

Authors: Paola Molteni, Luigi d’Alonzo

Abstract:

Background: Sensory issues are now considered one of the key aspects in defining and diagnosing autism, changing perspectives on behavioural analysis and intervention in mainstream educational services. However, Italian teachers' training is not yet specific to the topic of autism and its sensory-related effects, and this research investigates teachers' capability to understand students' needs and their challenging behaviours in terms of sensory perception. Objectives: The research aims to analyse mainstream school teachers' awareness of students' sensory perceptions and how this affects classroom inclusion and the learning process. The research questions are: i) Are teachers able to identify students' sensory issues? ii) Are trained teachers more able to identify sensory problems than untrained ones? iii) What is the impact of sensory issues on inclusion in mainstream classrooms? iv) What should teachers know about autistic sensory dimensions? Methods: This research was designed as a pilot study involving a multi-method approach, including action and collaborative research methodology. The designed research allows the researcher to capture the complexity of a provincial school district (from kindergarten to high school) through a deep, detailed analysis of selected aspects. The researcher explored the questions described above through 133 questionnaires and 6 focus groups. The qualitative and quantitative data collected during the research were analysed using Interpretative Phenomenological Analysis (IPA). Results: Mainstream school teachers are not able to confidently recognise the sensory issues of children included in the classroom. The research underlines: how professionals with no specific training on autism are unable to recognise sensory problems in students on the spectrum; how hearing and sight issues have a higher impact on classroom inclusion and students' learning processes; and how a lack of understanding is often followed by misinterpretations of the impact of sensory issues and challenging behaviours. Conclusions: As this research has shown, promoting and enhancing the understanding of sensory issues related to autism is fundamental to enabling mainstream school teachers to define educational and life-long plans able to properly answer students' needs and support their real inclusion in the classroom. This study is a good example of how educational research can meet and help daily practice in working with people on the autism spectrum and support training design for mainstream school teachers: the emerging need for dedicated preparation on sensory issues must be considered when planning school district in-service training programmes, specifically tailored to inclusive services.

Keywords: autism spectrum condition, scholastic inclusion, sensory overload, teacher's training

Procedia PDF Downloads 299
362 Bridging Livelihood and Conservation: The Role of Ecotourism in the Campo Ma’an National Park, Cameroon

Authors: Gadinga Walter Forje, Martin Ngankam Tchamba, Nyong Princely Awazi, Barnabas Neba Nfornka

Abstract:

Ecotourism is viewed as a double-edged sword for the enhancement of conservation and local livelihoods within a protected landscape. The Campo Ma'an National Park (CMNP) adopted ecotourism in its management plan as a strategic axis for better management of the park. The growing importance of ecotourism as a strategy for the sustainable management of the CMNP and its environs requires adequate information to bolster the sector. This study was carried out between November 2018 and September 2021, with the main objective of contributing to the sustainable management of the CMNP through suggestions for enhancing the capacity of ecotourism in and around the park. More specifically, the study aimed to: 1) analyse the governance of ecotourism in the CMNP and its surroundings; 2) assess the impact of ecotourism on local livelihoods around the CMNP; 3) evaluate the contribution of ecotourism to biodiversity conservation in and around the CMNP; and 4) evaluate the determinants of ecotourism's possibilities for achieving sustainable livelihoods and biodiversity conservation in and around the CMNP. Data were collected from both primary and secondary sources. Primary data were obtained from household surveys (N=124), focus group discussions (N=8), and key informant interviews (N=16). Data collected were coded and entered into SPSS (version 19.0) software and a Microsoft Excel spreadsheet for both quantitative and qualitative analysis. Findings from the Chi-square test revealed overall poor ecotourism governance in and around the CMNP, with benefit sharing (X2 = 122.774, p < 0.01) and conflict management (X2 = 90.839, p < 0.01) viewed as very poor. The majority of the local population sampled (65%) think that ecotourism does not contribute to local livelihoods around the CMNP. The main factors influencing the impact of ecotourism around the CMNP on the local population's livelihood were gender (logistic regression (β) = 1.218; p = 0.000) and level of education (logistic regression (β) = 0.442; p = 0.000). Furthermore, 55.6% of the local population investigated believed ecotourism activities do not contribute to the biodiversity conservation of the CMNP. Spearman correlations between socio-economic variables and the impact of ecotourism on biodiversity conservation indicated relationships with gender (r = 0.200, p = 0.032), main occupation (r = 0.300, p = 0.012), time spent in the community (r = 0.287, p = 0.017), and number of children (r = -0.286, p = 0.018). Variables affecting the impact of ecotourism on biodiversity conservation were age (logistic regression (β) = -0.683; p = 0.037) and gender (logistic regression (β) = 0.917; p = 0.045). This study recommends the development of ecotourism-friendly policies that can accelerate public-private partnerships for the sustainable management of the CMNP as a commitment toward good governance. It also recommends the development of gender-sensitive ecotourism packages, with fair opportunities for rural women and more parity in benefit sharing, to improve livelihoods and contribute more to biodiversity conservation in and around the park.
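
For readers unfamiliar with the reported coefficients, a minimal sketch of the kind of logistic regression used above is given below (Python with statsmodels); the file and column names are hypothetical placeholders, since the survey variables are not published in this abstract.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; the actual survey variables are not published here.
df = pd.read_csv("cmnp_household_survey.csv")        # N = 124 households
# 'livelihood_benefit' is 1 if the respondent reports a livelihood benefit from ecotourism.
model = smf.logit("livelihood_benefit ~ C(gender) + education_level", data=df).fit()
print(model.summary())                               # beta coefficients and p-values
print(np.exp(model.params))                          # odds ratios for easier interpretation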

Keywords: biodiversity conservation, Campo Ma’an national park, ecotourism, ecotourism governance, rural livelihoods, protected area management

Procedia PDF Downloads 95
361 Enhancing Air Quality: Investigating Filter Lifespan and Byproducts in Air Purification Solutions

Authors: Freja Rydahl Rasmussen, Naja Villadsen, Stig Koust

Abstract:

Air purifiers have become widely used in a broad range of settings, including households, schools, institutions, and hospitals, as they tackle the pressing issue of indoor air pollution. With their ability to enhance indoor air quality and create healthier environments, air purifiers are particularly vital when ventilation options are limited. These devices incorporate a diverse array of technologies, including HEPA filters, activated carbon filters, UV-C light, photocatalytic oxidation, and ionizers, each designed to combat specific pollutants and improve air quality within enclosed spaces. However, the safety of air purifiers has not been investigated thoroughly, and many questions still arise when applying them. Certain air purification technologies, such as UV-C light or ionization, can unintentionally generate undesirable byproducts that negatively affect indoor air quality and health. It is well established that these technologies can inadvertently generate nanoparticles or convert common gaseous compounds into harmful ones, thus exacerbating air pollution. However, the formation of byproducts can vary across products, necessitating further investigation. There is particular concern about the formation of the carcinogenic substance formaldehyde from common gases like acetone. Many air purifiers use mechanical filtration to remove particles, dust, and pollen from the air. Filters need to be replaced periodically for optimal efficiency, resulting in an additional cost for end-users. Currently, there are no guidelines for filter lifespan, and replacement recommendations rely solely on manufacturers. A market screening revealed that manufacturers' recommended lifespans vary greatly (from 1 month to 10 years), and there is a need for general recommendations to guide consumers. Activated carbon filters are used to adsorb various types of chemicals that can pose health risks or cause unwanted odors. These filters have a certain capacity before becoming saturated. If not replaced in a timely manner, the adsorbed substances are likely to be released from the filter through off-gassing, or the filter will lose adsorption efficiency. The goal of this study is to investigate the lifespan of filters as well as the potentially harmful byproducts of air purifiers. Understanding the lifespan of filters used in air purifiers and the potential formation of harmful byproducts is essential for ensuring their optimal performance, guiding consumers in their purchasing decisions, and establishing industry standards for safer and more effective air purification solutions. At this time, a selection of air purifiers has been chosen, and test methods have been established; the tests will be conducted over the following three months, and the results will be presented thereafter.

Keywords: air purifiers, activated carbon filters, byproducts, clean air, indoor air quality

Procedia PDF Downloads 49
360 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete

Authors: D. Falliano, G. Ricciardi, E. Gugliandolo

Abstract:

Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand and possibly fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to the development of a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements are summarized in the following aspects: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous in the scope of refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially in the case of low densities; 3) good resistance against fire compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness due to the use of rather simple constituents that are easily available locally. Classic foamed concrete cannot be extruded, as dimensional stability in the green state is not achieved, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, viscosity-enhancing agents (VEA) used to extrude traditional concrete cause, in the case of foamed concrete, the collapse of the air bubbles, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation for a patent. In addition to the general benefits of using the extrusion process, extrudable foamed concrete allows other limits to be exceeded: elimination of formworks and an expanded application spectrum, due to the possibility of extrusion in a density range between 200 and 2000 kg/m³, which allows the prefabrication of both structural and non-structural constructive elements. Besides, this contribution aims to present the significant differences between the fresh properties of extrudable and classic foamed concrete in terms of slump. Plastic air content, plastic density, hardened density and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between the compressive strengths of extrudable and classic foamed concrete.

Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump

Procedia PDF Downloads 155
359 As a Secure Bridge Country about Oil and Gas Sources Transfer after Arab Spring: Turkey

Authors: Fatih Ercin Guney, Hami Karagol

Abstract:

Day by day, humanity's energy needs increase, and facilitating access to energy sources for energy-importing countries is of great importance in terms of both economic and political security. The geographical position of the oil-exporting countries of the Middle East (Iran, Iraq, Kuwait, Libya, Saudi Arabia, the United Arab Emirates, Qatar) must today be evaluated in light of the emerging Arab Spring (from Tunisia to Egypt), the freedom struggles (in Syria), and the security issues arising from terrorist activities (ISIS). Given the concerns of developing countries about limited natural resources, energy and its transportation, the question is how the energy of the region can be transferred safely. In the northern region of the Black Sea, the conflict that formed between Russia and Ukraine (2010), followed by developments along the relevant power transmission lines (from Russia to Europe), is considered to have strengthened the hand of the East in both economic and political terms. With the growing need for safe access to new energy transmission lines toward the West, many of which pass through Turkey, Western interest is considered to be shifting back to the Mediterranean and the Middle East. Also, Russia, Iran and China (the three axes of the East) generally carry out parallel policies on energy, the economy and security, both in the United Nations Security Council (two of the five permanent members are Russia and China) and in the Shanghai Cooperation Organization. In addition, tension in the Eastern Mediterranean region is rapidly increasing over the exploration of new oil and natural gas sources by Israel, Egypt, Cyprus and Lebanon. This paper argues that new energy corridors are needed to transfer sources (oil and natural gas) to Europe from East to West. The West therefore needs either a safe bridge country in the region to transfer natural sources to Europe, or to discover new natural sources in the extraterritorial waters of the Eastern Mediterranean region. Both opportunities are evaluated with respect to secure transfer corridors from the region to Europe; even if new natural sources are discovered, they still have to be transferred in a safe manner. This paper addresses Turkey's importance as a leading country in the region, in terms of both politics and safe energy transfer, as a bridge country between the south and the north, and why natural sources should be transferred via Turkey even though diplomatic issues have occurred, for example, Cyprus's membership in the European Union, the duration of Turkey's membership candidacy, and the Israel-Cyprus-Egypt-Lebanon exploration of new natural sources in the Mediterranean. The political balance in the Middle East is changing quickly because of the lack of democratic governments in the region, so it is expected that alliances formed around natural resource exploration may not be long-lasting once the discovered sources have to be shared. After evaluating these causes and reasons, the aim is to reach a foresight about the future of the region with regard to secure energy transfer.

Keywords: Middle East, natural gas, oil, Turkey

Procedia PDF Downloads 280
358 Agrowastes to Edible Hydrogels through Bio Nanotechnology Interventions: Bioactive from Mandarin Peels

Authors: Niharika Kaushal, Minni Singh

Abstract:

Citrus fruits contain an abundance of phytochemicals that can promote health. A substantial amount of agrowaste, primarily peels and seeds, is produced by the juice processing industries. This leftover agrowaste is a reservoir of nutraceuticals, particularly bioflavonoids, which render it antioxidant and potentially anticancerous. It is, therefore, favorable to utilize this biomass and contribute towards sustainability in a manner that allows value-added products, nutraceuticals in this study, to be derived from it. However, the pre-systemic metabolism of flavonoids in the gastric phase limits the effectiveness of these bioflavonoids derived from mandarin biomass. In this study, 'kinnow' mandarin (Citrus nobilis X Citrus deliciosa) biomass was explored for its flavonoid profile. This work entails supercritical fluid extraction and identification of bioflavonoids from mandarin biomass. Furthermore, to overcome the limitations of these flavonoids in the gastrointestinal tract, a double-layered vehicular mechanism comprising the fabrication of nanoconjugates and edible hydrogels was adopted. Total flavonoids in the mandarin peel extract were estimated by the aluminum chloride complexation method and were found to be 47.3±1.06 mg/ml rutin equivalents. Mass spectral analysis revealed an abundance of polymethoxyflavones (PMFs), with nobiletin and tangeretin as the major flavonoids in the extract, followed by hesperetin and naringenin. Furthermore, the antioxidant potential was analyzed by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method, which showed an IC50 of 0.55 μg/ml. Nanoconjugates were fabricated via the solvent evaporation method and were further impregnated into hydrogels. Additionally, the release characteristics of the nanoconjugate-laden hydrogels in a simulated gastrointestinal environment were studied. The PLGA-PMF nanoconjugates exhibited a particle size between 200-250 nm with a smooth and spherical shape, as revealed by FE-SEM. The impregnated alginate hydrogels offered a dense network that ensured the holding of the PLGA-PMF nanoconjugates, as confirmed by Cryo-SEM images. Rheological studies revealed the shear-thinning behavior of the hydrogels and their high resistance to deformation. Gastrointestinal studies showed a negligible 4.0% release of flavonoids in the gastric phase, followed by a sustained release over the next hours in the intestinal environment. The enormous potential of recovering nutraceuticals from agro-processing wastes, further augmented by nanotechnological interventions that enhance the bioefficacy of these compounds, lays the foundation for developing value-added products and thereby contributes towards the sustainable use of agrowaste.
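
As an illustration of how an IC50 such as the 0.55 μg/ml value above is typically obtained, the sketch below fits a standard four-parameter logistic dose-response curve to DPPH inhibition data; the concentration-inhibition pairs are hypothetical, not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    # Standard 4PL dose-response curve for % DPPH inhibition vs. extract concentration.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** (-hill))

# Hypothetical concentration (ug/ml) and % inhibition pairs, not the study's raw data.
conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
inhibition = np.array([12.0, 30.0, 48.0, 65.0, 80.0, 88.0])

params, _ = curve_fit(four_param_logistic, conc, inhibition,
                      p0=[0.0, 100.0, 0.5, 1.0], maxfev=10000)
print(f"Estimated IC50 = {params[2]:.2f} ug/ml")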

Keywords: agrowaste, gastrointestinal, hydrogel, nutraceuticals

Procedia PDF Downloads 73
357 Single Cell and Spatial Transcriptomics: A Beginners Viewpoint from the Conceptual Pipeline

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Messenger ribonucleic acid (mRNA) molecules are protein-encoding transcripts. These protein-encoding mRNA molecules (which collectively constitute the transcriptome), when analyzed by RNA sequencing (RNA-seq), unveil the nature of gene expression in the cell. The obtained gene expression provides clues about cellular traits and their dynamics, which can be studied in relation to function and responses. RNA-seq is a practical concept in genomics, as it enables the detection and quantitative analysis of mRNA molecules. Single-cell and spatial transcriptomics both present avenues for exposing the genomic characteristics of single cells and pooled cells in disease conditions such as cancer, autoimmune diseases and hematopoietic diseases, among others, from investigated biological tissue samples. Single-cell transcriptomics allows a direct assessment of each building unit of a tissue (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNA-seq), which enables high-throughput gene expression studies. However, this technique generates expression data for many cells while lacking the cells' positional coordinates within the tissue. As science develops, the use of complementary, pre-established tissue reference maps built with molecular and bioinformatics techniques has emerged to resolve this setback, so that both levels of data can be produced from one scRNA-seq analysis. This is an emerging conceptual approach for integrative and progressively more dependable transcriptomic analysis. It can support in-situ-style analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and for therapeutic targets in drug development, and exposit the nature of cell-to-cell interactions. These are vital genomic signatures and characterizations for clinical applications. Over the past decades, RNA-seq has generated a wide array of information that is igniting breakthroughs and innovations in biomedicine. On the other side, spatial transcriptomics is tissue-level based and is utilized to study biological specimens with heterogeneous features. It exposits the gross identity of the investigated mammalian tissues, which can then be used to study cell differentiation, track cell lineage trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis of the genomic signatures assessed from the single cells in the tissue sample. Given these two approaches to RNA transcriptomics in varying quantities of cells, with avenues for appropriate resolution, both have made the study of gene expression from mRNA molecules interesting, progressive and developmental, helping to tackle health challenges head-on.
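
To make the scRNA-seq side of this pipeline concrete, a minimal, generic clustering workflow is sketched below in Python with Scanpy; the input file name is hypothetical, and the parameter choices are common defaults rather than values tied to any particular study.

import scanpy as sc

adata = sc.read_10x_h5("tumor_sample_filtered_feature_bc_matrix.h5")   # hypothetical file

sc.pp.filter_cells(adata, min_genes=200)          # basic quality control
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)      # library-size normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, svd_solver="arpack")
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=30)  # expression-based cell neighborhood graph
sc.tl.leiden(adata)                               # cluster cells into putative cell states
sc.tl.umap(adata)                                 # 2D embedding for visualization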

Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression

Procedia PDF Downloads 105
356 Elements of Creativity and Innovation

Authors: Fadwa Al Bawardi

Abstract:

In March 2021, the Saudi Arabian Council of Ministers issued a decision to form a committee called the "Higher Committee for Research, Development and Innovation," a committee linked to the Council of Economic and Development Affairs, chaired by the Chairman of the Council of Economic and Development Affairs, and concerned with the development of the research, development and innovation sector in the Kingdom. In order to talk about the dimensions of this step, let us first try to answer the following questions. Is there a difference between creativity and innovation? What are the factors of creativity in the individual? Are they inherited mental factors, or are they factors that an individual acquires through learning? The methodology included surveys conducted on more than 500 individuals, males and females, between the ages of 18 and 60. And the answer is: "creativity" is the creation of a new idea, while "innovation" is the development of an already existing idea in a new, successful way. They are two sides of the same coin, as the "creative idea" needs to be developed and transformed into an "innovation" in order to achieve either strategic achievements at the level of countries and institutions, enhancing organizational intelligence, or achievements at the level of individuals. For example, the beginning of smartphones was just a creative idea from IBM in 1994, but the actual successful innovation in the manufacture, development and marketing of these phones came through Apple later. Nor does creativity have to be hereditary. There are three basic factors for creativity. The first factor is "the presence of a challenge or an obstacle" that the individual faces and seeks to overcome through thinking, even if thinking requires a long time. The second factor is the "environment surrounding" the individual, which includes science, training, experience gained, the ability to use techniques, as well as the ability to assess whether the idea is feasible or otherwise. To achieve this factor, individuals must be aware of their own skills, strengths, hobbies and the aspects in which they can be creative, and they must also be self-confident and courageous enough to suggest new ideas. The third factor is "experience and the ability to accept risk and lack of initial success," and then to learn from mistakes and try again tirelessly. There are some tools and techniques that help the individual to reach creative and innovative ideas, such as the Mind Maps tool, through which the available information is drawn by writing a short word for each piece of information and connecting all other relevant information through clear lines, which helps in logical thinking and correct vision. There is also a tool called "Flow Charts", which are graphics that show the sequence of data and expected results according to an ordered scenario of events and workflow steps, giving clarity to the ideas, their sequence, and what is expected of them. There are also other useful tools, such as the Six Hats tool, applied by a group of people for effective planning and detailed logical thinking, and the Snowball tool. All of them are tools that greatly help in organizing and arranging mental thoughts and making the right decisions, and all of these tools and techniques are easy to learn, apply and use to reach creative and innovative solutions.
The detailed figures and results of the conducted surveys are available upon request, with charts showing the percentages based on gender, age groups and job categories.

Keywords: innovation, creativity, factors, tools

Procedia PDF Downloads 36
355 Identification of Clinical Characteristics from Persistent Homology Applied to Tumor Imaging

Authors: Eashwar V. Somasundaram, Raoul R. Wadhwa, Jacob G. Scott

Abstract:

The use of radiomics in measuring geometric properties of tumor images such as size, surface area, and volume has been invaluable in assessing cancer diagnosis, treatment, and prognosis. In addition to analyzing geometric properties, radiomics would benefit from measuring topological properties using persistent homology. Intuitively, features uncovered by persistent homology may correlate with tumor structural features. One example is necrotic cavities (corresponding to 2D topological features), which are markers of very aggressive tumors. We develop a data pipeline in R that clusters tumor images based on persistent homology; this clustering is used to identify meaningful clinical distinctions between tumors and possibly new relationships not captured by established clinical categorizations. A preliminary analysis was performed on 16 magnetic resonance imaging (MRI) breast tissue segments downloaded from the 'Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and Molecular Analysis' (I-SPY TRIAL or ISPY1) collection in The Cancer Imaging Archive. Each segment represents a patient's breast tumor prior to treatment. The ISPY1 dataset also provided the estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) status data. A persistent homology matrix up to 2-dimensional features was calculated for each MRI segmentation. Wasserstein distances were then calculated between all pairs of tumor image persistent homology matrices to create a distance matrix for each feature dimension. Since Wasserstein distances were calculated for 0-, 1-, and 2-dimensional features, three hierarchical clusterings were constructed. The adjusted Rand index was used to see how well the clusters corresponded to the ER/PR/HER2 status of the tumors. Triple-negative cancers (negative status for all three receptors) clustered together significantly in the 2-dimensional feature dendrogram (adjusted Rand index of 0.35, p = .031). It is known that having a triple-negative breast tumor is associated with aggressive tumor growth and poor prognosis when compared to non-triple-negative breast tumors. The aggressive tumor growth associated with triple-negative tumors may have a unique structure in an MRI segmentation, which persistent homology is able to identify. This preliminary analysis shows promising results for the use of persistent homology on tumor imaging to assess the severity of breast tumors. The next step is to apply this pipeline to other tumor segment images from The Cancer Imaging Archive at different sites such as the lung, kidney, and brain. In addition, it will be assessed whether other clinical parameters, such as overall survival, tumor stage, and tumor genotype data, are captured well in persistent homology clusters. If analyzing tumor MRI segments using persistent homology consistently identifies clinical relationships, this could enable clinicians to use persistent homology data as a noninvasive way to inform clinical decision making in oncology.
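
The authors built their pipeline in R; the sketch below is a rough Python analogue of the same idea, assuming the ripser and persim packages for persistence diagrams and Wasserstein distances, with hypothetical inputs (one point cloud sampled from each tumor segmentation and its receptor-status label).

import numpy as np
from ripser import ripser
from persim import wasserstein
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def cluster_by_persistence(point_clouds, receptor_status, dim=2, n_clusters=2):
    # One persistence diagram of dimension `dim` per tumor point cloud.
    diagrams = [ripser(pc, maxdim=dim)["dgms"][dim] for pc in point_clouds]

    # Pairwise Wasserstein distances between persistence diagrams.
    n = len(diagrams)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = wasserstein(diagrams[i], diagrams[j])

    # Hierarchical clustering on the distance matrix, compared with receptor status.
    Z = linkage(squareform(D), method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    return adjusted_rand_score(receptor_status, labels)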

Keywords: cancer biology, oncology, persistent homology, radiomics, topological data analysis, tumor imaging

Procedia PDF Downloads 111
354 Concentration of Droplets in a Transient Gas Flow

Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal

Abstract:

The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena, for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating this concentration and on their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require the tracking of large numbers of droplets as in the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier phase velocity via an expansion in the droplet response time, and the droplet transport equation can be solved in Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions of the two approaches were compared in the analysis of a dilute gas-droplet flow around an infinitely long circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, and 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated. It has been shown that the results predicted by the two methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes numbers of 0.1 and 0.2; Reynolds numbers of 10 and 100), the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder, which can be attributed to the averaging of the droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted high droplet concentrations in zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method, a significant discrepancy for an approach widely used in engineering applications. Based on these comparisons, the Osiptsov method provides a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and the experimental data.
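As a rough, self-contained illustration of the fully Lagrangian (Osiptsov) idea described above, the sketch below integrates a one-dimensional droplet trajectory together with the Jacobian of the Eulerian-Lagrangian transformation and recovers the number density along the path line from n = n0/|J|. The carrier velocity field, response time, and initial conditions are hypothetical stand-ins; the actual study works in ANSYS Fluent on two-dimensional cylinder and spray flows.

```python
# Minimal 1-D sketch of the fully Lagrangian (Osiptsov) approach with a
# prescribed steady carrier velocity u(x); not the ANSYS Fluent implementation.
# The droplet concentration along a path line follows n = n0 / |J|, where
# J = dx/dx0 is integrated together with the droplet equation of motion.
import numpy as np
from scipy.integrate import solve_ivp

tau = 0.1                        # droplet response time (hypothetical)
u  = lambda x: np.sin(x)         # hypothetical carrier velocity field
du = lambda x: np.cos(x)         # its spatial derivative

def rhs(t, y):
    x, v, J, w = y               # position, velocity, J = dx/dx0, w = dv/dx0
    return [v,
            (u(x) - v) / tau,            # Stokes drag on the droplet
            w,                           # dJ/dt = w
            (du(x) * J - w) / tau]       # linearised drag equation for the Jacobian

x0, n0 = 0.5, 1.0
# Droplet released with the local fluid velocity, so w(0) = u'(x0) and J(0) = 1
sol = solve_ivp(rhs, (0.0, 5.0), [x0, u(x0), 1.0, du(x0)], dense_output=True)
t = np.linspace(0.0, 5.0, 6)
x, _, J, _ = sol.sol(t)
n = n0 / np.abs(J)               # number density along the path line
for ti, xi, ni in zip(t, x, n):
    print(f"t={ti:.1f}  x={xi:.3f}  n/n0={ni:.3f}")
```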

Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations

Procedia PDF Downloads 236
353 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear

Authors: Grant Bifolchi

Abstract:

As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of growing and paramount concern. Ghost Gear, also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG), represents one of the greatest threats to the world’s oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments, governmental bodies and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats and more from their respective bodies of water. However, these organizations’ limited ability to effectively manage the files and documents associated with this environmental problem further complicates matters. In Ghost Gear monitoring and management, organizations face additional complexities: data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, and the application of enforcement based on disparate data all place massive strains on organizations struggling to protect the oceans from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, BCom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to: • Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection and disposal. • Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards. • Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress, including acknowledging receipt of each report and sharing it with all pertinent stakeholders to ensure approvals are secured. • Enable and utilize Business Intelligence (BI) and analytics to store and analyze data in order to optimize organizational performance, maintain anytime visibility of report status, user accountability, scheduling and management, and foster governmental transparency. • Maintain compliance reporting through highly defined, detailed and automated reports, enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners.

Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology

Procedia PDF Downloads 78
352 The Hague Abduction Convention and the Egyptian Position: Strategizing for a Law Reform

Authors: Abdalla Ahmed Abdrabou Emam Eldeib

Abstract:

For more than a century, the Hague Conference has tackled issues in the most challenging areas of private international law, including family law. Its actions in the realm of international child abduction have been remarkable in two ways during the last two decades. First, on October 25, 1980, the Hague Convention on the Civil Aspects of International Child Abduction (the Convention) was promulgated as an unusually inventive and powerful tool. Second, the Convention has rapidly become more prominent in the development of international child law. By that time, overseas travel had become more convenient and more couples were marrying or travelling across national lines; at the same time, parental separation and divorce had increased, leading to a rise in international child custody battles. The Convention avoids legal quagmires and addresses extra-legal issues well: it restores the child to its place of habitual residence once it is established that the child was wrongfully removed from that place or wrongfully retained abroad after an authorized visit. In practice, a custody award to one of the contesting parents is often followed by the child's abduction or unlawful relocation to another country by the non-custodial parent or other persons. If a child's custodial parent lives outside of Egypt, the child may be abducted and brought to Egypt. It is natural to ask which laws should apply and which legal norms should be followed when hearing individual cases. This study comprehensively evaluates the Hague Child Abduction Convention, the current situation in Egypt, and the law applicable to child custody. In addition, this research details and focuses on the position of cross-border parental child abduction in Egypt. Moreover, it examines Islamic law in comparison with the Hague Convention on child custody, the treatment of this matter by Islamic countries in general and by Egypt in particular, and the criticism directed at Egypt regarding the application and implementation of child custody decisions. The present research supports this approach by using non-doctrinal techniques, including surveys, interviews, and dialogues. An important objective of this research is to examine the factors that contribute to parental child abduction. Family court attorneys and other interested parties serve as the target audience from whom data are collected. A survey questionnaire was developed and sent to the target population in order to collect data for future empirical testing to validate the identified critical factors in parental child abduction. The main findings of this study concern overcoming the reservations of many Muslim countries to joining the Hague Convention with regard to child custody, and clarifying the practical problems of implementation in cases where a child is abducted by one parent and taken outside the borders of the country. Finally, this study provides suggestions for reforming the current Egyptian Family Law to make it an effective and efficient dispute resolution mechanism, and considers the possibility of joining the Hague Convention.

Keywords: Egyptian family law, Hague Child Abduction Convention, child custody, cross-border parental child abductions in Egypt

Procedia PDF Downloads 48
351 An Exploration of Possible Impact of Drumming on Mental Health in a Hospital Setting

Authors: Zhao Luqian, Wang Yafei

Abstract:

Participation in music activities is beneficial for enhancing wellbeing, especially for older people (Creech, 2013). Looking at percussion groups in particular, they can facilitate a sense of belonging, relaxation, energy and productivity, learning, enhanced mood, humanising, a sense of accomplishment, escape from trauma, and emotional expression (Newman, 2015). In the health literature, group drumming is effective in reducing stress and improving multiple domains of social-emotional behavior (Ho et al., 2011; Maschi et al., 2010) because it offers a creative and mutual learning space that allows patients to establish positive peer interaction (Mungas et al., 2014; Perkins, 2016). However, very few studies have investigated the effect of group drumming from the perspective of patients’ needs. Therefore, this study focuses on patients' specific needs within mental health and explores how group percussion may meet those needs. Seligman’s (2011) five core elements of mental health were applied as patients’ needs in this study: (1) positive emotions; (2) engagement; (3) relationships; (4) meaning; and (5) accomplishment. Twelve participants aged 57-80 years were interviewed individually. The researcher also conducted observations in four drumming groups simultaneously. The results reveal that group drumming could improve participants’ mental wellbeing. First, it created a therapeutic health care environment extending beyond the elimination of boredom, and patients could focus on positive emotions during the group drumming sessions. Secondly, it was effective in satisfying patients’ need for engagement. Thirdly, this study found that joining a percussion group requires patients to work on skills such as turn-taking and sharing; this equal relationship helps release patients’ negative mood and thus forms tighter relationships between and among them. Fourthly, group drumming was found to meet patients’ need for meaning by offering them a place of belonging and a place for sharing; its learner-oriented approach engaged patients through a sense of belonging, acceptance, connection, and ownership. Finally, group drumming could meet patients’ need for accomplishment through the learning process: the inclusive learning process, in which there is no right or wrong, allowed patients to make their own decisions. In conclusion, it is difficult for patients to achieve positive emotions, engagement, relationships, meaning, and accomplishment in a hospital setting. Drumming can be practiced to reduce patients’ negative emotions and improve their experiences in hospital through enriched social interaction and a sense of accomplishment. It can also help patients to enhance social skills in a controlled environment.

Keywords: group drumming, hospital, mental health, music psychology

Procedia PDF Downloads 74
350 Food for Health: Understanding the Importance of Food Safety in the Context of Food Security

Authors: Carmen J. Savelli, Romy Conzade

Abstract:

Background: Access to sufficient amounts of safe and nutritious food is a basic human necessity, required to sustain life and promote good health. Food safety and food security are therefore inextricably linked, yet the importance of food safety in this relationship is often overlooked. Methodologies: A literature review and desk study were conducted to examine existing frameworks for discussing food security, especially from an international perspective, to determine the entry points for enhancing considerations for food safety in national and international policies. Major Findings: Food security is commonly understood as the state when all people at all times have physical, social and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life. Conceptually, food security is built upon four pillars including food availability, access, utilization and stability. Within this framework, the safety of food is often wrongly assumed as a given. However, in places where food supplies are insufficient, coping mechanisms for food insecurity are primarily focused on access to food without considerations for ensuring safety. Under such conditions, hygiene and nutrition are often ignored as people shift to less nutritious diets and consume more potentially unsafe foods, in which chemical, microbiological, zoonotic and other hazards can pose serious, acute and chronic health risks. While food supplies might be safe and nutritious, if consumed in quantities insufficient to support normal growth, health and activity, the result is hunger and famine. Recent estimates indicate that at least 842 million people, or roughly one in eight, still suffer from chronic hunger. Even if people eat enough food that is safe, they will become malnourished if the food does not provide the proper amounts of micronutrients and/or macronutrients to meet daily nutritional requirements, resulting in under- or over-nutrition. Two billion people suffer from one or more micronutrient deficiencies and over half a billion adults are obese. Access to sufficient amounts of nutritious food is not enough. If food is unsafe, whether arising from poor quality supplies or inadequate treatment and preparation, it increases the risk of foodborne infections such as diarrhoea. 70% of diarrhoea episodes occurring annually in children under five are due to biologically contaminated food. Conclusions: An integrated approach is needed where food safety and nutrition are systematically introduced into mainstream food system policies and interventions worldwide in order to achieve health and development goals. A new framework, “Food for Health” is proposed to guide policy development and requires all three aspects of food security to be addressed in balance: sufficiency, nutrition and safety.

Keywords: food safety, food security, nutrition, policy

Procedia PDF Downloads 392
349 Service Quality, Skier Satisfaction, and Behavioral Intentions in Leisure Skiing: The Case of Beijing

Authors: Shunhong Qi, Hui Tian

Abstract:

Triggered by the forthcoming 2022 Winter Olympics, ski centers are blossoming in China, their number reaching 742 in 2018. Although the number of skier visits to ski resorts soared to 19.7 million in 2018, one-time skiers account for a considerable portion of that total. In light of the extremely low return rates and the low penetration level of leisure skiing in China (0.5%), this study proposes and tests a leisure ski service performance framework which assesses ski resorts’ service quality and skier satisfaction, as well as their impact on skiers’ behavioral intentions, with the aim of assessing the success of ski resorts and providing suggestions for improvement. Three self-administered surveys and 16 interviews were conducted on a convenience sample of leisure skiers in two major ski destinations within two hours’ drive from Beijing: the Nanshan and Jundushan ski resorts. Of the 680 questionnaires distributed, 416 usable copies were returned, a response rate of 61.2%. The questionnaire used for the study was developed from the existing literature on skiers' 'push' factors (intrinsic desire) and 'pull' factors (attractiveness of a destination), as well as leisure sport satisfaction. The scale comprises four parts: skiers’ demographic profiles, their perceived service quality (including ski resorts’ infrastructure, expense, safety and comfort, convenience, daily needs support, skill development support, and accessibility), their overall levels of satisfaction (satisfaction with the service and the experience), and their behavioral intentions (including loyalty, future visitation and greater tolerance of price increases). The demographic profiles show that among the 220 males and 196 females in the survey, a vast majority of the skiers are aged 17-39 (87.2%), 64.7% are not married, nearly half (48.3%) have a monthly family income exceeding 10,000 yuan (USD 1,424), and 80% are beginners or intermediate skiers. The regression examining the influence of service quality on skier satisfaction reveals that service quality accounts for 44.4% of the variance in skier satisfaction, with safety and comfort, expense, skill development support, and accessibility contributing significantly, in descending order. Another regression analyzing the influence of service quality and skier satisfaction on behavioral intentions shows that these account for 39.1% of the variance in skiers’ behavioral intentions, with skier satisfaction, safety and comfort, expense, and accessibility as the significant predictors, in descending order; a comparison between groups also indicates that for expert skiers the significant variables are skier satisfaction, skill development support, and safety and comfort. Suggestions are thus made for ski resorts and other stakeholders to improve skier satisfaction and increase visitation: developing diversified ski courses to meet the demands of skiers of different skill levels and to reduce crowding; installing sufficient chairlifts and magic carpets; reinforcing safety measures and medical staffing; further exploiting their various resources to lower the cost of ski passes, equipment rental, accommodation and dining; adding more bus lines and/or developing platforms for skiers’ car-pooling; and offering diversified skiing activities with local flavor for better entertainment.
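For readers who want to reproduce the kind of regression step reported above, the following sketch fits an ordinary least-squares model of overall satisfaction on the seven service-quality dimensions and reports the explained variance. The file name and column labels are hypothetical placeholders, not the authors' dataset.

```python
# Illustrative sketch of the reported regression step (service-quality dimensions
# predicting overall skier satisfaction); column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("skier_survey.csv")        # hypothetical file of questionnaire ratings
predictors = ["infrastructure", "expense", "safety_comfort", "convenience",
              "daily_needs", "skill_development", "accessibility"]

X = sm.add_constant(df[predictors])          # add intercept term
model = sm.OLS(df["satisfaction"], X).fit()  # ordinary least squares

print(model.rsquared)                        # share of variance explained (reported as 44.4%)
print(model.summary())                       # coefficients and significance tests
```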

Keywords: behavioral intentions, leisure skiing, service quality, skier satisfaction

Procedia PDF Downloads 75
348 Enhancing AI for Global Impact: Conversations on Improvement and Societal Benefits

Authors: C. P. Chukwuka, E. V. Chukwuka, F. Ukwadi

Abstract:

This paper focuses on the advancement and societal impact of artificial intelligence (AI) systems. It explores the need for a theoretical framework in corporate governance, specifically in the context of 'hybrid' companies that combine private and government ownership. The paper emphasizes the potential of AI to address the challenges faced by these companies and highlights the importance of the less-explored state model in corporate governance. The aim of this research is to enhance AI systems for global impact and positive societal outcomes; it explores the role of AI in refining corporate governance in hybrid companies and seeks nuanced insights into complex ownership structures. The methodology leverages the capabilities of AI to address the challenges hybrid companies face in corporate governance: the researchers analyze existing theoretical frameworks in corporate governance and integrate AI systems to improve problem-solving and the understanding of intricate systems. The paper suggests that improved AI systems have the potential to shape a more informed and responsible corporate landscape, since AI can uncover nuanced insights and navigate complex ownership structures in hybrid companies, leading to greater efficacy and positive societal outcomes. The theoretical importance of this research lies in exploring the role of AI in corporate governance, particularly in the context of hybrid companies. Data for this research are collected from the existing literature on corporate governance, with a specific focus on hybrid companies, together with material on AI capabilities and their application in corporate governance. The collected data are analyzed through a systematic review of existing theoretical frameworks in corporate governance; the researchers also analyze the capabilities of AI systems and their potential application to the challenges faced by hybrid companies, and the findings are synthesized and compared to identify patterns and potential improvements. The research concludes that AI systems have the potential to enhance corporate governance in hybrid companies, leading to greater efficacy and positive societal outcomes. By leveraging AI capabilities, nuanced insights can be uncovered and complex ownership structures navigated, shaping a more informed and responsible corporate landscape. The findings highlight the importance of integrating AI to refine problem-solving and the understanding of intricate systems for global impact.

Keywords: advancement, artificial intelligence, challenges, societal impact

Procedia PDF Downloads 36
347 Measuring the Impact of Implementing an Effective Practice Skills Training Model in Youth Detention

Authors: Phillipa Evans, Christopher Trotter

Abstract:

Aims: This study aims to examine the effectiveness of a practice skills framework implemented in three youth detention centres operated by Juvenile Justice in New South Wales (NSW), Australia. The study is supported by a grant from the Australian Research Council and NSW Juvenile Justice. Recent years have seen a number of incidents in youth detention centres in Australia and elsewhere. These have led to inquiries and reviews, with some suggesting that detention centres often do not even meet basic human rights standards and do little to provide opportunities for the rehabilitation of residents. While there is an increasing body of research suggesting that community-based supervision can be effective in reducing recidivism if appropriate skills are used by supervisors, there has been less work considering worker skills in youth detention settings. The research that has been done, however, suggests that teaching interpersonal skills to youth officers may be effective in enhancing the rehabilitation culture of centres. Positive outcomes have been seen, for example, in a UK detention centre where staff were taught to deliver five-minute problem-solving interventions. The aim of this project is to examine the effectiveness of training and coaching youth detention staff in three NSW detention centres in interpersonal practice skills. Effectiveness is defined in terms of reductions in the frequency of critical incidents and improvements in the well-being of staff and young people. The research is important as the results may lead to the development of more humane and rehabilitative experiences for young people. Method: The study involves training staff in core effective practice skills and supporting staff in the use of those skills through supervision and debriefing. The core effective practice skills include role clarification, pro-social modelling, brief problem solving, and relationship skills. The training also addresses some of the background to criminal behaviour, including trauma. Data regarding critical incidents and well-being before and after the program implementation are being collected. This involves interviews with staff and young people, the completion of well-being scales, and the examination of departmental records regarding critical incidents. In addition to the before-and-after comparison, a matched control group which is not offered the intervention is also being used. The study includes more than 400 young people and 100 youth officers across six centres, including the control sites. Data collection includes interviews with workers and young people and critical incident data such as assaults, use of lock-ups and confinement, and school attendance. Data collection also includes analysing video-tapes of centre activities for changes in the use of staff skills. Results: The project is currently underway, with ongoing training and supervision. Early results will be available for the conference.

Keywords: custody, practice skills, training, youth workers

Procedia PDF Downloads 80
346 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports

Authors: Jonardan Koner, Avinash Purandare

Abstract:

In today’s globalized world, international business is a key driver of a country's growth. Connecting ports, road networks, and rail networks are among the strategic areas that sustain a country's international business. India's international business is booming in both exports and imports. Ports play a central part in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, which is a major asset, as it has enabled the development of a large number of major and minor ports that contribute to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To assess the comparative strength of Indian ports against similar South-East Asian ports, the study considers the following objectives: (i) to identify the key parameters of an international mega container port; (ii) to compare the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to users of the ports; and (iii) to measure and compare the growth of the five selected container ports' throughput over time. The study is based on both primary and secondary databases. A linear time trend analysis is carried out to show the trend in the volume of exports, imports and total goods/services handled by individual ports over the years. A comparative trend analysis is carried out for the five selected ports on cargo traffic handled, in terms of tonnage (weight) and number of containers (TEUs), and between containerized and non-containerized cargo traffic. The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings, consolidated comparative line and bar charts of factor ratings for the five selected ports, and the distribution of ratings in frequency terms. A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of 34 selected explanatory variables on the 'Overall Performance of the Port' for each of the five selected ports. The research outcome is of high significance to the stakeholders of Indian container handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, the competing ports in the neighbouring region. The study analyses the feedback ratings for 35 selected factors regarding physical infrastructure and the services rendered to port users. This feedback provides valuable data for carrying out improvements in the facilities provided to port users, which would help them carry out their work in a more efficient manner.
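As a minimal illustration of the linear time-trend and capacity-forecasting step mentioned above, the sketch below fits a straight line to an annual throughput series and extrapolates it to 2030. The yearly figures are hypothetical placeholders rather than the study's data.

```python
# Minimal sketch of a linear time-trend fit and 2030 extrapolation; the
# throughput series below is a hypothetical placeholder, not the study's data.
import numpy as np

years = np.array([2014, 2015, 2016, 2017, 2018, 2019])
teu_millions = np.array([4.45, 4.49, 4.50, 4.83, 5.05, 5.13])   # hypothetical annual TEUs

slope, intercept = np.polyfit(years, teu_millions, deg=1)        # linear trend: y = a*t + b
forecast_2030 = slope * 2030 + intercept
print(f"trend: {slope:.3f} million TEUs/year; 2030 forecast ~ {forecast_2030:.2f} million TEUs")
```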

Keywords: throughput, twenty-foot equivalent units, TEUs, cargo traffic, shipping lines, freight forwarders

Procedia PDF Downloads 114
345 Curcumin Nanomedicine: A Breakthrough Approach for Enhanced Lung Cancer Therapy

Authors: Shiva Shakori Poshteh

Abstract:

Lung cancer is a highly prevalent and devastating disease, representing a significant global health concern with profound implications for healthcare systems and society. Its high incidence, mortality rates, and late-stage diagnosis contribute to its formidable nature. To address these challenges, nanoparticle-based drug delivery has emerged as a promising therapeutic strategy. Curcumin (CUR), a natural compound derived from turmeric, has garnered attention as a potential nanomedicine for lung cancer treatment. Nanoparticle formulations of CUR offer several advantages, including improved drug delivery efficiency, enhanced stability, controlled release kinetics, and targeted delivery to lung cancer cells. CUR exhibits a diverse array of effects on cancer cells. It induces apoptosis by upregulating pro-apoptotic proteins, such as Bax and Bak, and downregulating anti-apoptotic proteins, such as Bcl-2. Additionally, CUR inhibits cell proliferation by modulating key signaling pathways involved in cancer progression. It suppresses the PI3K/Akt pathway, crucial for cell survival and growth, and attenuates the mTOR pathway, which regulates protein synthesis and cell proliferation. CUR also interferes with the MAPK pathway, which controls cell proliferation and survival, and modulates the Wnt/β-catenin pathway, which plays a role in cell proliferation and tumor development. Moreover, CUR exhibits potent antioxidant activity, reducing oxidative stress and protecting cells from DNA damage. The use of CUR as a standalone treatment, however, is limited by poor bioavailability, a lack of targeting, and susceptibility to degradation. Nanoparticle-based delivery systems can overcome these challenges: they enhance CUR’s bioavailability, protect it from degradation, and improve its absorption. Further, nanoparticles enable targeted delivery to lung cancer cells through surface modifications or ligand-based targeting, ensure sustained release of CUR to prolong therapeutic effects and reduce administration frequency, and facilitate penetration through the tumor microenvironment, thereby enhancing CUR’s access to cancer cells. Thus, nanoparticle-based CUR delivery systems promise to improve lung cancer treatment outcomes. This article provides an overview of lung cancer, explores CUR nanoparticles as a treatment approach, discusses the benefits and challenges of nanoparticle-based drug delivery, and highlights prospects for CUR nanoparticles in lung cancer treatment. Future research aims to optimize these delivery systems for improved efficacy and patient prognosis in lung cancer.

Keywords: lung cancer, curcumin, nanomedicine, nanoparticle-based drug delivery

Procedia PDF Downloads 51
344 A Conceptual Study for Investigating the Preliminary State of Energy at the Birth of Universe and Understanding Its Emergence From the State of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous process of expansion, resulting in a reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times can be studied; this is the subject of the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe to result from a sudden expansion. However, highly accurate measurements reveal an essentially uniform temperature across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred in the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the quantum fluctuations of this stage provide a means for studying the types of imperfections with which the universe would have begun. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims to address the singularity issue by introducing a state of energy called the "neutral state," possessing an energy level referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper of the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The proposed concept in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its subsequent evolution, and discusses the possibility that the base energy is one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 16
343 Investigation of Mass Transfer for RPB Distillation at High Pressure

Authors: Amiza Surmi, Azmi Shariff, Sow Mun Serene Lock

Abstract:

In recent decades, there has been a significant emphasis on the pivotal role of Rotating Packed Beds (RPBs) in absorption processes, encompassing the removal of Volatile Organic Compounds (VOCs) from groundwater, deaeration, CO2 absorption, desulfurization, and similar critical applications. The primary focus is on elevating mass transfer rates, enhancing separation efficiency, curbing power consumption, and mitigating pressure drops. Additionally, substantial efforts have been invested in exploring the adaptation of RPB technology for offshore deployment. This study delves into the intricacies of nitrogen removal under low-temperature and high-pressure conditions, employing the high-gravity principle via an innovative RPB distillation concept, with a specific emphasis on optimizing mass transfer. To the best of the authors' knowledge and based on a comprehensive literature review, no cryogenic experimental testing of nitrogen removal via RPB has previously been conducted. The research identifies pivotal process control factors through meticulous experimental testing, with pressure, reflux ratio, and reboil ratio emerging as critical determinants of the desired separation performance. The results are remarkable, with nitrogen reduced to less than one mole% in the Liquefied Natural Gas (LNG) product and less than three mole% methane in the nitrogen-rich gas stream. The study further examines the mass transfer coefficient, revealing a noteworthy trend of decreasing Number of Transfer Units (NTU) and Area of Transfer Units (ATU) as the rotational speed increases. Notably, the condenser and reboiler impose varying demands depending on the operating pressure, with operation at 12 bar requiring a more substantial duty than operation of the RPB at 15 bar. In pursuit of optimal energy efficiency, a sensitivity analysis is conducted, pinpointing the combination of pressure and rotating speed that minimizes overall energy consumption. These findings underscore the ability of the RPB distillation approach to achieve efficient separation, even under the challenging conditions of low temperature and high pressure. This achievement is attributed to a rigorous process control framework that diligently manages the operational pressure and temperature profile of the RPB. Nonetheless, the study's conclusions point towards the need for further research to address potential scaling challenges and associated risks, paving the way for the industrial implementation of this transformative technology.
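To make the NTU/ATU terminology concrete, one common transfer-unit convention for an RPB is sketched below; the exact definitions used in this study may differ, so the relations are offered only as a reference point under that assumption.

```latex
% Standard transfer-unit bookkeeping (an assumption; the study's definitions may differ):
% NTU from the gas-phase composition change, and ATU defined so that the annular
% packing area plays the role that column height Z = NTU x HTU plays in a conventional column.
\mathrm{NTU} = \int_{y_{\mathrm{out}}}^{y_{\mathrm{in}}} \frac{dy}{y^{*} - y},
\qquad
\pi\left(r_{o}^{2} - r_{i}^{2}\right) = \mathrm{NTU} \times \mathrm{ATU}
```

Here y is the vapour-phase mole fraction of the light component, y* its equilibrium value, and r_i and r_o the inner and outer packing radii; under this convention, a smaller ATU indicates more effective gas-liquid mass transfer per unit packing area.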

Keywords: mass transfer coefficient, nitrogen removal, liquefaction, rotating packed bed

Procedia PDF Downloads 28