Search results for: complex correlation measure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11477


1577 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density

Authors: Jill Round

Abstract:

In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions to satisfy the demands of all the participants by keeping abreast of regulatory change. In recent years, progress has been made regarding frameworks and the development of rules, standards, and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance, and the organization’s culture is of special interest, as it requires a well-resourced risk controlling function, compliance function, and internal audit function. In the past years, the relevance of these functions, which make up the so-called Three Lines of Defense, has moved from the backroom to the boardroom. The approach of the model can vary based on various organizational characteristics. Due to the intense regulatory requirements, organizations operating in the financial sector have more mature models. In less regulated industries, there is less clarity about where tasks are allocated. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. The research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, the risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied, making use of surveys conducted in the research field. 
It aims to describe (i) the theoretical model regarding the applicable linear relationships, (ii) the causal relationships between multiple predictors (exogenous variables) and multiple dependent variables (endogenous variables), taking into consideration (iii) the unobservable latent variables and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of (1) the measurement model and (2) the structural model. There is a detectable correlation regarding the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation. Supervisory practices reinforce the regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
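The cause-effect logic described here, a latent supervisory-practice variable driving a latent regulatory-density variable, each observed only through error-prone indicators, can be illustrated with a small simulation. This is a hedged sketch with invented coefficients, not the authors' fitted model: a single structural path of 0.7 is simulated, and a naive least-squares estimate on the noisy indicators recovers an attenuated value, which is exactly the measurement-error bias a full SEM is designed to correct.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical latent exogenous variable: intensity of supervisory practices.
supervision = rng.normal(size=n)

# Structural model: latent regulatory density responds to supervision
# through a single (invented) path coefficient of 0.7.
regulation = 0.7 * supervision + rng.normal(scale=0.5, size=n)

# Measurement model: each latent variable is observed only through a
# noisy indicator (the survey responses).
obs_supervision = supervision + rng.normal(scale=0.3, size=n)
obs_regulation = regulation + rng.normal(scale=0.3, size=n)

# Naive least-squares estimate of the structural path from the indicators;
# measurement error in the predictor biases it below the true 0.7.
path = np.linalg.lstsq(obs_supervision[:, None], obs_regulation, rcond=None)[0][0]
```

Because the indicator of supervision carries noise, the recovered path lands below 0.7; an SEM that models the measurement equations separately removes this attenuation.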

Keywords: risk management, structural equation model, supervisory practice, three lines of defense

Procedia PDF Downloads 204
1576 Negotiating Autonomy in Women’s Political Participation: The Case of Elected Women’s Representatives from Jharkhand

Authors: Rajeshwari Balasubramanian, Margit Van Wessel, Nandini Deo

Abstract:

The participation of women in local bodies witnessed a rise after the implementation of 73rd and 74th Amendments to the Indian Constitution which created quotas for women representatives. However, even when participation increased, it did not translate into meaningful contributions by women in local bodies. This led some civil society organisations (CSOs) to begin working with women panchayat representatives in various states to build their capacity for political participation. The focus of this paper is to study capacity building training by CSOs in Jharkhand. The paper maps how the training helps women elected representatives to negotiate their autonomy at multiple levels. The paper describes the capacity building program conducted by an international feminist organisation along with its seven local partners in Jharkhand. The central question that the study asks is: How does capacity building training by CSOs in Jharkhand impact the autonomy of elected women representatives? It uses a qualitative research methodology based on empirical data gathered through field visits in four districts of Jharkhand (Chatra, Hazaribagh, East Singhbum and Ranchi) where the program was implemented for three years. The study found that women elected representatives had to develop strategies to negotiate their choice to move out of their homes and attend the training conducted by CSOs. The ability to participate in the training programs itself was a significant achievement of personal autonomy for many women. The training provided them a platform to voice their opinion and appreciate their own value as panchayat leaders. This realization allowed them to negotiate their presence and a space for themselves in Gram panchayats. A Foucauldian approach to analyze capacity building workshops might lead us to see them as systems in which CSOs impose a form of governmentality on rural elected representatives. 
Instead, what we see here is a much more complex negotiation of agency in which the CSO creates spaces and practices that allow women to achieve their own forms of autonomy. The study concludes that the impact of the training on the autonomy of these women is based on their everyday negotiations of time, space and mobility. Autonomy for these elected women representatives is also contextual and relative, as they seem to realize it during the training process. The training allows the women to not only negotiate their participation in panchayats but also challenge everyday practices that are rooted in patriarchy.

Keywords: autonomy, feminist organization, local bodies, political participation

Procedia PDF Downloads 132
1575 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography

Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner

Abstract:

Nowadays, three-dimensional Cone Beam CT (CBCT) has turned into a widespread clinical routine imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as daily pretreatment patient alignment for radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the amount of radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections, which could therefore lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations so as to maximize the performance of the reconstructed image at particular regions of interest. To achieve this, we developed a box phantom consisting of several small target polytetrafluoroethylene spheres at regular distances throughout the entire phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D Point Spread Function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the Full Width at Half Maximum (FWHM) of the local PSFs, each related to a particular target. A lower FWHM value indicates better spatial resolution of the reconstruction results at the target area. One important feature of interventional radiology is that we have very well-known imaging targets, as prior knowledge of patient anatomy (e.g., a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and consider it as the digital phantom in our simulations to find the optimal trajectory for a specific target. 
Based on the simulation phase, we obtain the optimal trajectory, which can then be applied to the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and real data acquisition. Our experimental results, based on both simulation and real data, show that the proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections in order to localize the targets. Our results show the proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory does, while using just one third of the number of projections. In conclusion, we demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.
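The FWHM criterion used above is easy to state concretely. The sketch below is illustrative, not the authors' implementation: it measures the FWHM of a sampled 1D PSF by linearly interpolating the half-maximum crossings, and for a unit-sigma Gaussian the result should be close to the theoretical 2·sqrt(2·ln 2) ≈ 2.3548.

```python
import numpy as np

def fwhm(x, psf):
    """Full Width at Half Maximum of a sampled 1D point spread function."""
    half = psf.max() / 2.0
    above = np.where(psf >= half)[0]          # samples at or above half max
    i, j = above[0], above[-1]
    # Linearly interpolate the exact half-maximum crossing on each side.
    left = np.interp(half, [psf[i - 1], psf[i]], [x[i - 1], x[i]])
    right = np.interp(half, [psf[j + 1], psf[j]], [x[j + 1], x[j]])
    return right - left

# Unit-sigma Gaussian PSF: theoretical FWHM = 2*sqrt(2*ln 2) ~ 2.3548.
x = np.linspace(-5.0, 5.0, 1001)
width = fwhm(x, np.exp(-x**2 / 2.0))
```

In the study, a local PSF of this kind would be measured around each target sphere, and the trajectory scored by how small the resulting FWHM values are.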

Keywords: CBCT, C-arm, reconstruction, trajectory optimization

Procedia PDF Downloads 124
1574 Metamorphic Approach in Architecture Studio to Re-Imagine Drawings in Acknowledgement of Architectural/Artistic Identity

Authors: Hassan Wajid, Syed T. Ahmed, Syed G. Haider Jr., Razia Latif, Ahsan Ali, Maira Anam

Abstract:

The phenomenon of metamorphosis can be associated with any object, organism, or structure gradually and progressively going through a change of systemic or morphological form. This phenomenon can be integrated into the teaching of drawing to architecture students. In architectural drawings, the main focus and purpose of metamorphosis are not to completely imitate any object. In the process of drawing, the changes in systemic or morphological form continue until the process is complete, and the visuals of the complete process change the drawing, opening up possibilities for the imagination of the perceivers. Metamorphosis in architectural drawings begins with an initial form and, through various noticeable stages, ends up in a final form or manifestation. How much of the initial form is manifested in the final form, and progressively among the various intermediate stages, becomes an indication of the nature of metamorphosis as a phenomenon. It is important at this stage to clarify that the term metamorphosis is presently being co-opted from its original domain, usually the life sciences. In the current exercise, the architectural drawings are to act as an operative analog process transforming one image of art and/or architecture in its broadest sense. That composition is claimed to have come from one source (an individual work, a cultural artifact, a civilizational remain). It dialectically meets, opposes, or confronts some carefully chosen alien opposite from a different domain. As an example, the layers of a detailed drawing of a Turkish prayer rug with a 5:7 ratio can be observed over a detailed architectural plan of a religious, historical complex, such that the two drawings, though at markedly different scales, could dialectically converse with one another through their mutual congruencies. 
In the final stage, the exercise resolves contradictions across the scales to initiate the analogous role of a metamorphosed third reality, which suggests a previously unacknowledged architectural or artistic identity. The proposed paper explores the trajectory of reproduction by analyzing drawings through detailed drawing stages, and analyzes challenges as well as opportunities in the discovered realm of imagination. This description further aims at identifying factors influencing creativity and innovation in producing architectural drawings through the process of observing drawings from inception to the concluding stage.

Keywords: architectural drawings, metamorphosis, perceptions, discovery

Procedia PDF Downloads 91
1573 Suicide Conceptualization in Adolescents through Semantic Networks

Authors: K. P. Valdés García, E. I. Rodríguez Fonseca, L. G. Juárez Cantú

Abstract:

Suicide is a global, multidimensional and dynamic mental health problem, which requires constant study for its understanding and prevention. When research on this phenomenon is done, it is necessary to consider the different characteristics it may have because of individual and sociocultural variables; the importance of this consideration is related to the generation of effective treatments and interventions. Adolescents are a vulnerable population due to the characteristics of their developmental stage. The investigation was carried out with the objective of identifying and describing how adolescents conceptualize suicide and, in this process, finding possible differences between men and women. The study was carried out in Saltillo, Coahuila, Mexico. The sample was composed of 418 volunteer students aged between 11 and 18 years. The ethical aspects of the research were reviewed and considered in all processes of the investigation with the participants, their parents and the schools to which they belonged; psychological attention was offered to the participants, and preventive workshops were carried out in the educational institutions. Natural semantic networks were the instrument used, since this hybrid method allows one to find and analyze the social concept of a phenomenon; in this case, the word suicide was used as an evocative stimulus, and participants were asked to evoke at least five words and a maximum of ten that they thought were related to suicide, and then hierarchize them according to their closeness to the construct. The subsequent analysis was carried out with Excel, yielding the semantic weights, affective loads and the distances between each of the semantic fields established according to the words reported by the subjects. The results showed similarities in the conceptualization of suicide between male and female adolescents. 
Seven semantic fields were generated; the words were related in the discourse analysis: 1) death, 2) possible triggering factors, 3) associated moods, 4) methods used to carry it out, 5) psychological symptomatology that could affect it, 6) words associated with a rejection of suicide, and finally, 7) specific objects used to carry it out. One of the necessary aspects to consider in investigations of complex issues such as suicide is to have a diversity of instruments and techniques that adjust to the characteristics of the population and that allow the phenomena to be understood through social constructs and not only theory. The constant study of suicide is a pressing need; the loss of life from emotional difficulties that could be addressed through psychiatric and psychological methods requires governments and professionals to pay attention and to work with at-risk populations.
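The semantic-weight computation behind natural semantic networks can be sketched in a few lines. The responses, the ten-word cap, and the weighting rule below (a word at rank r contributes 10 - r + 1, so closer words weigh more) are illustrative assumptions, not the authors' exact Excel procedure.

```python
from collections import defaultdict

# Hypothetical responses: each participant lists words ranked by closeness
# to the stimulus word "suicide" (rank 1 = closest). Up to ten words allowed.
responses = [
    [("death", 1), ("sadness", 2), ("depression", 3)],
    [("death", 1), ("depression", 2), ("pain", 3)],
    [("sadness", 1), ("death", 2)],
]

MAX_RANK = 10  # participants could give at most ten words

# Semantic weight (M value): each mention contributes MAX_RANK - rank + 1,
# summed across participants, so frequently and closely evoked words dominate.
m_values = defaultdict(int)
for participant in responses:
    for word, rank in participant:
        m_values[word] += MAX_RANK - rank + 1

# Order the semantic field by weight (the core of the network analysis).
field = sorted(m_values.items(), key=lambda kv: -kv[1])
```

With these invented responses, "death" accumulates the highest weight, which is how a dominant semantic field like the study's "death" category would emerge from real data.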

Keywords: adolescents, psychological construct, semantic networks, suicide

Procedia PDF Downloads 98
1572 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach

Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton

Abstract:

Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore, tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both stand and individual levels, but this presentation will focus on predicting the growth of E. marginata at the individual tree level. More specifically, the study wanted to investigate how well individual E. marginata tree growth could be predicted by considering the diameter and height of the tree at the start of the growth period, and whether this prediction could be improved by also accounting for the competition from neighbouring trees in different ways. The study also wanted to investigate how many neighbouring trees, or what neighbourhood distance, needed to be considered when accounting for competition. To achieve this aim, we examined the Pearson correlation coefficient among competition indices (CIs) and between CIs and dbh growth, and selected the competition index that best predicts the diameter growth of individual trees of E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed effects modelling approaches. Individual tree growth models were developed for thinned and unthinned stands separately. The developed models were validated using two approaches. In the first approach, models were validated using a subset of data that was not used in model fitting. In the second approach, the model of one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth. This prediction was improved when competition was included in the model. 
The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. The models with spatial competition indices were validated as more accurate than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number), or the competitors within ~10 m (by distance) of the base of the subject tree, are included in the model, which can reduce the time and cost of collecting information about the competitors. As competition from neighbours was a significant predictor with a negative effect on growth, we recommend including neighbourhood competition when predicting growth, and considering thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies in larger plots, and with a larger number of plots than those used in the present study, are recommended.
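As one concrete example of a spatial, distance-weighted competition index of the kind compared in the study, the sketch below implements the classic Hegyi (1974) index with a neighbourhood radius. The tree coordinates and diameters are invented, and the study's actual CI may differ.

```python
import math

def hegyi_index(subject, neighbours, radius=10.0):
    """Hegyi (1974) competition index for a subject tree:
    CI_i = sum over neighbours j within `radius` of (d_j / d_i) / dist_ij,
    where d is diameter at breast height (dbh) and dist is distance in m."""
    xi, yi, di = subject
    ci = 0.0
    for xj, yj, dj in neighbours:
        dist = math.hypot(xj - xi, yj - yi)
        if 0.0 < dist <= radius:
            ci += (dj / di) / dist   # bigger, closer neighbours compete more
    return ci

# Hypothetical plot: subject tree at the origin with 30 cm dbh.
subject = (0.0, 0.0, 30.0)
neighbours = [(3.0, 4.0, 45.0),    # 5 m away, larger -> strong competitor
              (8.0, 0.0, 15.0),    # 8 m away, smaller -> weak competitor
              (20.0, 0.0, 60.0)]   # beyond the 10 m radius -> ignored
ci = hegyi_index(subject, neighbours)
```

A growth model would then include `ci` as a (negative-effect) predictor alongside initial diameter and height, and the radius or competitor count would be tuned as the abstract describes.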

Keywords: competition, growth, model, thinning

Procedia PDF Downloads 111
1571 The Development of a Precision Irrigation System for Durian

Authors: Chatrabhuti Pipop, Visessri Supattra, Charinpanitkul Tawatchai

Abstract:

Durian is one of the top agricultural products exported by Thailand, and there is massive market potential for the durian industry. While the global demand for Thai durians, especially the demand from China, is very high, Thailand's durian supply is far from satisfying this strong demand. Poor agricultural practices result in low yields and poor fruit quality. Most irrigation systems currently used by the farmers are fixed-schedule or fixed-rate systems that ignore actual weather conditions and crop water requirements. In addition, the emerging technologies are too difficult and complex, and their prices too high, for the farmers to adopt and afford. Many farmers leave the durian trees to grow naturally. Without a proper irrigation and nutrient management system, durians are vulnerable to a variety of issues, including stunted growth, failure to flower, diseases, and death. Technical development and research on durian are much needed to support the wellbeing of the farmers and the economic development of the country. However, there are a limited number of studies or development projects for durian, because durian is a perennial crop requiring a long time to obtain reportable results. This study, therefore, aims to address the problem of durian production by developing an autonomous and precision irrigation system. The system is designed and equipped with an industrial programmable controller, a weather station, and a digital flow meter. Daily water requirements are computed based on weather data such as rainfall and evapotranspiration for daily irrigation with variable flow rates. A prediction model is also designed as a part of the system to enhance the irrigation schedule. Before the system was installed in the field, a simulation model was built and tested in a laboratory setting to ensure its accuracy. Water consumption was measured daily before and after the experiment for further analysis. 
With this system, the crop water requirement is precisely estimated and optimized based on the data from the weather station. Durian will be irrigated at the right amount and at the right time, offering the opportunity for higher yield and higher income to the farmers.
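The daily water-requirement logic can be sketched with the standard crop-coefficient approach (crop evapotranspiration ETc = Kc × ET0; irrigate the deficit left after rainfall). The Kc value and the week of weather-station readings below are placeholders, not figures from the study.

```python
def daily_irrigation_mm(et0_mm, rainfall_mm, kc=0.85):
    """Daily irrigation depth (mm) from the crop-coefficient approach:
    ETc = Kc * ET0; irrigate whatever deficit rainfall did not cover.
    Kc = 0.85 is a placeholder value for durian, not a measured one."""
    etc = kc * et0_mm
    return max(0.0, etc - rainfall_mm)

# One hypothetical week of weather-station readings: (ET0, rainfall) in mm.
week = [(5.0, 0.0), (4.2, 1.0), (6.1, 0.0), (3.8, 6.0),
        (5.5, 2.0), (4.9, 0.0), (5.2, 0.5)]
schedule = [daily_irrigation_mm(et0, rain) for et0, rain in week]
total = sum(schedule)   # weekly depth the variable-flow system must deliver
```

On the rainy fourth day the deficit is zero, so no water is scheduled; this is exactly the kind of weather-driven variability a fixed-schedule system ignores.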

Keywords: durian, precision irrigation, precision agriculture, smart farm

Procedia PDF Downloads 97
1570 The Composition of Biooil during Biomass Pyrolysis at Various Temperatures

Authors: Zoltan Sebestyen, Eszter Barta-Rajnai, Emma Jakab, Zsuzsanna Czegeny

Abstract:

Extraction of the energy content of lignocellulosic biomass is one of the possible pathways to reduce the greenhouse gas emissions derived from the burning of fossil fuels. The application of bioenergy can mitigate a country's energy dependency on foreign natural gas and petroleum. The diversity of plant materials makes the utilization of raw biomass in power plants difficult. This problem can be overcome by the application of thermochemical techniques. Pyrolysis is the thermal decomposition of raw materials under an inert atmosphere at high temperatures, which produces pyrolysis gas, biooil and charcoal. The energy content of these products can be exploited by further utilization. The differences in the chemical and physical properties of raw biomass materials can be reduced by the use of torrefaction. Torrefaction is a promising mild thermal pretreatment method performed at temperatures between 200 and 300 °C in an inert atmosphere. The goal of the pretreatment, from a chemical point of view, is the removal of water and the acidic groups of hemicelluloses, or of the whole hemicellulose fraction, with minor degradation of cellulose and lignin in the biomass. Thus, the stability of biomass against biodegradation increases, and so does its energy density. The volume of the raw materials decreases, so the expenses of transportation and storage are reduced as well. Biooil is the major product during pyrolysis and an important by-product during torrefaction of biomass. The composition of biooil mostly depends on the quality of the raw materials and the applied temperature. In this work, thermoanalytical techniques have been used to study the qualitative and quantitative composition of the pyrolysis and torrefaction oils of a woody sample (black locust) and two herbaceous samples (rape straw and wheat straw). 
The biooil contains C5 and C6 anhydrosugar molecules, as well as aromatic compounds, originating from hemicellulose, cellulose, and lignin, respectively. In this study, special emphasis was placed on the formation of the lignin monomeric products. The structure of the lignin fraction is different in wood and in herbaceous plants. According to the thermoanalytical studies, the decomposition of lignin starts above 200 °C and ends at about 500 °C. The lignin monomers are present among the components of the torrefaction oil even at relatively low temperatures. We established that the concentration and the composition of the lignin products vary significantly with the applied temperature, indicating that different decomposition mechanisms dominate at low and high temperatures. The evolution of decomposition products as well as the thermal stability of the samples were measured by thermogravimetry/mass spectrometry (TG/MS). The differences in the structure of the lignin products of woody and herbaceous samples were characterized by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). As a statistical method, principal component analysis (PCA) was used to find correlations between the composition of the lignin products of the biooil and the applied temperatures.
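The PCA step can be illustrated with a minimal SVD-based sketch on an invented matrix of lignin-product concentrations across temperatures; when the columns are strongly correlated, the first principal component captures nearly all the variance, which is the kind of temperature-driven structure such an analysis looks for. The numbers below are placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical concentrations of four lignin pyrolysis products (columns)
# measured at five torrefaction/pyrolysis temperatures (rows).
X = np.array([[0.2, 0.1, 0.05, 0.02],
              [0.5, 0.3, 0.10, 0.05],
              [1.1, 0.7, 0.30, 0.15],
              [1.8, 1.2, 0.60, 0.35],
              [2.4, 1.7, 0.95, 0.60]])

# Principal component analysis via SVD of the mean-centred matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)   # fraction of variance per component
scores = Xc @ Vt.T                # sample coordinates on the components
```

Plotting the first-component scores against temperature would reveal whether one decomposition mechanism dominates across the range or whether the loadings shift between low- and high-temperature regimes.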

Keywords: pyrolysis, torrefaction, biooil, lignin

Procedia PDF Downloads 304
1569 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis, together with the use of artificial neural networks (ANNs), was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE) to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known, previously developed gas geothermometers, was statistically evaluated by using an external database to avoid a bias problem. 
The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from the gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
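The RMSE-based comparison on an external test set can be sketched as follows. The temperatures and candidate predictions are invented; the point is only the selection rule the abstract describes: compute each geothermometer's RMSE against the measured bottomhole temperatures and keep the lowest.

```python
import math

def rmse(measured, predicted):
    """Root Mean Squared Error between measured and predicted temperatures."""
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted))
                     / len(measured))

# Hypothetical bottomhole temperatures (deg C) from an external test set
# and the predictions of three candidate geothermometers.
bht_measured = [250.0, 280.0, 300.0, 320.0]
candidates = {
    "geothermometer_A": [245.0, 285.0, 295.0, 330.0],
    "geothermometer_B": [230.0, 260.0, 310.0, 300.0],
    "geothermometer_C": [252.0, 279.0, 301.0, 318.0],
}

errors = {name: rmse(bht_measured, pred) for name, pred in candidates.items()}
best = min(errors, key=errors.get)   # lowest RMSE wins the comparison
```

Evaluating on a database held out from calibration, as the authors do, keeps this ranking honest: a geothermometer that merely memorized its training wells would score poorly here.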

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 330
1568 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The correct allocation of improvement programs has been of growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be most effective and to survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap that is studied in depth in this work. The purpose of this work is to identify the best strategy to allocate improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) a time to repair improvement strategy directed towards the two capacity constrained resources, and (x) a time between failures improvement strategy directed towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed and hybrid. 
Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to the academic literature is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature. Another important contribution is the allocation problem with two CCRs and the possibility of having floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and the different positions of the capacity constrained resources. Finally, it is possible to state that both strategies, hybrid time to repair improvement and hybrid time between failures improvement, delivered better results than the respective distributed strategies. The main limitations of this study concern the particular flow shop analyzed. Future work can investigate different flow shop configurations, such as varying numbers of workstations, different numbers of products, or even different positions of the two capacity constrained resources.
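The trade-off the strategies explore can be made concrete with the standard Factory Physics availability relation A = MTBF / (MTBF + MTTR), which scales a workstation's effective rate. The rates below are hypothetical; the sketch only shows that, from the same starting point, a 25% MTTR reduction and a 25% MTBF increase do not buy the same effective capacity, which is why allocation matters.

```python
def availability(mtbf, mttr):
    """Long-run fraction of time a workstation is up."""
    return mtbf / (mtbf + mttr)

def effective_rate(natural_rate, mtbf, mttr):
    """Effective production rate after accounting for failures and repairs."""
    return natural_rate * availability(mtbf, mttr)

# Hypothetical capacity constrained resource: 100 units/h when up,
# failing every 40 h on average and taking 8 h to repair.
base = effective_rate(100.0, mtbf=40.0, mttr=8.0)

# Two improvement programs of equal nominal size (25%):
reduced_mttr = effective_rate(100.0, mtbf=40.0, mttr=6.0)   # MTTR -25%
raised_mtbf = effective_rate(100.0, mtbf=50.0, mttr=8.0)    # MTBF +25%
```

Here the MTTR reduction yields slightly more effective capacity than the MTBF increase; a simulation like the authors' System Dynamics-Factory Physics model extends this single-station comparison to whole-line lead time.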

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 107
1567 The Gaps of Environmental Criminal Liability in Armed Conflicts and Its Consequences: An Analysis under Stockholm, Geneva and Rome

Authors: Vivian Caroline Koerbel Dombrowski

Abstract:

Armed conflicts have always been the ultimate expression of power and, at the same time, of the lack of understanding among nations. Cities were destroyed, people were killed, assets were devastated. But these are not the only losses of a war: the environmental damage amounts to immeasurable losses in the short, medium and long term, and this is because no nation wants to bear that cost. Nations invest in military equipment, training and technical equipment, but the environmental account still finds gaps in international law. Considering such a generalization in rights protection, many nations are in imminent danger in a conflict if water is used as a weapon of mass destruction, especially if we consider important rivers such as the Jordan, the Euphrates and the Nile. The three main international documents on the subject were analyzed: the Stockholm Convention (1972), Additional Protocol I to the Geneva Conventions (1977) and the Rome Statute (1998). In addition, some references are researched in the doctrine, especially scientific articles, to substantiate with consistent data the extent of the damage, historical factors and decisions which have been successful. However, due to the lack of literature on this subject, the research tends to be exhaustive. From the study of the indicated material, it was noted that international law, both humanitarian and environmental, addresses in some of its instruments environmental protection in war conflicts, but these are generic and vague rules that do not define exactly what environmental damage is, nor set standards for measuring it. Taking into account the main conflicts of the twentieth century, World War II, the Vietnam War and the Gulf War, one must realize that the environmental consequences were extensive: landmines never deactivated, buried nuclear weapons, armaments and munitions destroyed in the soil, chemical weapons, not to mention the effects of some weapons when used (uranium, Agent Orange, etc.). 
Extending the search for more recent conflicts such as Afghanistan, it is proven that the effects on health of the civilian population were catastrophic: cancer, birth defects, and deformities in newborns. There are few reports of nations that, somehow, repaired the damage caused to the environment as a result of the conflict. In the pitch of contemporary conflicts, many nations fear that water resources are used as weapons of mass destruction, because once contaminated - directly or indirectly - can become a means of disguised genocide side effect of military objective. In conclusion, it appears that the main international treaties governing the subject mention the concern for environmental protection, however leave the normative specifications vacancies necessary to effectively there is a prevention of environmental damage in armed conflict and, should they occur, the repair of the same. Still, it appears that there is no protection mechanism to safeguard natural resources and avoid them to become a mass destruction weapon.

Keywords: armed conflicts, criminal liability, environmental damages, humanitarian law, mass weapon

Procedia PDF Downloads 407
1566 Assessing Information Dissemination of Group B Streptococcus in Antenatal Clinics, and Obstetricians and Midwives’ Opinions on the Importance of Doing So

Authors: Aakriti Chetan Shah, Elle Sein

Abstract:

Background/purpose: Group B Streptococcus (GBS) is the leading cause of severe early-onset infection in newborns, with the incidence of Early Onset Group B Streptococcus (EOGBS) in the UK and Ireland rising from 0.48 to 0.57 per 1000 births between 2000 and 2015. A WHO study conducted in 2017 showed that 38.5% of cases can result in stillbirth or infant death. This is an important problem because 20% of women worldwide have GBS colonisation and can suffer these detrimental effects. Current Royal College of Obstetricians and Gynaecologists (RCOG) guidelines do not recommend bacteriological screening for pregnant women, because of the low sensitivity of antenatal screening in predicting whether the neonate will have GBS, but advise that a patient information leaflet be given to pregnant women. However, a Healthcare Safety Investigation Branch (HSIB) 2019 learning report found that only 50% of trusts and health boards reported giving GBS information leaflets to all pregnant mothers. This audit therefore aimed to assess current practices of information dissemination about GBS at Chelsea & Westminster (C&W) Hospital. Methodology: A quantitative cross-sectional study was carried out using a questionnaire based on the RCOG GBS guidelines and the HSIB learning report. The study was conducted in antenatal clinics at Chelsea & Westminster Hospital from 29th January 2021 to 14th February 2021, with twenty-two practicing obstetricians and midwives participating in the survey. The main outcome measure was the proportion of obstetricians and midwives who disseminate information about GBS to pregnant women, and the reasons why they do or do not. Results: 22 obstetricians and midwives responded, providing 18 complete responses, of which 12 were from obstetricians and 6 from midwives. Only 17% of clinical staff routinely inform all pregnant women about GBS, and they do so at varying stages of pregnancy, with an equal split across the first, second, and third trimesters.
The primary reason for not informing women about GBS was influenced by three key factors: it was deemed relevant only for patients at high risk of GBS, there was a lack of time in clinic appointments, and no routine NHS screening is available. Interestingly, 58% of staff in the antenatal clinic believe it is necessary to inform all women about GBS and its importance. Conclusion: It is vital for obstetricians and midwives to inform all pregnant women about GBS because of the high prevalence of incidental carriage in the population and the harmful effects it can cause for neonates. Even though most clinicians believe it is important to inform all pregnant women about GBS, most do not. To ensure that RCOG and HSIB recommendations are followed, we recommend that women be given this information at 28 weeks' gestation in the antenatal clinic. Proposed implementations include an information leaflet incorporated into the Mum and Baby app, an informative video, and an end-to-end digital clinic documentation prompt for this information sharing.

Keywords: group B Streptococcus, early onset sepsis, antenatal care, neonatal morbidity, GBS

Procedia PDF Downloads 163
1565 Macroscopic Support Structure Design for the Tool-Free Support Removal of Laser Powder Bed Fusion-Manufactured Parts Made of AlSi10Mg

Authors: Tobias Schmithuesen, Johannes Henrich Schleifenbaum

Abstract:

The additive manufacturing process laser powder bed fusion (LPBF) offers many advantages over conventional manufacturing processes. For example, almost arbitrarily complex parts can be produced, such as topologically optimized lightweight parts, which would be inconceivable with conventional manufacturing processes. A major challenge posed by the LPBF process, however, is, in most cases, the need to use and remove support structures on critically inclined part surfaces (α < 45° relative to the substrate plate). These are mainly used for the dimensionally accurate mapping of part contours and to reduce distortion by absorbing process-related internal stresses. Furthermore, they serve to transfer the process heat to the substrate plate and are therefore indispensable for the LPBF process. A major obstacle to the economical use of the LPBF process in industrial process chains is currently still the high manual effort involved in removing support structures. According to the state of the art (SoA), the parts are usually treated with simple hand tools (e.g., pliers, chisels) or by machining (e.g., milling, turning). New, automatable approaches are the removal of support structures by means of wet chemical ablation and thermal deburring. According to the SoA, support structures are essentially adapted to the LPBF process and not to potential post-processing steps. The aim of this study is the determination of support structure designs that are adapted to the mentioned post-processing approaches. In a first step, the essential boundary conditions for complete removal by means of the respective approaches are identified. Afterward, a representative demonstrator part with various macroscopic support structure designs will be LPBF-manufactured and tested with regard to complete powder and support removability. Finally, based on the results, potentially suitable support structure designs for the respective approaches will be derived.
The investigations are carried out on the example of the aluminum alloy AlSi10Mg.

Keywords: additive manufacturing, laser powder bed fusion, laser beam melting, selective laser melting, post-processing, tool-free, wet chemical ablation, thermal deburring, aluminum alloy, AlSi10Mg

Procedia PDF Downloads 77
1564 Learners' Perception of Digitalization of Medical Education in a Low Middle-Income Country – A Case Study of the Lecturio Platform

Authors: Naomi Nathan

Abstract:

Introduction: The digitalization of medical education can revolutionize how medical students learn and interact with the medical curriculum across contexts. With the increasing availability of internet and mobile connectivity in low- and middle-income countries (LMICs), online medical education platforms and digital learning tools are becoming more widely available, providing new opportunities for learners to access high-quality medical education and training. However, the adoption and integration of digital technologies in medical education in LMICs is a complex process influenced by various factors, including learners' perceptions of and attitudes toward digital learning. In Ethiopia, the adoption of digital platforms for medical education has been slow, with traditional face-to-face teaching methods still the norm. However, as access to technology improves and more universities adopt digital platforms, it is crucial to understand how medical students perceive this shift. Methodology: This study investigated medical students' perception of the digitalization of medical education in relation to their access to the Lecturio Digital Medical Education Platform through a capacity-building project. 740 medical students from over 20 medical universities participated in the study. The students were surveyed using a questionnaire that covered their attitudes toward the digitalization of medical education, their frequency of use of the digital platform, and the benefits and challenges they perceived. Results: The results showed that most medical students had a positive attitude toward the digitalization of medical education. The most commonly cited benefit was the convenience and flexibility of accessing course material online. Many students also reported that they found the platform more interactive and engaging, leading to a more meaningful learning experience. The study also identified several challenges medical students faced when using the platform.
The most commonly reported challenge was the lack of reliable internet access, which made it difficult for students to access content consistently. Overall, the results of this study suggest that medical students in Ethiopia have a positive perception of the digitalization of medical education. Over 97% of students expressed a need for continued access to the Lecturio platform throughout their studies. Conclusion: Significant challenges still need to be addressed to fully realize the benefits of the Lecturio digital platform. Universities, relevant ministries, and other stakeholders must work together to address these challenges to ensure that medical students can fully participate in and benefit from digitalized medical education, sustainably and effectively.

Keywords: digital medical education, EdTech, LMICs, e-learning

Procedia PDF Downloads 77
1563 Food Foam Characterization: Rheology, Texture and Microstructure Studies

Authors: Rutuja Upadhyay, Anurag Mehra

Abstract:

Solid food foams/cellular foods are colloidal systems that impart structure, texture, and mouthfeel to many food products such as bread, cakes, ice cream, and meringues. Their heterogeneous morphology makes the quantification of structure/mechanical-property relationships complex. The porous structure of solid food foams is highly influenced by the processing conditions, ingredient composition, and their interactions. Sensory perception of food foams depends on bubble size, shape, orientation, quantity, and distribution, which determine the texture of foamed foods. The state and structure of the solid matrix control the deformation behavior of the food, such as elasticity/plasticity or fracture, which in turn affects the force-deformation curves. The obvious step in obtaining the relationship between the mechanical properties and the porous structure is to quantify them simultaneously. Here, we study food foams such as bread dough, baked bread, and steamed rice cakes to determine the link between ingredients and the effect of each on the rheology, microstructure, bubble size, and texture of the final product. Dynamic rheometry (small-amplitude oscillatory shear, SAOS), confocal laser scanning microscopy, flatbed scanning, image analysis, and texture profile analysis (TPA) have been used to characterize the foods studied. In all the above systems, a common observation was that when the mean bubble diameter is smaller, the product is harder, as evidenced by an increase in the storage and loss moduli (G′, G″), whereas when the mean bubble diameter is large, the product is softer, with lower moduli values (G′, G″). The bubble size distribution also affects the texture of foods. It was found that hydrocolloids (xanthan gum, alginate) promote a more uniform bubble size distribution in bread doughs. Bread baking experiments were done to study the rheological changes and mechanisms involved in the structural transition of dough to crumb.
Steamed rice cakes with xanthan gum (XG) addition at 0.1% concentration resulted in lower hardness with a narrower pore size distribution and larger mean pore diameter. Thus, control of bubble size could be an important parameter defining final food texture.

Keywords: food foams, rheology, microstructure, texture

Procedia PDF Downloads 318
1562 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe

Authors: Ahmad Haidar

Abstract:

Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. 
In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation.
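As a rough illustration of the kind of panel data model the abstract describes, the sketch below implements a within (fixed-effects) estimator on simulated country-year data. All variable names and numbers are invented for the sketch and are not the study's actual specification or data:

```python
import numpy as np
import pandas as pd

# Hypothetical within (fixed-effects) estimator for
#   unemp_it = beta * ai_it + alpha_i + eps_it,
# where alpha_i is a country fixed effect. Data are simulated.
rng = np.random.default_rng(0)
countries = np.repeat(np.arange(10), 20)      # 10 countries, 20 years each
ai = rng.uniform(0, 1, size=200)              # AI adoption index (invented)
alpha = np.repeat(rng.normal(0, 2, 10), 20)   # country fixed effects
unemp = 5.0 - 1.5 * ai + alpha + rng.normal(0, 0.1, 200)

df = pd.DataFrame({"country": countries, "ai": ai, "unemp": unemp})
# Demean within each country to sweep out the fixed effects
demeaned = df.groupby("country")[["ai", "unemp"]].transform(lambda x: x - x.mean())
beta = (demeaned["ai"] @ demeaned["unemp"]) / (demeaned["ai"] @ demeaned["ai"])
print(round(beta, 2))  # should recover a value close to the true effect of -1.5
```

The within transformation is only one common way to estimate such a model; dedicated panel packages would add standard errors, time effects, and dynamic terms.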

Keywords: artificial intelligence, unemployment, macroeconomic analysis, European labor market

Procedia PDF Downloads 61
1561 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System

Authors: Abisha Isaac Mohanlal

Abstract:

Amid all the suspicion and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized the conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, Artificial Intelligence has crept into all legs of modern-day legal services. Its impact has largely been felt by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to keep pace with the growth of technology in law. Researchers predict that the future of legal services will belong to Artificial Intelligence and that the age of human lawyers will soon pass. But as far as the Judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring Natural Intelligence over Artificial Intelligence. Since judicial decision-making involves many unstructured and rather unprecedented situations that have no single correct answer, and looming questions of legal interpretation arise in most cases, discretion and Emotional Intelligence play an unavoidable role. Added to that, there are several ethical, moral, and policy issues to be confronted before permitting the intrusion of Artificial Intelligence into the judicial system. As of today, the human judge is the unrivalled master of most judicial systems around the globe. Yet, scientists of Artificial Intelligence claim that robot judges can replace human judges, irrespective of how daunting the complexity of the issues and how sophisticated the cognitive competence required.
They go on to contend that even if the system is too rigid to allow robot judges to substitute for human judges in the near future, Artificial Intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, and case retrieval, and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that Artificial Intelligence has to overcome in order to successfully enter the human-dominated judicial sphere, and by critically evaluating the potential difference it would make to the system of justice delivery, the author argues that the penetration of Artificial Intelligence into the Judiciary could be enhancing and reparative, if not fully transformative.

Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery

Procedia PDF Downloads 210
1560 The Effect of Metal-Organic Framework Pore Size to Hydrogen Generation of Ammonia Borane via Nanoconfinement

Authors: Jing-Yang Chung, Chi-Wei Liao, Jing Li, Bor Kae Chang, Cheng-Yu Wang

Abstract:

The chemical hydride ammonia borane (AB, NH3BH3) has drawn attention in hydrogen energy research for its high theoretical gravimetric capacity (19.6 wt%). Nevertheless, elevated AB decomposition temperatures (Td) and unwanted byproducts are the main hurdles to practical application. It has been reported that the byproducts and Td can be reduced with the nanoconfinement technique, in which AB molecules are confined in porous materials such as porous carbon, zeolites, metal-organic frameworks (MOFs), etc. Although nanoconfinement empirically lowers the hydrogen generation temperature of AB, the theoretical mechanism is debatable. A low Td was reported in AB@IRMOF-1 (Zn4O(BDC)3, BDC = benzenedicarboxylate), where the Zn atoms form a closed metal-cluster secondary building unit (SBU) with no exposed active sites. Beyond nanosizing the hydride, it has also been observed that catalyst addition facilitates AB decomposition, as in composites of AB with Li-catalyzed carbon CMK-3, the MOF JUC-32-Y with exposed Y3+, etc. It is believed that nanosized AB is critical for lowering Td, while active sites eliminate byproducts. Nonetheless, some researchers have claimed that the catalytic sites, not the hydride size, are the critical factor in reducing Td. One group physically ground AB with ZIF-8 (a zeolitic imidazolate framework, Zn(2-methylimidazolate)2) and found a similarly reduced Td, even though the AB molecules were not ‘confined’ or formed into nanoparticles by hand grinding; this suggests that the catalytic reaction, not nanoconfinement, promotes AB dehydrogenation. In this research, we explored possible criteria for the hydrogen production temperature of AB nanoconfined in MOFs with different pore sizes and active sites. MOFs with Zn (IRMOF), Zr (UiO), and Al (MIL-53) metal SBUs, combined with various organic ligands (BDC and BPDC; BPDC = biphenyldicarboxylate), were modified with AB.
An excess of MOF was used so that the AB size was constrained in the micropores, estimated by revisiting the Horvath-Kawazoe model. AB dissolved in methanol was added to the MOF crystallites at a MOF pore volume to AB ratio of 4:1, and the slurry was dried under vacuum to collect the AB@MOF powders. With TPD-MS (temperature-programmed desorption with mass spectroscopy), we observed that Td was reduced with smaller MOF pores. For example, it was reduced from 100°C to 64°C when the MOF micropore was ~1 nm, while it remained ~90°C for pore sizes up to 5 nm. The behavior of Td as a function of AB crystallite radius obeys thermodynamics when the Gibbs free energy of AB decomposition is zero, and no obvious correlation with the metal type was observed. In conclusion, we discovered that the Td of AB is proportional to the reciprocal of the MOF pore size, and this effect is possibly stronger than that of the active sites.
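The reported trend can be sketched as a linear relation in the reciprocal pore size, Td ≈ a + b/d. The check below fits that form to the two data points quoted in the abstract (Td = 64 °C at d ≈ 1 nm and Td ≈ 90 °C at d = 5 nm); it is an illustration of the stated proportionality, not the authors' actual regression:

```python
import numpy as np

# Fit Td = a + b * (1/d) through the two pore sizes quoted in the abstract.
d = np.array([1.0, 5.0])    # MOF pore size in nm
Td = np.array([64.0, 90.0]) # observed AB decomposition temperature in °C
b, a = np.polyfit(1.0 / d, Td, 1)  # slope b, intercept a
print(round(a, 1), round(b, 1))    # a = 96.5 (near the 100 °C bulk value), b = -32.5
```

The intercept (the d → ∞ limit) falling near the 100 °C bulk decomposition temperature is consistent with the abstract's conclusion that Td scales with the reciprocal of the pore size.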

Keywords: ammonia borane, chemical hydride, metal-organic framework, nanoconfinement

Procedia PDF Downloads 171
1559 Unspoken Playground Rules Prompt Adolescents to Avoid Physical Activity: A Focus Group Study of Constructs in the Prototype Willingness Model

Authors: Catherine Wheatley, Emma L. Davies, Helen Dawes

Abstract:

The health benefits of exercise are widely recognised, but numerous interventions have failed to halt a sharp decline in physical activity during early adolescence. Many such projects are underpinned by the Theory of Planned Behaviour, yet this model of rational decision-making leaves variance in behaviour unexplained. This study investigated whether the Prototype Willingness Model, which proposes a second, reactive decision-making path to account for spontaneous responses to the social environment, has the potential to improve understanding of adolescent exercise behaviour in school, by exploring constructs in the model with young people. PE teachers in 4 Oxfordshire schools each nominated 6 pupils who were active in school, and 6 who were inactive, to participate in the study. Of these, 45 (22 male) aged 12-13 took part in 8 focus group discussions. These were transcribed and subjected to deductive thematic analysis to search for themes relating to the Prototype Willingness Model. Participants appeared to make rational decisions about commuting to school or attending sports clubs, but spontaneous choices to be inactive during both break and PE. These reactive decisions seemed influenced by a social context described as more ‘judgmental’ than primary school, characterised by anxiety about physical competence, negative peer evaluation, and inactive playground norms. Participants described their images of typical active and inactive adolescents: the active images included negative social characteristics such as ‘show-off’. There was little concern about the long-term risks of inactivity, although participants seemed to recognise that physical activity is healthy. The Prototype Willingness Model might more fully explain young adolescents’ physical activity in school than rational behavioural models, indicating potential for physical activity interventions that target social anxieties arising from the changing playground environment.
Images of active types could be more complex than earlier research has suggested, and their negative characteristics might influence willingness to be active.

Keywords: adolescence, physical activity, prototype willingness model, school

Procedia PDF Downloads 329
1558 Information-Controlled Laryngeal Feature Variations in Korean Consonants

Authors: Ponghyung Lee

Abstract:

This study investigates variations in Korean consonants that center on the laryngeal features of the sounds concerned, to the exclusion of others. Our fundamental premise is that the weak contrast associated with these segments may account for the oscillation of the status quo of the consonants concerned. What is more, we assume that an array of notions measuring the communicative efficiency of linguistic units is significantly influential in triggering those variations. To this end, we computed the surprisal, entropic contribution, and relative contrastiveness associated with Korean obstruent consonants. What we found is that the Information-theoretic perspective is compelling enough to support our approach to a considerable extent. That is, the variant realizations, chronological and stylistic, prove to be profoundly affected by the set of Information-theoretic factors enumerated above. For the biblical proper names, we use the Georgetown University CQP Web-Bible corpora. From 8 texts (4 from the Old Testament and 4 from the New Testament) among the total of 64 texts, we extracted 199 samples. We address the issue of laryngeal feature variations in Korean obstruent consonants under the presumption that the variations stem from the weak contrast among the triad of laryngeal feature manifestations. The variants emerge from diverse sources, chronologically and stylistically: Christian biblical texts, ordinary casual speech, shifts in loanword adaptation over time, and ideophones. To discuss what they are really like from the perspective of Information Theory, it is necessary to look closely at the data. Among them, the massive changes in the loanword adaptation of proper nouns during the centennial history of Korean Christianity draw our special attention.
We found 199 types of initially capitalized words among 45,528 word tokens, which account for around 5% of the total 901,701 word tokens (12,786 word types) in the Georgetown University CQP Web-Bible corpora. We focus on shifts in the laryngeal features of word-initial consonants, which are traceable through two distinct versions of the Korean Bible: one published in the 1960s for Protestants, the other in the 1990s for the Catholic Church. Of these proper names, we closely traced the adaptation of the plain obstruents, e.g., /b, d, g, s, ʤ/, in the sources. The results show that as many as 41% of the extracted proper names show variation: 37% in terms of aspiration and 4% in terms of tensing. This study set out to shed light on the question: to what extent can we attribute the variations in the laryngeal features of Korean obstruent consonants to the communicative aspects of linguistic activities? In this vein, the concerted effects of the triad of surprisal, entropic contribution, and relative contrastiveness can be credited with the ups and downs in the feature specification, despite continuing contention over the role of surprisal.
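The Information-theoretic quantities the study relies on have standard definitions, sketched below for a hypothetical probability distribution over the three laryngeal variants; the probabilities are invented for illustration and are not the study's corpus estimates:

```python
import math

# Invented distribution over laryngeal variants of a Korean obstruent.
p = {"plain": 0.5, "aspirated": 0.3, "tense": 0.2}

def surprisal(prob):
    """Surprisal in bits: -log2 p. Rarer outcomes are more surprising."""
    return -math.log2(prob)

# Entropic contribution of each outcome is p * (-log2 p);
# summing the contributions gives the entropy of the distribution.
contrib = {k: v * surprisal(v) for k, v in p.items()}
entropy = sum(contrib.values())
print(round(surprisal(p["plain"]), 3))  # 1.0 bit for a p = 0.5 outcome
print(round(entropy, 3))
```

Relative contrastiveness is measure-specific and depends on the study's operationalization, so it is not reproduced here.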

Keywords: entropic contribution, laryngeal feature variation, relative contrastiveness, surprisal

Procedia PDF Downloads 113
1557 Knowledge Co-Production on Future Climate-Change-Induced Mass-Movement Risks in Alpine Regions

Authors: Elisabeth Maidl

Abstract:

The interdependence of climate change and natural hazards entails large uncertainties regarding future risks. Regional stakeholders, experts in natural hazard management, and scientists hold specific knowledge of, and mental models of, such risks. This diversity of views makes it difficult to find common and broadly accepted prevention measures. If the specific knowledge of these types of actors is shared in an interactive knowledge production process, a broader and common understanding of complex risks becomes possible, allowing agreement on long-term solution strategies. Previous studies on mental models confirm that actors with specific vulnerabilities perceive different aspects of a topic and accordingly prefer different measures. In bringing these perspectives together, there is potential to reduce uncertainty and to close blind spots in solution finding. However, studies that examine the mental models of regional actors regarding concrete future mass-movement risks have been lacking so far. This project tests and evaluates the feasibility of knowledge co-creation for the anticipatory prevention of climate-change-induced mass-movement risks in the Alps. As a key element, the mental models of the three included groups of actors are compared. Integrated into the research program Climate Change Impacts on Alpine Mass Movements (CCAMM2), the project is carried out in two Swiss mountain regions. The project is structured in four phases: 1) a preparatory phase, in which the participants are identified; 2) a baseline phase, in which qualitative interviews and a quantitative pre-survey are conducted with the actors; 3) a knowledge co-creation phase, in which the actors have a moderated exchange meeting and a participatory modelling workshop on specific risks in the region; and 4) finally, a public information event.
Results show that participants' mental models are shaped by their place of origin, profession, beliefs, and values, which result in distinct narratives on climate change and hazard risks. Further, the more intensively participants interact with each other, the more likely they are to change their views. This provides empirical evidence on how changes in opinions and mindsets can be induced and fostered.

Keywords: climate change, knowledge co-creation, participatory process, natural hazard risks

Procedia PDF Downloads 52
1556 Time to Second Line Treatment Initiation Among Drug-Resistant Tuberculosis Patients in Nepal

Authors: Shraddha Acharya, Sharad Kumar Sharma, Ratna Bhattarai, Bhagwan Maharjan, Deepak Dahal, Serpahine Kaminsa

Abstract:

Background: Drug-resistant (DR) tuberculosis (TB) continues to be a threat in Nepal, with an estimated 2,800 new cases every year. The treatment of DR-TB with second-line TB drugs is complex and takes longer, with a comparatively lower treatment success rate than drug-susceptible TB. Delays in treatment initiation for DR-TB patients may further result in unfavorable treatment outcomes and increased transmission. This study therefore aims to determine the median time to second-line treatment initiation among patients diagnosed with rifampicin-resistant (RR) TB and to assess the proportion of treatment delays among the various types of DR-TB cases. Method: A retrospective cohort study was done using national routine electronic data (the DR-TB and TB Laboratory Patient Tracking System, DHIS2) on drug-resistant tuberculosis patients between January 2020 and December 2022. The time to treatment initiation was computed as the days from first diagnosis of RR TB through the Xpert MTB/RIF test to enrollment on second-line treatment. Treatment delay (>7 days after diagnosis) was calculated. Results: Among the total RR TB cases (N=954) diagnosed via Xpert nationwide, 61.4% were enrolled under the shorter treatment regimen (STR), 33.0% under the longer treatment regimen (LTR), 5.1% under Pre-extensively drug-resistant TB (Pre-XDR) treatment, and 0.4% under extensively drug-resistant TB (XDR) treatment. Among these cases, the median time from diagnosis to treatment initiation was 6 days (IQR: 2-15.8). The median time was 5 days (IQR: 2.0-13.3) for STR, 6 days (IQR: 3.0-15.0) for LTR, 30 days (IQR: 5.5-66.8) for Pre-XDR, and 4 days (IQR: 2.5-9.0) for XDR TB cases. Overall treatment delay (>7 days after diagnosis) was observed in 42.4% of the patients; Pre-XDR cases contributed substantially to treatment delay (72.0%), followed by LTR (43.6%), STR (39.1%), and XDR (33.3%).
Conclusion: Timely diagnosis and prompt treatment initiation remain a fundamental focus of the National TB Program. The findings of the study, however, suggest gaps in the timeliness of treatment initiation for drug-resistant TB patients, which could lead to adverse treatment outcomes. Moreover, there is an alarming delay in second-line treatment initiation for Pre-XDR TB patients. This study therefore generates evidence on existing gaps in treatment initiation and highlights the need for specific policies and interventions to create an effective linkage between RR TB diagnosis and enrollment on second-line TB treatment, with intensified efforts from health providers for follow-up and the expansion of more decentralized, adequate, and accessible diagnostic and treatment services for DR-TB, especially for Pre-XDR TB cases, given the long treatment delays observed.
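The delay metrics reported above (median, IQR, and the proportion delayed beyond 7 days) can be computed as sketched below; the sample values are invented for illustration and are not the study's data:

```python
import statistics

# Hypothetical days from Xpert MTB/RIF diagnosis to second-line
# treatment start for ten patients (invented values).
days_to_treatment = [2, 3, 4, 5, 6, 6, 8, 12, 15, 30]

median = statistics.median(days_to_treatment)
# quantiles(n=4) returns the three quartile cut points Q1, Q2, Q3.
q1, _, q3 = statistics.quantiles(days_to_treatment, n=4)
# Treatment delay defined as in the study: > 7 days after diagnosis.
delayed = sum(1 for d in days_to_treatment if d > 7)
delay_pct = 100 * delayed / len(days_to_treatment)
print(median, (q1, q3), delay_pct)
```

Note that `statistics.quantiles` defaults to the exclusive method; the inclusive method (or a spreadsheet's quartile function) can give slightly different IQR bounds on small samples.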

Keywords: drug-resistant, tuberculosis, treatment initiation, Nepal, treatment delay

Procedia PDF Downloads 67
1555 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are the primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the meso-scale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows, so that their interactions and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites.
For the meso-scale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method, and the other making use of an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.
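As a minimal illustration of the homogenization idea underlying the macro- and meso-scale treatments, the classical Voigt and Reuss rules of mixtures bound the effective modulus of a fiber/matrix mixture from the constituent properties. The numbers below are generic carbon-fiber/epoxy placeholders, not data from this work.

```python
def voigt_modulus(e_fiber, e_matrix, vf):
    """Voigt (iso-strain) upper bound on the effective Young's modulus."""
    return vf * e_fiber + (1.0 - vf) * e_matrix

def reuss_modulus(e_fiber, e_matrix, vf):
    """Reuss (iso-stress) lower bound on the effective Young's modulus."""
    return 1.0 / (vf / e_fiber + (1.0 - vf) / e_matrix)

# Illustrative constituent moduli in GPa and a 60% fiber volume fraction
e_upper = voigt_modulus(230.0, 3.5, 0.6)
e_lower = reuss_modulus(230.0, 3.5, 0.6)
```

Real meso-scale homogenization of woven tows is far more involved (anisotropy, tow curvature, damage), but any effective property it produces should fall between bounds of this kind.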

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 333
1554 Harnessing the Power of Mixed Ligand Complexes: Enhancing Antimicrobial Activities with Thiosemicarbazones

Authors: Sakshi Gupta, Seema Joshi

Abstract:

Thiosemicarbazones (TSCs) have garnered significant attention in coordination chemistry due to their versatile coordination modes and pharmacological properties. Mixed ligand complexes of TSCs represent a promising area of research, offering enhanced antimicrobial activities compared to their parent compounds. This review provides an overview of the synthesis, characterization, and antimicrobial properties of mixed ligand complexes incorporating thiosemicarbazones. The synthesis of mixed ligand complexes typically involves the reaction of a metal salt with TSC ligands and additional ligands, such as nitrogen- or oxygen-based ligands. Various transition metals, including copper, nickel, and cobalt, have been employed to form mixed ligand complexes with TSCs. Characterization techniques such as spectroscopy, X-ray crystallography, and elemental analysis are commonly utilized to confirm the structures of these complexes. One of the key advantages of mixed ligand complexes is their enhanced antimicrobial activity compared to pure TSC compounds. The synergistic effect between the TSC ligands and additional ligands contributes to increased efficacy, possibly through improved metal-ligand interactions or enhanced membrane permeability. Furthermore, mixed ligand complexes offer the potential for selective targeting of microbial species while minimizing toxicity to mammalian cells. This selectivity arises from the specific interactions between the metal center, TSC ligands, and biological targets within microbial cells. Such targeted antimicrobial activity is crucial for developing effective treatments with minimal side effects. Moreover, the versatility of mixed ligand complexes allows for the design of tailored antimicrobial agents with optimized properties. By varying the metal ion, TSC ligands, and additional ligands, researchers can fine-tune the physicochemical properties and biological activities of these complexes. 
This tunability opens avenues for the development of novel antimicrobial agents with improved efficacy and reduced resistance. In conclusion, mixed ligand complexes of thiosemicarbazones represent a promising class of compounds with potent antimicrobial activities. Further research in this field holds great potential for the development of novel therapeutic agents to combat microbial infections effectively.

Keywords: metal complex, thiosemicarbazones, mixed ligand, selective targeting, antimicrobial activity

Procedia PDF Downloads 42
1553 Petrogenesis and Tectonic Implication of the Oligocene Na-Rich Granites from the North Sulawesi Arc, Indonesia

Authors: Xianghong Lu, Yuejun Wang, Chengshi Gan, Xin Qian

Abstract:

The North Sulawesi Arc, located in eastern Indonesia south of the Celebes Sea, forms the northern arm of the K-shaped Sulawesi Island and has had a complex tectonic history since the Cenozoic due to the convergence of three plates (the Eurasian, Indo-Australian, and Pacific plates). Published rock records provide imprecise chronology, mostly from K-Ar dating, and sparse geochemical data, which limits understanding of the regional tectonic setting. This study presents detailed zircon U-Pb geochronological, Hf-O isotope, and whole-rock geochemical analyses of Na-rich granites from the North Sulawesi Arc. Zircon U-Pb analyses of three representative samples yield weighted mean ages of 30.4 ± 0.4 Ma, 29.5 ± 0.2 Ma, and 27.3 ± 0.4 Ma, respectively, revealing Oligocene magmatism in the North Sulawesi Arc. The samples have high Na₂O and low K₂O contents with high Na₂O/K₂O ratios, classifying them as low-K tholeiitic Na-rich granites. They are characterized by high SiO₂ contents (75.05-79.38 wt.%) and low MgO contents (0.07-0.91 wt.%) and show arc-like trace element signatures. They have low (⁸⁷Sr/⁸⁶Sr)i ratios (0.7044-0.7046), high εNd(t) values (+5.1 to +6.6), high zircon εHf(t) values (+10.1 to +18.8), and low zircon δ¹⁸O values (3.65-5.02). Their Pb isotopic compositions show an Indian-Ocean affinity, with ²⁰⁶Pb/²⁰⁴Pb ratios of 18.16-18.37, ²⁰⁷Pb/²⁰⁴Pb ratios of 15.56-15.62, and ²⁰⁸Pb/²⁰⁴Pb ratios of 38.20-38.66. These geochemical signatures suggest that the Oligocene Na-rich granites of the North Sulawesi Arc formed by partial melting of juvenile oceanic crust metasomatized by sediment-derived fluids in a subduction setting and support an intra-oceanic arc origin.
Combined with published studies, the emergence of extensive calc-alkaline felsic arc magmatism can be traced back to the Early Oligocene, subsequent to the Eocene back-arc basalts (BAB) that share similarities with the Celebes Sea basement. Since the opening of the Celebes Sea started in the Eocene (42-47 Ma) and ceased by the Early Oligocene (~32 Ma), the geodynamic mechanism behind the formation of the Oligocene Na-rich granites of the North Sulawesi Arc might relate to the subduction of the Indian Ocean.
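The weighted mean ages quoted above are conventionally computed as inverse-variance weighted means of the single-grain analyses. A generic sketch of that calculation, using illustrative grain ages rather than the paper's data:

```python
def weighted_mean_age(ages, errors):
    """Inverse-variance weighted mean age and its standard error.

    ages   -- single-grain ages (e.g. in Ma)
    errors -- corresponding 1-sigma uncertainties
    """
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
    std_err = (1.0 / sum(weights)) ** 0.5
    return mean, std_err

# Hypothetical single-grain U-Pb ages (Ma) with equal 1-sigma errors
mean_age, age_err = weighted_mean_age([30.2, 30.5, 30.6], [0.4, 0.4, 0.4])
```

With equal uncertainties the weighted mean reduces to the plain average, and the standard error shrinks as 1/√n relative to a single grain's error.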

Keywords: North Sulawesi Arc, Oligocene, Na-rich granites, in-situ zircon Hf-O analysis, intra-oceanic origin

Procedia PDF Downloads 58
1552 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammoniacal nitrogen under anaerobic conditions, with nitrite as the electron acceptor, without the addition of an external carbon source. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for the biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four 5 L AHRs inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH₄-N and NO₂-N in a 1:1 ratio at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/L. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m³·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal along with a 72% reduction in the sludge washout rate. This may be attributed to the filter media acting as a mechanical sieve, reducing the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high-rate bioreactors. The effluent nitrate concentration, one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹.
Model validation revealed that the Grau second-order model was more precise, predicting effluent nitrogen concentration with the least error (1.84±10%). A new mass-balance-based mathematical model was developed to predict N₂ gas production in the AHR. The mass balance model derived from total nitrogen showed a high correlation (R²=0.986) and predicted N₂ gas with the least error of precision (0.12±8.49%). SEM study of the biomass indicated a heterogeneous population of cocci and rod-shaped bacteria with average diameters of 1.2-1.5 µm. Owing to its enhanced nitrogen removal efficiency (NRE), meagre effluent nitrate production, and ability to retain high biomass, the AHR proved to be a highly competitive reactor configuration for treating nitrogen-laden wastewater.
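The two kinetic models compared above have simple closed forms for a completely mixed reactor. The sketch below shows one common formulation of each; the Grau coefficients (a, b) are illustrative placeholders, since the fitted values are not given in the abstract, while the first-order check uses the reported k₁ = 13.0 d⁻¹, HRT = 1 d, and 1200 mg/L influent total nitrogen.

```python
def first_order_effluent(s_in, k1, hrt):
    # Steady-state CSTR mass balance with first-order removal:
    # (s_in - s_e) / HRT = k1 * s_e  =>  s_e = s_in / (1 + k1 * HRT)
    return s_in / (1.0 + k1 * hrt)

def grau_second_order_effluent(s_in, a, b, hrt):
    # Grau second-order substrate removal model:
    # s_e = s_in * (1 - HRT / (a + b * HRT)); a, b are fitted constants
    return s_in * (1.0 - hrt / (a + b * hrt))

# Reported values: k1 = 13.0 d^-1, optimal HRT = 1 d, 1200 mg/L total N
s_e = first_order_effluent(1200.0, 13.0, 1.0)
removal = 1.0 - s_e / 1200.0   # fraction of nitrogen removed
```

The first-order form predicts roughly 93% removal at these conditions, in the same range as the reported 95.1%, which is consistent with the abstract's finding that the Grau second-order fit is the more precise of the two.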

Keywords: anammox, filter media, kinetics, nitrogen removal

Procedia PDF Downloads 370
1551 Development of a Framework for Assessment of Market Penetration of Oil Sands Energy Technologies in Mining Sector

Authors: Saeidreza Radpour, Md. Ahiduzzaman, Amit Kumar

Abstract:

Alberta’s mining sector consumed 871.3 PJ in 2012, which is 67.1% of the energy consumed by the industrial sector and about 40% of all the energy consumed in the province of Alberta. Natural gas, petroleum products, and electricity supplied 55.9%, 20.8%, and 7.7%, respectively, of the total energy use in this sector. Oil sands mining and upgrading to crude oil make up most of the mining-sector energy activities in Alberta. Crude oil is produced from the oil sands either by in situ methods or by the mining and extraction of bitumen from oil sands ore. In this research, the factors affecting oil sands production were assessed, and a framework was developed for the market penetration of new efficient technologies in this sector. Oil sands production is a complex function of many different factors, broadly categorized into technical, economic, political, and global clusters. The statistical analysis developed and implemented in this research ranks the importance of the key factors affecting oil sands production in Alberta as: global energy consumption (94% consistency), global crude oil price (86% consistency), and crude oil exports (80% consistency). A framework for modeling oil sands energy technologies’ market penetration (OSETMP) has been developed to cover related technical, economic, and environmental factors in this sector. It is assumed that the impact of political and social constraints is reflected in the model through changes in the global oil price or the Canadian crude oil price.
The market penetration framework assesses the market shares of novel in situ mining technologies with low energy and water use, including: 1) partial upgrading, 2) liquid addition to steam to enhance recovery (LASER), 3) solvent-assisted process (SAP), also called solvent-cyclic steam-assisted gravity drainage (SC-SAGD), 4) cyclic solvent, 5) heated solvent, 6) wedge well, 7) enhanced modified steam and gas push (EMSAGP), 8) electro-thermal dynamic stripping process (ET-DSP), 9) Harris electro-magnetic heating applications (EMHA), and 10) paraffin froth separation. The results of the study will show the penetration profiles of these technologies over a long-term planning horizon.
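Market penetration frameworks of this kind typically represent technology uptake with an S-shaped diffusion curve. A minimal logistic (Fisher-Pry style) sketch follows; the share ceiling, growth rate, and half-saturation year are chosen purely for illustration and are not parameters from the OSETMP framework.

```python
import math

def logistic_share(year, half_year, growth, ceiling):
    """S-shaped diffusion curve: a technology's market share vs. time.

    ceiling   -- saturation share the technology eventually captures
    growth    -- steepness of uptake (per year)
    half_year -- year at which half the ceiling is reached
    """
    return ceiling / (1.0 + math.exp(-growth * (year - half_year)))

# Hypothetical technology capped at a 30% share, half-saturated in 2035
profile = {y: logistic_share(y, 2035, 0.25, 0.30) for y in range(2020, 2051, 5)}
```

Fitting such curves to historical adoption data (or analogous technologies) is one standard way to build the long-term penetration profiles the abstract refers to.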

Keywords: oil sands, energy technologies, diffusion models, market penetration

Procedia PDF Downloads 318
1550 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood, and it is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of a miniaturized SRM during its conceptual design phase. First, the design variables and objectives are constrained by a lumped parameter model (LPM) of the SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population-based search strategy. Although the calculation time for a single LPM simulation is usually less than that of a CFD simulation, the number of function evaluations (NFEs) in MOEAs is usually large, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its simplifying assumptions, so CFD simulations or experiments are required to compare and verify the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is therefore a lengthy process, and its results are not precise enough. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model trained on 3D numerical simulations, and the original LPM is replaced by the surrogate model in the design process. Each case uses the same MOEAs; the calculation times of the two models are compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model provides faster and more accurate parameter prediction for an initial design scheme.
Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
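The surrogate idea described above can be sketched in a few lines: sample the expensive model once, fit a small network to those samples, then query the cheap network inside the optimizer. Everything below (the toy "simulation" function, network size, and training settings) is illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an expensive simulation: a smooth response over two
# normalized design variables. Purely illustrative.
def expensive_simulation(x):
    return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

# Sample the design space once (the costly step) ...
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = expensive_simulation(X)

# ... then train a one-hidden-layer network by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    pred = (H @ W2 + b2).ravel()        # surrogate prediction
    err = pred - y                      # gradient of 1/2 * MSE
    gW2 = H.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dH = (err[:, None] @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# The trained network is now a cheap stand-in for the simulation,
# suitable for the many evaluations an MOEA performs.
mse = float(np.mean((pred - y) ** 2))
```

In practice the training data would come from the 3D numerical simulations the paper describes, and the surrogate's accuracy would be checked against held-out simulation runs before it replaces the LPM in the loop.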

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 74
1549 The Development of Documentary Filmmaking in Early Independent India

Authors: Camille Deprez

Abstract:

This paper presents research findings of an ongoing Hong Kong government-funded project on ‘The Documentary Film in India (1948-1975)’ (GRF 1240314), for which extensive research fieldwork has been carried out in various archives in India. The project investigates the role and significance of the Indian documentary film sector from the inauguration of the state-sponsored Films Division, one year after independence in 1948, until the declaration of a ‘State of Emergency’ in 1975. The documentary film production of this first period of national independence was characterised by increasing formal experimentation and analytical social and political enquiry, and by a complex, mixed structure of state-sponsored monopoly and free-market operation. However, that production remains significantly under-researched. What were the main production, distribution, and exhibition strategies over this period? What were the recurrent themes and stylistic features of the films produced? In the new context of national independence (in which the State considered film a means of mass persuasion), consolidation of commercial film, and the emergence of television and art cinema, what role did official, professional, and creative factors play in the development of the documentary film sector? What was the impact of such films, and what challenges did the documentary film face in India? Based upon cross-analysis of primary written research documents, interviews, and relevant films, this study interweaves empirical study of the sector's financing, production, distribution, and exhibition strategies, as well as the films' content and form, with the larger historical context of India from 1948 to 1975. Whilst most of the films made within the sector explored social issues, they were rarely able to do so from an overtly critical perspective.
However, this paper proposes to analyse the contribution of important filmmakers and producers, including Ezra Mir, Paul Zils, Jean Bhownagary, S. Sukhdev, S. N. S. Sastri, and P. Pati, to the development of the Indian documentary film sector and style within and outside the remits of Films Division. It will more specifically assess the extent to which they criticised the State, showed the inequalities in Indian society and explored film form.

Keywords: documentary film, film archives, film history, India

Procedia PDF Downloads 279
1548 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data

Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene

Abstract:

Cutaneous melanoma is a melanocytic skin tumour with a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for disease stage, prognosis, and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel automatic system for non-invasive melanocytic skin tumour (MST) differential diagnosis and penetration depth evaluation. The system comprises region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction, and classification with a linear regression classifier. The segmentation of the MST region in the ultrasound image is based on parametric integrated backscattering coefficient calculation, while the segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated using ultrasound data (11 acoustical, 4 shape, and 15 textural parameters), along with 55 quantitative features of dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood, and collagen SIAgraphs acquired with the SIAscope spectrophotometric imaging device). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz center-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each MST were evaluated during routine histological examination after excision and used as the reference.
The results of this study have shown that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
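Of the processing steps described above, the Otsu thresholding used for optical-image segmentation is straightforward to reproduce. A generic implementation (not the authors' code), demonstrated on synthetic bimodal intensities standing in for lesion and background pixels:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold that maximizes between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()                      # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                          # cumulative class-0 weight
    mu = np.cumsum(p * centers)                # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)           # empty classes contribute nothing
    return centers[int(np.argmax(sigma_b))]

# Synthetic bimodal intensity distribution (illustrative, not study data)
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(50, 10, 5000), rng.normal(150, 10, 5000)])
thr = otsu_threshold(pixels)   # lands between the two modes
```

Pixels above the threshold would be labeled as one class and the rest as the other; on well-separated bimodal data like this, the threshold falls near the midpoint of the two modes.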

Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging

Procedia PDF Downloads 257