Search results for: dummy injury assessment reference values
2521 Body Types of Softball Players in the 39th National Games of Thailand
Authors: Nopadol Nimsuwan, Sumet Prom-in
Abstract:
The purpose of this study was to investigate the body types, size, and body composition of softball players in the 39th National Games of Thailand. The population of this study was 352 softball players who participated in the 39th National Games of Thailand, from which a sample size of 291 was determined using the Taro Yamane formula, and the sample was selected with a stratified sampling method. The data collected were weight, height, arm length, leg length, chest circumference, mid-upper arm circumference, calf circumference, and subcutaneous fat in the upper arm area, the scapula bone area, above the pelvis area, and the mid-calf area. The Keys and Brozek formula was used to calculate the fat quantity, the Kitagawa formula to calculate the muscle quantity, and the Heath and Carter method to determine the values of the body dimensions. The results of the study can be concluded as follows. The average body dimensions of the male softball players corresponded to the endo-mesomorph body type, while the average body dimensions of the female softball players corresponded to the meso-endomorph body type. When considered according to softball positions, it was found that the male softball players in every position had the endo-mesomorph body type, while the female softball players in every position had the meso-endomorph body type, except for the center fielder, who had the endo-ectomorph body type. The endo-mesomorph body type is suitable for male softball players, and the meso-endomorph body type is suitable for female softball players, because these body types suit the five basic softball skills: gripping, throwing, catching, hitting, and base running. Thus, those involved in selecting softball players for competitions at different levels should consider the body type, size, and body composition of the players. Keywords: body types, softball players, national games of Thailand, social sustainability
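A quick numerical check of the sample-size step above, using the Taro Yamane formula n = N / (1 + N·e²). The margin of error e is not stated in the abstract, so the values tried below are assumptions; the sketch only illustrates how the formula behaves for the reported population of 352 players.

```python
def yamane_sample_size(population: int, margin_of_error: float) -> int:
    """Taro Yamane formula: n = N / (1 + N * e**2)."""
    return round(population / (1 + population * margin_of_error ** 2))

if __name__ == "__main__":
    N = 352  # softball players at the 39th National Games (from the abstract)
    # The margin of error is not reported; 0.05 is the textbook default, 0.025 shown for contrast.
    for e in (0.05, 0.025):
        print(f"e = {e:.3f} -> n = {yamane_sample_size(N, e)}")
```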
Procedia PDF Downloads 487
2520 Functional Plasma-Spray Ceramic Coatings for Corrosion Protection of RAFM Steels in Fusion Energy Systems
Authors: Chen Jiang, Eric Jordan, Maurice Gell, Balakrishnan Nair
Abstract:
Nuclear fusion, one of the most promising options for reliably generating large amounts of carbon-free energy in the future, has seen a plethora of ground-breaking technological advances in recent years. An efficient and durable “breeding blanket”, needed to ensure a reactor’s self-sufficiency by maintaining the optimal coolant temperature as well as by minimizing the radiation dosage behind the blanket, still remains a technological challenge for the various reactor designs for commercial fusion power plants. A relatively new dual-coolant lead-lithium (DCLL) breeder design has exhibited great potential for high-temperature (>700°C), high-thermal-efficiency (>40%) fusion reactor operation. However, the structural material, namely reduced activation ferritic-martensitic (RAFM) steel, is not chemically stable in contact with the molten Pb-17%Li coolant. Thus, effective corrosion-resistant coatings on RAFM steels are a pressing need if this promising reactor design is to be utilized. Solution Spray Technologies LLC (SST) is developing a double-layer ceramic coating design to address the corrosion protection of RAFM steels, using a novel solution and solution/suspension plasma spray technology, through a US Department of Energy-funded project. Plasma spray is a coating deposition method widely used in many energy applications. Novel derivatives of the conventional powder plasma spray process, known as the solution-precursor and solution/suspension-hybrid plasma spray processes, are powerful methods for fabricating thin, dense ceramic coatings with the complex compositions necessary for corrosion protection in DCLL breeders. These processes can be used to produce ultra-fine molten splats and to allow fine adjustment of the coating chemistry. Thin, dense ceramic coatings with a chemistry chosen for superior chemical stability in molten Pb-Li, low activation properties, and good radiation tolerance are ideal for the corrosion protection of RAFM steels. A key challenge is to accommodate the coating's CTE mismatch with the RAFM substrate through the selection and incorporation of appropriate bond layers, thus allowing for enhanced coating durability and robustness. Systematic process optimization is being used to define the optimal plasma spray conditions for both the topcoat and the bond layer, and X-ray diffraction and SEM-EDS are applied to validate the chemistry and phase composition of the coatings. The plasma-sprayed double-layer corrosion-resistant coatings were also deposited onto simulated RAFM steel substrates, which are being tested separately under thermal cycling, high-temperature moist air oxidation, and molten Pb-Li capsule corrosion conditions. Results from this testing on coated samples, together with comparisons against bare RAFM reference samples, will be presented, and conclusions will be drawn on the viability of the new ceramic coatings as corrosion prevention systems for DCLL breeders in commercial nuclear fusion reactors. Keywords: breeding blanket, corrosion protection, coating, plasma spray
Procedia PDF Downloads 311
2519 Heavy Metal Contamination in Ship Breaking Yard, A Case Study in Bangladesh
Authors: Mohammad Mosaddik Rahman
Abstract:
This study assesses the pervasive issue of heavy metal contamination in the water bodies along the Chittagong Coast, Bangladesh. Situated along the Bay of Bengal and known for its potential as an emerging tourist haven and economic zone, the ship breaking yard area confronts significant environmental hurdles. The core of these challenges lies in contamination from heavy metals such as lead, cadmium, chromium, and mercury, which detrimentally impact both the ecological integrity and the public health of the region. This contamination primarily stems from industrial activities, particularly those involving metallurgical and chemical processes, which release these metals into the environment, leading to their accumulation in soil and water bodies. The study's primary aim is to conduct a thorough assessment of heavy metal pollution levels, alongside an analysis of nutrient variations, focusing on nitrates and nitrites. Methodologically, the study leverages systematic sampling and analytical tools such as the Hach 3900 spectrophotometer to ensure precise and reliable data collection. The implications of heavy metal presence are multifaceted, affecting microbial and aquatic life and posing severe health risks to the local population, including respiratory problems, neurological disorders, and an increased risk of cancer. The results of this study highlight the urgent need for effective mitigation strategies and regulatory measures to address this critical issue. By providing a comprehensive understanding of the environmental and public health implications of heavy metal contamination along the Chittagong Coast, this research endeavours to serve as a catalyst for change, emphasising the need for pollution control and advancements in water management policies. It is envisioned that the outcomes of this study will guide stakeholders in collaborating to develop and implement sustainable solutions, ultimately safeguarding the region's environment and public health. Keywords: heavy metal, environmental health, pollution control policies, shipbreaking yard
Procedia PDF Downloads 61
2518 Assessment of Acute Oral Toxicity Studies and Anti Diabetic Activity of Herbal Mediated Nanomedicine
Authors: Shanker Kalakotla, Krishna Mohan Gottumukkala
Abstract:
Diabetes is a metabolic disorder characterized by hyperglycemia and altered carbohydrate, lipid, and protein metabolism. In recent research, nanotechnology has become a very active field, and there has been great interest in nanomedicine and nanopharmacology in the synthesis of silver nanoparticles using natural products. Biological methods have been used to synthesize silver nanoparticles in the presence of medicinally active antidiabetic plants, which led us to assess silver nanoparticles biologically synthesized from the seed extract of Psoralea corylifolia using 1 mM silver nitrate solution. The synthesized herbal mediated silver nanoparticles (HMSNPs) were then subjected to various characterization techniques, namely XRD, SEM, EDX, TEM, DLS, UV-vis, and FT-IR. In the current study, the silver nanoparticles were tested for in-vitro anti-diabetic activity and possible toxic effects in healthy female albino mice following OECD guideline 425. Herbal mediated silver nanoparticles were successfully obtained from the bioreduction of silver nitrate using Psoralea corylifolia plant extract. The silver nanoparticles were appropriately characterized and confirmed using different types of equipment, viz., UV-vis spectroscopy, XRD, FTIR, DLS, SEM, and EDX analysis. From the behavioral observations of the study, the female albino mice did not show sedation, respiratory arrest, or convulsions. The test compounds did not cause any mortality at the dose level tested (i.e., 2000 mg/kg body weight) until the end of the 14 days of observation and were considered safe. It may be concluded that the LD50 of the HMSNPs was 2000 mg/kg body weight; accordingly, the preferred dose range for HMSNPs falls between 200 and 400 mg/kg. Further in-vivo pharmacological models and biochemical investigations will clearly elucidate the mechanism of action and will be helpful in projecting the currently synthesized silver nanoparticles as a therapeutic candidate for treating chronic ailments. Keywords: herbal mediated silver nanoparticles, HMSNPs, toxicity of silver nanoparticles, PTP1B in-vitro anti-diabetic assay, female albino mice, 425 OECD guidelines
Procedia PDF Downloads 276
2517 Duration of the Disease in Systemic Sclerosis and Efficiency of Rituximab Therapy
Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva
Abstract:
Objectives: The duration of the disease could be one of the leading factors in the effectiveness of therapy in systemic sclerosis (SSc). The aim of the study was to assess how the duration of the disease affects the changes of lung function in patients(pts) with interstitial lung disease (ILD) associated with SSc during long-term RTX therapy. Methods: We prospectively included 113pts with SSc in this study. 85% of pts were female. Mean age was 48.1±13years. The diffuse cutaneous subset of the disease had 62pts, limited–40, overlap–11. The mean disease duration was 6.1±5.4years. Pts were divided into 2 groups depending on the disease duration - group 1 (less than 5 years-63pts) and group 2 (more than 5 years-50 pts). All pts received prednisolone at mean dose of 11.5±4.6 mg/day and 53 of them - immunosuppressants at inclusion. The parameters were evaluated over the periods: at baseline (point 0), 13±2.3mo (point 1), 42±14mo (point 2) and 79±6.5mo (point 3) after initiation of RTX therapy. Cumulative mean dose of RTX in group 1 at point 1 was 1.7±0.6 g, at point 2 = 3.3±1.5g, at point 3 = 3.9±2.3g; in group 2 at point 1 = 1.6±0.6g, at point 2 = 2.7±1.5 g, at point 3 = 3.7±2.6 g. The results are presented in the form of mean values, delta(Δ), median(me), upper and lower quartile. Results. There was a significant increase of forced vital capacity % predicted (FVC) in both groups, but at points 1 and 2 the improvement was more significant in group 1. In group 2, an improvement of FVC was noted with a longer follow-up. Diffusion capacity for carbon monoxide % predicted (DLCO) remained stable at point 1, and then significantly improved by the 3rd year of RTX therapy in both groups. In group 1 at point 1: ΔFVC was 4.7 (me=4; [-1.8;12.3])%, ΔDLCO = -1.2 (me=-0.3; [-5.3;3.6])%, at point 2: ΔFVC = 9.4 (me=7.1; [1;16])%, ΔDLCO =3.7 (me=4.6; [-4.8;10])%, at point 3: ΔFVC = 13 (me=13.4; [2.3;25.8])%, ΔDLCO = 2.3 (me=1.6; [-5.6;11.5])%. In group 2 at point 1: ΔFVC = 3.4 (me=2.3; [-0.8;7.9])%, ΔDLCO = 1.5 (me=1.5; [-1.9;4.9])%; at point 2: ΔFVC = 7.6 (me=8.2; [0;12.6])%, ΔDLCO = 3.5 (me=0.7; [-1.6;10.7]) %; at point 3: ΔFVC = 13.2 (me=10.4; [2.8;15.4])%, ΔDLCO = 3.6 (me=1.7; [-2.4;9.2])%. Conclusion: Patients with an early SSc have more quick response to RTX therapy already in 1 year of follow-up. Patients with a disease duration more than 5 years also have response to therapy, but with longer treatment. RTX is effective option for the treatment of ILD-SSc, regardless of the duration of the disease.Keywords: interstitial lung disease, systemic sclerosis, rituximab, disease duration
Procedia PDF Downloads 28
2516 Multi-source Question Answering Framework Using Transformers for Attribute Extraction
Authors: Prashanth Pillai, Purnaprajna Mangsuli
Abstract:
Oil exploration and production companies invest considerable time and effort to extract essential well attributes (like well status, surface and target coordinates, wellbore depths, event timelines, etc.) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific by nature. It is also important to consider the context when extracting attribute values from reports that contain information on multiple wells/wellbores. Moreover, semantically similar information may often be depicted in different data syntax representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture was used to rank relevant pages in a document source utilizing the page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer was used to extract attribute-value pairs incorporating the text semantics and layout information from the top relevant pages in a document. To better handle context while dealing with multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The extracted attribute information from various pages and documents is standardized to a common representation using a parser module to facilitate information comparison and aggregation. Finally, we use a probabilistic approach to fuse information extracted from multiple sources into a coherent well record. The applicability and performance of the proposed approach were studied on several real-life well technical reports. Keywords: natural language processing, deep learning, transformers, information retrieval
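A minimal sketch of the retrieve-then-read idea described above, written with the Hugging Face sentence-transformers and transformers libraries. The model names, example pages, and query are illustrative assumptions; in particular, the text-only QA model below stands in for the LayoutLM-based reader, so page-image embeddings, layout features, dynamic query generation, and multi-source fusion are not reproduced.

```python
# Rank report pages by semantic similarity to an attribute query, then run extractive QA
# on the top-ranked pages and keep the best span. Models and pages are placeholders.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

pages = [
    "Well A-12 was spudded on 3 March 2015. Total measured depth: 3,450 m.",
    "The quarterly HSE summary lists zero lost-time incidents.",
    "Surface coordinates of well A-12: 29.74 N, 48.12 E.",
]

retriever = SentenceTransformer("all-MiniLM-L6-v2")                      # semantic text embeddings
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

def extract_attribute(question: str, top_k: int = 2):
    # 1) information-retrieval stage: rank pages against the attribute query
    scores = util.cos_sim(retriever.encode(question), retriever.encode(pages))[0]
    top = scores.argsort(descending=True)[:top_k]
    # 2) reading stage: extractive QA on the top pages, keep the highest-scoring answer span
    answers = [reader(question=question, context=pages[int(i)]) for i in top]
    return max(answers, key=lambda a: a["score"])

print(extract_attribute("What is the total depth of well A-12?"))
```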
Procedia PDF Downloads 195
2515 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e. individual anomalies) or as clusters (i.e. a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and their associated sizing algorithms, and to the detection threshold of the tools (i.e. the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy the safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature, and this error is investigated in the present study. Limitations in the ILI tool and clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify the Type I and Type II anomalies based on the ILI-reported corrosion anomaly information. Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
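A hedged illustration of the core comparison in the framework above: pairing ILI-reported anomaly lengths with field measurements and summarizing the measurement error separately for anomalies with and without clustering error (Type II vs. Type I). The data are synthetic placeholders, not the Alberta pipeline measurements, and the simple bias/scatter statistics are only one possible way to characterize the error.

```python
import numpy as np

# Synthetic paired measurements (mm); a real analysis would use matched ILI/field records.
rng = np.random.default_rng(0)
field = rng.uniform(20, 200, size=200)                      # field-measured anomaly lengths
is_type2 = rng.random(200) < 0.3                            # anomalies affected by clustering error
noise = np.where(is_type2, rng.normal(15, 30, 200), rng.normal(0, 6, 200))
ili = field + noise                                         # ILI-reported lengths

def error_stats(reported, actual):
    err = reported - actual
    return {"mean_error": round(err.mean(), 1),
            "std_error": round(err.std(ddof=1), 1),
            "mean_abs_error": round(np.abs(err).mean(), 1)}

print("Type I  (no clustering error):", error_stats(ili[~is_type2], field[~is_type2]))
print("Type II (clustering error):   ", error_stats(ili[is_type2], field[is_type2]))
```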
Procedia PDF Downloads 310
2514 Clinical and Epidemiological Profile of Patients with Chronic Obstructive Pulmonary Disease in a Medical Institution from the City of Medellin, Colombia
Authors: Camilo Andres Agudelo-Velez, Lina María Martinez-Sanchez, Natalia Perilla-Hernandez, Maria De Los Angeles Rodriguez-Gazquez, Felipe Hernandez-Restrepo, Dayana Andrea Quintero-Moreno, Camilo Ruiz-Mejia, Isabel Cristina Ortiz-Trujillo, Monica Maria Zuluaga-Quintero
Abstract:
Chronic obstructive pulmonary disease is a common condition, characterized by persistent, partially reversible, and progressive airflow obstruction, that accounts for 5% of total deaths around the world and is expected to become the third leading cause of death by 2030. Objective: To establish the clinical and epidemiological profile of patients with chronic obstructive pulmonary disease in a medical institution in the city of Medellin, Colombia. Methods: A cross-sectional study was performed with a sample of 50 patients with a diagnosis of chronic obstructive pulmonary disease in a private institution in Medellin during 2015. The software SPSS vr. 20 was used for the statistical analysis. For the quantitative variables, averages, standard deviations, and maximum and minimum values were calculated, while for ordinal and nominal qualitative variables, proportions were estimated. Results: The average age was 73.5±9.3 years, 52% of the patients were women, 50% were retired, 46% were married, and 80% lived in the city of Medellín. The mean time since diagnosis was 7.8±1.3 years, and 100% of the patients were treated at the internal medicine service. The most common clinical features were: 36% were classified as class D for the disease, 34% had an FEV1 <30%, 88% had a history of smoking, and 52% had oxygen therapy at home. Conclusion: It was found that class D was the most common, and the majority of the patients had a history of smoking, indicating the need to strengthen promotion and prevention strategies in this regard. Keywords: pulmonary disease, chronic obstructive, pulmonary medicine, oxygen inhalation therapy
Procedia PDF Downloads 448
2513 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case
Authors: Arzu K. Kamberli, Tolga Ulusoy
Abstract:
Technological developments and the resulting global communication have made the 21st century an era in which large amounts of capital are moved from one end of the world to the other at the push of a button. As a result, capital flows have accelerated, and capital inflows have brought crisis contagion with them. Given irrational human behavior, financial crises have become a fundamental problem for countries worldwide and have increased researchers' interest in the causes of crises and the periods in which they occur. The complex, nonlinearly unexplained structure of financial crises has therefore been taken up by the new discipline of econophysics. As is well known, although financial crises have prediction mechanisms, there is no definite information. In this context, this study uses the concept of the electric field from electrostatics to develop an early econophysical approach for global financial crises. The aim is to define a model that operates before a financial crisis, identifies financial fragility at an earlier stage, and helps public and private sector actors, policy makers, and economists through an econophysical approach. The 2001 Turkey crisis was assessed with data from the Central Bank of Turkey covering 1992 to 2007, and for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina from 1997 to 2007. As an econophysical method, an analogy is drawn between Gauss's law, used to calculate the electric field, and the forecasting of financial crises. Exploiting this analogy, the concept of Φ (Financial Flux), based on currency movements and money mobility, is adopted for crisis pre-warning. The Φ (Financial Flux) values, used for the first time in this study and obtained from the formula, were analyzed in Matlab software, and in this context the Φ (Financial Flux) values were confirmed to give pre-warning for the 2001 Turkey and Argentina crises. Keywords: econophysics, financial crisis, Gauss's Law, physics
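For reference, Gauss's law, the electrostatic starting point of the analogy, is shown below. The abstract does not give the explicit expression for Φ (Financial Flux), so only the physical law and the generic shape of the analogy are stated; the financial quantities named on the second line are assumptions for illustration, not the authors' formula.

```latex
% Gauss's law (electrostatics); the paper maps this flux concept onto capital movements.
% The financial analogue shown is only a schematic assumption.
\begin{align}
  \Phi_E &= \oint_S \mathbf{E}\cdot d\mathbf{A} \;=\; \frac{Q_{\mathrm{enc}}}{\varepsilon_0}
          && \text{(electric flux through a closed surface)} \\
  \Phi_F &\sim \sum_i (\text{capital flow})_i
          && \text{(a ``financial flux'' aggregated over currency movements and money mobility)}
\end{align}
```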
Procedia PDF Downloads 155
2512 Effect of Mixture of Flaxseed and Pumpkin Seeds Powder on Hypercholesterolemia
Authors: Zahra Ashraf
Abstract:
Flax and pumpkin seeds are a rich source of unsaturated fatty acids, antioxidants, and fiber, and are known to have anti-atherogenic properties. Hypercholesterolemia is a state characterized by an elevated level of cholesterol in the blood. This research was designed to study the effect of a flax and pumpkin seed powder mixture on hypercholesterolemia and body weight. Rats were selected as a representative model for humans. Thirty male albino rats were divided into three groups: a control group, a CD-chol group (control diet + cholesterol) fed with 1.5% cholesterol, and an FP-chol group (flaxseed and pumpkin seed powder + cholesterol) fed with 1.5% cholesterol. The flax and pumpkin seed powders were mixed at a proportion of 5/1 (omega-3 to omega-6). Blood samples were collected to examine the lipid profile, and body weight was also measured. The data were subjected to analysis of variance. In the CD-chol group, body weight, plasma total cholesterol (TC), triacylglycerides (TG), plasma LDL-C, and their ratio increased significantly, with a decrease in plasma HDL (good cholesterol). In the FP-chol group, lipid parameters and body weights decreased significantly, with an increase in HDL and a decrease in LDL (bad cholesterol). The mean values of body weight, total cholesterol, triglycerides, low density lipoprotein, and high density lipoprotein in the FP-chol group were 240.66±11.35 g, 59.60±2.20 mg/dl, 50.20±1.79 mg/dl, 36.20±1.62 mg/dl, and 36.40±2.20 mg/dl, respectively. The flaxseed and pumpkin seed powder mixture reduced body weight, serum cholesterol, low density lipoprotein, and triglycerides, while a significant increase was shown in high density lipoprotein, when given to hypercholesterolemic rats. Our results suggest that the flax and pumpkin seed mixture has hypocholesterolemic effects, probably mediated by the polyunsaturated fatty acids (omega-3 and omega-6) present in the seed mixture. Keywords: hypercholesterolemia, omega 3 and omega 6 fatty acids, cardiovascular diseases
Procedia PDF Downloads 421
2511 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index
Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad
Abstract:
A road safety performance index is a composite index which combines various indicators of road safety into a single number. Developing a road safety performance index from appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given as much priority as needed. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on the facility as well as the behavior of road users. The secondary objectives include finding the critical inputs to the RSPI and identifying the better method of constructing the index. In this study, the RSPI is developed by selecting four main safety performance indicators, i.e., protective system (seat belt, helmet, etc.), road (road width, signalized intersections, number of lanes, speed limit), number of pedestrians, and number of vehicles. Data on these four safety performance indicators were collected through an observation survey on a 20 km road section of National Highway N-125, Taxila, Pakistan. For the development of this composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for the extraction, weighting, and linear aggregation of indicators to obtain a single value; an individual index score was calculated for each road section by multiplying the weights by the standardized values of each safety performance indicator. The Simple Average technique, in turn, was used for the weighting and linear aggregation of indicators to develop an RSPI. The road sections are ranked according to RSPI scores using both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the Simple Average technique. Keywords: indicators, aggregation, principal component analysis, weighting, index score
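The sketch below illustrates the two aggregation schemes compared in the study: PCA-derived weights versus equal weights applied to standardized indicators. The indicator matrix is a made-up stand-in for the N-125 survey data, and using the absolute loadings of the first principal component as weights is one common convention, not necessarily the exact weighting used by the authors.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = road sections, columns = [protective-system use, road score, pedestrians, vehicles]
# (toy values; indicator sign conventions are ignored in this illustration)
X = np.array([[0.62, 7.0, 120, 900],
              [0.48, 5.5, 310, 1500],
              [0.71, 8.0,  80, 700],
              [0.55, 6.0, 220, 1200]], dtype=float)

Z = StandardScaler().fit_transform(X)

# a) PCA weighting: absolute loadings of the first principal component, normalized to sum to 1
pca = PCA(n_components=1).fit(Z)
w_pca = np.abs(pca.components_[0])
w_pca /= w_pca.sum()
rspi_pca = Z @ w_pca

# b) Equal weighting: simple average of the standardized indicators
rspi_ew = Z.mean(axis=1)

for name, scores in (("PCA", rspi_pca), ("Equal weights", rspi_ew)):
    ranking = np.argsort(-scores) + 1          # section numbers ordered by decreasing index score
    print(f"{name:14s} scores = {np.round(scores, 2)}  ranking = {ranking}")
```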
Procedia PDF Downloads 162
2510 Deconstructing Reintegration Services for Survivors of Human Trafficking: A Feminist Analysis of Australian and Thai Government and Non-Government Responses
Authors: Jessica J. Gillies
Abstract:
Awareness of the tragedy that is human trafficking has increased exponentially over the past two decades. The four pillars widely recognised as global solutions to the problem are prevention, prosecution, protection, and partnership between government and non-government organisations. While ‘sex-trafficking’ initially received major attention, this focus has shifted to other industries that conceal broader experiences of exploitation. However, within the regions of focus for this study, namely Australia and Thailand, trafficking for the purpose of sexual exploitation remains the commonly uncovered narrative of criminal justice investigations. In these regions anti-trafficking action is characterised by government-led prevention and prosecution efforts; whereas protection and reintegration practices have received criticism. Typically, non-government organisations straddle the critical chasm between policy and practice; therefore, they are perfectly positioned to contribute valuable experiential knowledge toward understanding how both sectors can support survivors in the post-trafficking experience. The aim of this research is to inform improved partnerships throughout government and non-government post-trafficking services by illuminating gaps in protection and reintegration initiatives. This research will explore government and non-government responses to human trafficking in Thailand and Australia, in order to understand how meaning is constructed in this context and how the construction of meaning effects survivors in the post-trafficking experience. A qualitative, three-stage methodology was adopted for this study. The initial stage of enquiry consisted of a discursive analysis, in order to deconstruct the broader discourses surrounding human trafficking. The data included empirical papers, grey literature such as publicly available government and non-government reports, and anti-trafficking policy documents. The second and third stages of enquiry will attempt to further explore the findings of the discourse analysis and will focus more specifically on protection and reintegration in Australia and Thailand. Stages two and three will incorporate process observations in government and non-government survivor support services, and semi-structured interviews with employees and volunteers within these settings. Two key findings emerged from the discursive analysis. The first exposed conflicting feminist arguments embedded throughout anti-trafficking discourse. Informed by conflicting feminist discourses on sex-work, a discursive relationship has been constructed between sex-industry policy and anti-trafficking policy. In response to this finding, data emerging from the process observations and semi-structured interviews will be interpreted using a feminist theoretical framework. The second finding progresses from the construction in the first. The discursive construction of sex-trafficking appears to have had influence over perceptions of the legitimacy of survivors, and therefore the support they receive in the post-trafficking experience. For example; women who willingly migrate for employment in the sex-industry, and on arrival are faced with exploitative conditions, are not perceived to be deserving of the same support as a woman who is not coerced, but rather physically forced, into such circumstances, yet both meet the criteria for a victim of human trafficking. 
The forthcoming study is intended to contribute toward building knowledge and understanding of the implications of the construction of legitimacy, and to contextualise this in reference to government-led protection and reintegration support services for survivors in the post-trafficking experience. Keywords: Australia, government, human trafficking, non-government, reintegration, Thailand
Procedia PDF Downloads 113
2509 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique, centered on the flashing characteristics of fireflies. In this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. These means are then used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. The validation has been performed by using different standard measures, more precisely: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results have strongly confirmed the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a considerable reduction of the computational cost. Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
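A compact sketch of the pipeline described above: a firefly search over candidate cluster means on the gray-level histogram, followed by a Gaussian Mixture Model initialised with those means and pixel assignment by maximum posterior probability. The firefly parameters, the fixed number of clusters, and the synthetic pixel data are assumptions; the paper estimates the number of clusters from the histogram rather than fixing it.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

def histogram_cost(means, levels, counts):
    """Count-weighted distance of every gray level to its nearest candidate mean (lower is better)."""
    d = np.abs(levels[:, None] - means[None, :]).min(axis=1)
    return float((counts * d).sum())

def firefly_means(levels, counts, k=3, n_fireflies=15, n_iter=60, beta0=1.0, gamma=1e-5, alpha=2.0):
    pop = rng.uniform(0, 255, size=(n_fireflies, k))            # candidate sets of k means
    cost = np.array([histogram_cost(p, levels, counts) for p in pop])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:                           # firefly i moves toward brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((pop[i] - pop[j]) ** 2))
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(k) - 0.5)
                    pop[i] = np.clip(pop[i], 0, 255)
                    cost[i] = histogram_cost(pop[i], levels, counts)
    return np.sort(pop[cost.argmin()])

# Synthetic "image": three grayscale populations standing in for a real image's pixels
pixels = np.concatenate([rng.normal(60, 10, 4000),
                         rng.normal(130, 12, 4000),
                         rng.normal(200, 8, 4000)]).clip(0, 255)
levels = np.arange(256, dtype=float)
counts = np.bincount(pixels.astype(int), minlength=256)

means0 = firefly_means(levels, counts, k=3)
gmm = GaussianMixture(n_components=3, means_init=means0.reshape(-1, 1), random_state=0)
labels = gmm.fit_predict(pixels.reshape(-1, 1))                 # argmax of posterior per pixel
print("firefly cluster means:", np.round(means0, 1))
print("GMM component means:  ", np.round(np.sort(gmm.means_.ravel()), 1))
```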
Procedia PDF Downloads 217
2508 Investigating the Relationship between Moral Hazard and Corporate Governance with Earning Forecast Quality in the Tehran Stock Exchange
Authors: Fatemeh Rouhi, Hadi Nassiri
Abstract:
Earnings forecasts are a key element in economic decisions, but factors such as conflicts of interest in financial reporting, complexity, and the lack of direct access to information have led to information asymmetry between individuals within the organization and external investors and creditors. This asymmetry gives rise to adverse selection and moral hazard in investors' decisions and makes it difficult for users to assess the data directly. In this regard, the role of corporate governance and disclosure becomes clear: it comprises controls and procedures intended to prevent management from acting in its own interests and to steer decisions toward maximizing shareholder and company value. Given the importance of earnings forecasts for companies in the capital market and the need to identify the factors that influence them, this study attempts to establish the relationship between moral hazard and corporate governance and the earnings forecast quality of companies operating in the capital market. Drawing on the theoretical basis of the research, two main hypotheses and several sub-hypotheses are presented and examined on the basis of available models using the panel data method; conclusions are drawn at the 95% confidence level according to the significance of the model and of each independent variable. In examining the models, the Chow test was first used to determine whether the panel data method or the pooled method should be used; the Hausman test was then applied to choose between random effects and fixed effects. The findings show that, because most of the variables linking moral hazard to earnings forecast quality are positively associated, the earnings forecast quality of companies listed on the Tehran Stock Exchange increases as moral hazard increases. Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant. Keywords: corporate governance, earning forecast quality, moral hazard, financial sciences
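A hedged sketch of the estimation workflow mentioned above (fixed effects, random effects, and a Hausman comparison), using the Python linearmodels package on simulated data. The variable names, the simulated panel, and the simple Hausman computation are illustrative assumptions, not the Tehran Stock Exchange dataset or the study's exact model specification.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects
from scipy.stats import chi2

rng = np.random.default_rng(0)
firms, years = 50, 8
idx = pd.MultiIndex.from_product([range(firms), range(years)], names=["firm", "year"])
df = pd.DataFrame({
    "moral_hazard_proxy": rng.normal(0, 1, firms * years),
    "board_independence": rng.uniform(0.2, 0.9, firms * years),
    "board_size": rng.integers(5, 13, firms * years).astype(float),
}, index=idx)
firm_effect = np.repeat(rng.normal(0, 0.5, firms), years)          # unobserved firm heterogeneity
df["forecast_quality"] = (0.8 * df["moral_hazard_proxy"] + 0.5 * df["board_independence"]
                          + firm_effect + rng.normal(0, 0.3, firms * years))

X = df[["moral_hazard_proxy", "board_independence", "board_size"]]
y = df["forecast_quality"]

fe = PanelOLS(y, X, entity_effects=True).fit()      # fixed effects (within estimator)
re = RandomEffects(y, X).fit()                      # random effects

# Hand-rolled Hausman test: a significant statistic favours the fixed-effects specification
b = (fe.params - re.params).values
v = (fe.cov - re.cov).values
H = float(b @ np.linalg.pinv(v) @ b)
print("FE coefficients:", fe.params.round(3).to_dict())
print(f"Hausman H = {H:.2f}, p-value = {chi2.sf(H, df=len(b)):.3f}")
```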
Procedia PDF Downloads 324
2507 Subsurface Structures Delineation and Tectonic History Investigation Using Gravity, Magnetic and Well Data, in the Cyrenaica Platform, NE Libya
Authors: Mohamed Abdalla Saleem
Abstract:
Around one hundred wells have been drilled in the Cyrenaica platform, north-east Libya, and almost all of them were dry, although the drilled samples reveal good oil shows and good source rock maturity. Most of the Upper Cretaceous and younger depositional successions crop out in different places, so the structures related to the Cretaceous and the overlying Cenozoic are well understood and well mapped, but the subsurface beneath these outcrops still needs more investigation and delineation. This study aims to answer some questions about the tectonic history and the types of structures distributed in the area using gravity, magnetic, and well data. According to the information obtained from groups of wells drilled in concessions 31, 35, and 37, the depositional sections become thicker and deeper southward. The topography map of the study area shows that the area is highly elevated in the north, about 300 m above sea level, while the minimum elevation (16-18 m) occurs near the middle (lat. 30°); south of this latitude, the area rises again to more than 100 m. The third-order residual gravity map, constructed from the Bouguer gravity map, reveals that the area is dominated by a large negative anomaly acting as a sub-basin (245 km x 220 km), which implies a very thick depositional section and a very deep basement. This depocenter is surrounded by four gravity highs (12-37 mGal), which indicate a shallow basement and a relatively thinner succession of sediments; the highest gravity values are located along the coastline. The total horizontal gradient (THG) map reveals several structural systems: the first system, in which the structures are oriented NE-SW, is crosscut by a second system extending NW-SE. This second system is distributed throughout the whole area, but it is very strong and shallow near the coastline and in the southern part, while it is relatively deep in the central depocenter area. Keywords: cyrenaica platform, gravity, structures, basement, tectonic history
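A short sketch of the total horizontal gradient computation referred to above, THG = sqrt((∂g/∂x)² + (∂g/∂y)²), applied to a synthetic residual-gravity grid; the grid, anomaly shapes, and spacing are invented for illustration and do not reproduce the Cyrenaica data.

```python
import numpy as np

nx, ny, spacing_km = 200, 220, 1.0
x = np.arange(nx) * spacing_km
y = np.arange(ny) * spacing_km
X, Y = np.meshgrid(x, y)

# Toy residual anomaly: a broad negative "sub-basin" flanked by a positive gravity high (mGal)
g = (-30 * np.exp(-(((X - 100) / 45) ** 2 + ((Y - 110) / 50) ** 2))
     + 15 * np.exp(-(((X - 170) / 25) ** 2 + ((Y - 40) / 25) ** 2)))

dg_dy, dg_dx = np.gradient(g, spacing_km)      # numpy returns gradients in (row, column) order
thg = np.hypot(dg_dx, dg_dy)                   # mGal/km; THG maxima trace structural edges

iy, ix = np.unravel_index(np.argmax(thg), thg.shape)
print(f"max THG = {thg.max():.2f} mGal/km at x = {x[ix]:.0f} km, y = {y[iy]:.0f} km")
```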
Procedia PDF Downloads 17
2506 Balanced Scorecard (BSC) Project: A Methodological Proposal for Decision Support in a Corporate Scenario
Authors: David de Oliveira Costa, Miguel Ângelo Lellis Moreira, Carlos Francisco Simões Gomes, Daniel Augusto de Moura Pereira, Marcos dos Santos
Abstract:
Strategic management is a fundamental process for global companies that intend to remain competitive in an increasingly dynamic and complex market. To do so, it is necessary to maintain alignment with their principles and values. The Balanced Scorecard (BSC) proposes to ensure that the overall business performance is based on different perspectives (financial, customer, internal processes, and learning and growth). However, relying solely on the BSC may not be enough to ensure the success of strategic management. It is essential that companies also evaluate and prioritize strategic projects that need to be implemented to ensure they are aligned with the business vision and contribute to achieving established goals and objectives. In this context, the proposition involves the incorporation of the SAPEVO-M multicriteria method to indicate the degree of relevance between different perspectives. Thus, the strategic objectives linked to these perspectives have greater weight in the classification of structural projects. Additionally, it is proposed to apply the concept of the Impact & Probability Matrix (I&PM) to structure and ensure that strategic projects are evaluated according to their relevance and impact on the business. By structuring the business's strategic management in this way, alignment and prioritization of projects and actions related to strategic planning are ensured. This ensures that resources are directed towards the most relevant and impactful initiatives. Therefore, the objective of this article is to present the proposal for integrating the BSC methodology, the SAPEVO-M multicriteria method, and the prioritization matrix to establish a concrete weighting of strategic planning and obtain coherence in defining strategic projects aligned with the business vision. This ensures a robust decision-making support process.Keywords: MCDA process, prioritization problematic, corporate strategy, multicriteria method
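A simplified sketch of the prioritisation logic proposed above: perspective weights (which the proposal would derive with the SAPEVO-M method, a step not reproduced here), project alignment scores per BSC perspective, and an impact x probability adjustment. All weights and scores below are illustrative assumptions.

```python
import numpy as np

perspectives = ["financial", "customer", "internal processes", "learning & growth"]
weights = np.array([0.35, 0.30, 0.20, 0.15])          # assumed stand-in for SAPEVO-M output

projects = {
    # name: (alignment scores per perspective on 0-10, impact 0-1, probability of success 0-1)
    "ERP upgrade":        ([8, 5, 9, 6], 0.9, 0.7),
    "Customer portal":    ([5, 9, 6, 5], 0.7, 0.8),
    "Training programme": ([3, 4, 5, 9], 0.5, 0.9),
}

ranking = []
for name, (scores, impact, prob) in projects.items():
    bsc_score = float(weights @ np.array(scores))     # weighted alignment with the strategy
    priority = bsc_score * impact * prob              # Impact & Probability Matrix adjustment
    ranking.append((priority, name, bsc_score))

for priority, name, bsc_score in sorted(ranking, reverse=True):
    print(f"{name:20s} BSC score = {bsc_score:.2f}  priority = {priority:.2f}")
```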
Procedia PDF Downloads 83
2505 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process
Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei
Abstract:
Sodium Hexametaphosphate (SHMP) is commonly used as an Emulsifying Salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, Casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of the concentration of SHMP (0.25-2.75%) and holding time (0-20 min) on the textural and Rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of process cheese (as indicated by the decrease in loss tangent parameter from small amplitude oscillatory rheology, degree of flow, and melt area from the Schreiber test) decreased with an increase in the concentration of SHMP. Holding time also led to a slight reduction in meltability. Hardness of process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, was shifted to lower pH values with increasing concentration of SHMP. The insoluble Ca and total and insoluble P contents increased as concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because of some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate content and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN. Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of process cheese.Keywords: Iranian processed cheese, emulsifying salt, rheology, texture
Procedia PDF Downloads 434
2504 Compositional Assessment of Fermented Rice Bran and Rice Bran Oil and Their Effect on High Fat Diet Induced Animal Model
Authors: Muhammad Ali Siddiquee, Md. Alauddin, Md. Omar Faruque, Zakir Hossain Howlader, Mohammad Asaduzzaman
Abstract:
Rice bran (RB) and rice bran oil (RBO) are explored as prominent food components worldwide. In this study, fermented rice bran (FRB) was produced by employing edible gram-positive bacteria (Lactobacillus acidophilus, Lactobacillus bulgaricus, and Bifidobacterium bifidum) at 125 x 10⁵ spore g⁻¹ of rice bran, and investigated to evaluate nutritional quality. The crude rice bran oil (CRBO) was extracted from RB, and its quality was also investigated compared to market-available rice bran oil (MRBO) in Bangladesh. We found that fermentation of rice bran with lactic acid bacteria increased total proteins (29.52%), fat (5.38%), ash (48.47%), crude fiber (38.96%), and moisture (61.04%) and reduced the carbohydrate content (36.61%). We also found that essential amino acids (methionine, tryptophan, threonine, valine, leucine, lysine, histidine, and phenylalanine) and non-essential amino acids (alanine, aspartate, glycine, glutamine, proline, serine, and tyrosine) were increased in FRB except methionine and proline. Moreover, total phenolic content, tannin content, flavonoid content, and antioxidant activity were increased in FRB. The RBO analysis showed that γ-oryzanol content (10.00mg/g) was found in CRBO compared to MRBO (ranging from 7.40 to 12.70 mg/g) and Vitamin-E content 0.20% was found higher in CRBO compared to MRBO (ranging 0.097 to 0.12%). The total saturated (25.16%) and total unsaturated fatty acids (74.44%) were found in CRBO, whereas MRBO contained total saturated (22.08 to 24.13%) and total unsaturated fatty acids (71.91 to 83.29%), respectively. The physiochemical parameters were found satisfactory in all samples except acid value and peroxide value higher in CRBO. Finally, animal experiments showed that FRB and CRBO reduce the body weight, glucose, and lipid profile in high-fat diet-induced animal models. Thus, FRB and RBO could be value-added food supplements for human health.Keywords: fermented rice bran, crude rice bran oil, amino acids, proximate composition, gamma-oryzanol, fatty acids, heavy metals, physiochemical parameters
Procedia PDF Downloads 68
2503 Carbon Pool Assessment in Community Forests, Nepal
Authors: Medani Prasad Rijal
Abstract:
The forest is both a factory and a product: it supplies tangible and intangible goods and services, including timber, fuel wood, fodder, grass, leaf litter, and non-timber edible goods as well as medicinal and aromatic products, and it additionally provides environmental services. These environmental services are of local, national, or even global importance. In Nepal, more than 19 thousand community forests provide environmental services with less economic benefit than their actual capacity, and there is a risk that the cost of managing those forests will exceed the benefits and that the forests will be converted into open-access resources in the future. Most environmental goods and services do not have markets, which means there are no prices at which they are available to consumers; therefore, valuing these services, establishing payment mechanisms for them, and ensuring the benefit reaches the community are highly relevant at both local and global scales. There are few examples of domestic carbon trading to meet the country-wide emission goal. In this context, the study aims to explore public attitudes towards carbon offsetting and the public's responsibility towards service providers. This study helps promote awareness of environmental services among the general public, service providers, and community forests. The research unveils the carbon pool scenario in community forests and the willingness to pay for carbon offsetting of people who consume more energy than the general population and emit relatively more carbon into the atmosphere. The study assessed the carbon pool status in two community forests and valued the carbon service from community forests through willingness to pay in Dharan municipality, situated in eastern Nepal. In the two community forests, carbon pools were assessed following the guideline "Forest Carbon Inventory Guideline 2010" prescribed by the Ministry of Forest and Soil Conservation, Nepal. The final outcome of the analysis in the intensively managed area of Hokse CF was recorded as 103.58 tons C/ha, with a carbon stock of 6173.30 tons. Similarly, in Hariyali CF the carbon density was recorded as 251.72 Mg C/ha, and the total carbon stock of the intensively managed blocks in Hariyali CF is 35839.62 tons of carbon. Keywords: carbon, offsetting, sequestration, valuation, willingness to pay
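A worked example of the stock arithmetic behind the figures above: total carbon stock equals the per-hectare carbon density times the block area. The Hokse and Hariyali densities and stocks come from the abstract; the areas printed are simply back-calculated from them and are not measured values.

```python
def carbon_stock(density_t_per_ha: float, area_ha: float) -> float:
    """Total carbon stock (t C) = carbon density (t C/ha) x block area (ha)."""
    return density_t_per_ha * area_ha

# Areas back-calculated from the reported density/stock pairs, purely for illustration.
hokse_area = 6173.30 / 103.58
hariyali_area = 35839.62 / 251.72
print(f"Hokse CF:    {carbon_stock(103.58, hokse_area):9.2f} t C over ~{hokse_area:6.1f} ha")
print(f"Hariyali CF: {carbon_stock(251.72, hariyali_area):9.2f} t C over ~{hariyali_area:6.1f} ha")
```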
Procedia PDF Downloads 356
2502 The Reality of Teaching Arabic for Specific Purposes in Educational Institutions
Authors: Mohammad Anwarul Kabir, Fayezul Islam
Abstract:
Language invariably is learned / taught to be used primarily as means of communications. Teaching a language for its native audience differs from teaching it to non-native audience. Moreover, teaching a language for communication only is different from teaching it for specific purposes. Arabic language is primarily regarded as the language of the Quran and the Sunnah (Prophetic tradition). Arabic is, therefore, learnt and spread all over the globe. However, Arabic is also a cultural heritage shared by all Islamic nations which has used Arabic for a long period to record the contributions of Muslim thinkers made in the field of wide spectrum of knowledge and scholarship. That is why the phenomenon of teaching Arabic by different educational institutes became quite rife, and the idea of teaching Arabic for specific purposes is heavily discussed in the academic sphere. Although the number of learners of Arabic is increasing consistently, yet their purposes vary. These include religious purpose, international trade, diplomatic purpose, better livelihood in the Arab world extra. By virtue of this high demand for learning Arabic, numerous institutes have been established all over the world including Bangladesh. This paper aims at focusing on the current status of the language institutes which has been established for learning Arabic for specific purposes in Bangladesh including teaching methodology, curriculum, and teachers’ quality. Such curricula and using its materials resulted in a lot of problems. The least, it confused teachers and students as well. Islamic educationalists have been working hard to professionally meet the need. They are following a systematic approach of stating clear and achievable goals, building suitable content, and applying new technology to present these learning experiences and evaluate them. It also suggests a model for designing instructional systems that responds to the need of non-Arabic speaking Islamic communities and provide the knowledge needed in both linguistic and cultural aspects. It also puts forward a number of suggestions for the improvement of the teaching / learning Arabic for specific purposes in Bangladesh after a detailed investigation in the following areas: curriculum, teachers’ skills, method of teaching and assessment policy.Keywords: communication, Quran, sunnah, educational institutes, specific purposes, curriculum, method of teaching
Procedia PDF Downloads 282
2501 Predicting Low Birth Weight Using Machine Learning: A Study on 53,637 Ethiopian Birth Data
Authors: Kehabtimer Shiferaw Kotiso, Getachew Hailemariam, Abiy Seifu Estifanos
Abstract:
Introduction: Although low birth weight (LBW) accounts for the largest share of neonatal mortality and morbidity, predicting LBW births in time to prepare better interventions is challenging. This study aims to predict LBW using a dataset of 53,637 birth records collected from 36 primary hospitals across seven regions in Ethiopia from February 2022 to June 2024. Methods: We identified ten explanatory variables related to maternal and neonatal characteristics, including maternal education, age, residence, history of miscarriage or abortion, history of preterm birth, type of pregnancy, number of livebirths, number of stillbirths, antenatal care frequency, and sex of the fetus, to predict LBW. Using WEKA 3.8.2, we developed and compared seven machine learning algorithms. Data preprocessing included handling missing values, outlier detection, and ensuring data integrity in birth weight records. Model performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and area under the Receiver Operating Characteristic curve (ROC AUC) using 10-fold cross-validation. Results: The results demonstrated that the decision tree, J48, logistic regression, and gradient boosted trees models achieved the highest accuracy (94.5% to 94.6%) with a precision of 93.1% to 93.3%, an F1-score of 92.7% to 93.1%, and a ROC AUC of 71.8% to 76.6%. Conclusion: This study demonstrates the effectiveness of machine learning models in predicting LBW. The high accuracy and recall rates achieved indicate that these models can serve as valuable tools for healthcare policymakers and providers in identifying at-risk newborns and implementing timely interventions to achieve the Sustainable Development Goal (SDG) related to neonatal mortality. Keywords: low birth weight, machine learning, classification, neonatal mortality, Ethiopia
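An analogous sketch of the classification workflow in scikit-learn (the study itself used WEKA 3.8.2): a gradient-boosted tree classifier on maternal and neonatal predictors, evaluated with accuracy, precision, recall, F1, and ROC AUC under 10-fold cross-validation. The feature set is only a subset of the ten variables listed, and the data are randomly generated placeholders, not the 53,637-record cohort.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(42)
n = 2000
X = pd.DataFrame({
    "maternal_age": rng.integers(15, 45, n),
    "anc_visits": rng.integers(0, 9, n),        # antenatal care frequency
    "history_preterm": rng.integers(0, 2, n),
    "history_abortion": rng.integers(0, 2, n),
    "stillbirths": rng.poisson(0.1, n),
    "fetus_male": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the risk factors (illustrative only)
risk = 0.8 * X["history_preterm"] - 0.15 * X["anc_visits"] + 0.4 * X["stillbirths"]
y = (risk + rng.normal(0, 1, n) > 0.5).astype(int)          # 1 = low birth weight

cv = cross_validate(GradientBoostingClassifier(random_state=0), X, y, cv=10,
                    scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
for metric in ("accuracy", "precision", "recall", "f1", "roc_auc"):
    print(f"{metric:9s}: {cv['test_' + metric].mean():.3f}")
```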
Procedia PDF Downloads 31
2500 Groundwater Investigation Using Resistivity Method and Drilling for Irrigation during the Dry Season in Lwantonde District, Uganda
Authors: Tamale Vincent
Abstract:
Groundwater investigation is the study of underground formations to understand the hydrologic cycle and known groundwater occurrences and to identify the nature and types of aquifers. There are different groundwater investigation methods; among the surface geophysical methods, the geoelectrical resistivity Schlumberger configuration in particular provides valuable information on the lateral and vertical successions of subsurface geomaterials in terms of their individual thicknesses and corresponding resistivity values. Besides the surface geophysical method, hydrogeological and geological investigation methods are also incorporated to aid the preliminary groundwater investigation. A groundwater investigation has been carried out in Lwantonde District. The project area lies in the cattle corridor, and the dry season troubles the communities in Lwantonde District, where 99% of the people are farmers, making agriculture difficult and making it hard for the local government to provide social services to its people. The investigation was done using the geoelectrical resistivity Schlumberger configuration method. The measurement points are located in three sub-counties, with a total of 17 measurement points. The study location is at 0025S, 3110E and covers an area of 160 square kilometers. Based on the geoelectrical data, two types of aquifers were found: open aquifers at depths ranging from six to twenty-two meters and a confined aquifer at depths ranging from forty-five to eighty meters. In addition to the geoelectrical data, drilling was done with heavy equipment at an accessible point in Lwakagura village, Kabura sub-county. At the drilling point, an artesian well was obtained at a depth of eighty meters, and the water can rise to two meters above the soil surface. The artesian well is now used by residents to meet their needs for clean water and for irrigation, considering that most wells in this area have a high iron content. Keywords: artesian well, geoelectrical, lwantonde, Schlumberger
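For readers unfamiliar with the Schlumberger configuration, the sketch below shows the standard apparent-resistivity calculation, ρa = K·ΔV/I with geometric factor K = π(L² − l²)/(2l), where L = AB/2 and l = MN/2. The electrode spacings, voltages, and currents are invented illustrations, not the Lwantonde field readings.

```python
import math

def schlumberger_apparent_resistivity(half_ab_m, half_mn_m, delta_v_volts, current_amps):
    """Apparent resistivity (ohm-m) for a Schlumberger array."""
    k = math.pi * (half_ab_m**2 - half_mn_m**2) / (2 * half_mn_m)   # geometric factor (m)
    return k * delta_v_volts / current_amps

# Example sounding: expanding AB/2 while MN/2 stays at 0.5 m (made-up readings)
for half_ab, dv, i in [(5, 0.210, 0.5), (15, 0.032, 0.5), (40, 0.0045, 0.5)]:
    rho_a = schlumberger_apparent_resistivity(half_ab, 0.5, dv, i)
    print(f"AB/2 = {half_ab:3d} m -> apparent resistivity = {rho_a:7.1f} ohm-m")
```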
Procedia PDF Downloads 129
2499 Energy Options and Environmental Impacts of Carbon Dioxide Utilization Pathways
Authors: Evar C. Umeozor, Experience I. Nduagu, Ian D. Gates
Abstract:
The energy requirements of carbon dioxide utilization (CDU) technologies/processes are diverse, so also are their environmental footprints. This paper explores the energy and environmental impacts of systems for CO₂ conversion to fuels, chemicals, and materials. Energy needs of the technologies and processes deployable in CO₂ conversion systems are met by one or combinations of hydrogen (chemical), electricity, heat, and light. Likewise, the environmental footprint of any CO₂ utilization pathway depends on the systems involved. So far, evaluation of CDU systems has been constrained to particular energy source/type or a subset of the overall system needed to make CDU possible. This introduces limitations to the general understanding of the energy and environmental implications of CDU, which has led to various pitfalls in past studies. A CDU system has an energy source, CO₂ supply, and conversion units. We apply a holistic approach to consider the impacts of all components in the process, including various sources of energy, CO₂ feedstock, and conversion technologies. The electricity sources include nuclear power, renewables (wind and solar PV), gas turbine, and coal. Heat is supplied from either electricity or natural gas, and hydrogen is produced from either steam methane reforming or electrolysis. The CO₂ capture unit uses either direct air capture or post-combustion capture via amine scrubbing, where applicable, integrated configurations of the CDU system are explored. We demonstrate how the overall energy and environmental impacts of each utilization pathway are obtained by aggregating the values for all components involved. Proper accounting of the energy and emission intensities of CDU must incorporate total balances for the utilization process and differences in timescales between alternative conversion pathways. Our results highlight opportunities for the use of clean energy sources, direct air capture, and a number of promising CO₂ conversion pathways for producing methanol, ethanol, synfuel, urea, and polymer materials.Keywords: carbon dioxide utilization, processes, energy options, environmental impacts
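A minimal sketch of the whole-system accounting described above: summing the energy demand and emission intensity of each component (capture, hydrogen supply, heat/compression) per tonne of CO₂ utilised, for one assumed pathway. The component names and numbers are placeholders, not results from the study.

```python
methanol_pathway = {
    # component: (energy demand, GJ per t CO2 converted; emissions, t CO2e per t CO2 converted)
    "CO2 capture (post-combustion, amine)": (3.0, 0.10),
    "H2 supply (electrolysis, wind power)": (10.5, 0.05),
    "Synthesis heat and compression":       (1.5, 0.02),
}

total_energy = sum(e for e, _ in methanol_pathway.values())
total_emissions = sum(c for _, c in methanol_pathway.values())
net_co2_benefit = 1.0 - total_emissions          # t CO2 avoided per t CO2 converted

print(f"energy demand    : {total_energy:.1f} GJ per t CO2 utilised")
print(f"system emissions : {total_emissions:.2f} t CO2e per t CO2 utilised")
print(f"net CO2 benefit  : {net_co2_benefit:.2f} t CO2 per t CO2 converted")
```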
Procedia PDF Downloads 150
2498 Multiple Version of Roman Domination in Graphs
Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera
Abstract:
In 2004, the concept of Roman domination in graphs was introduced. This concept was initially inspired by the defensive strategy of the Roman Empire. An undefended place is a city on which no legions are stationed, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, satisfying the condition that any 0-vertex must be adjacent to, at least, a 2-vertex. Roman domination in graphs is a variant of classic domination. Clearly, the main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is to say, having minimum weight (sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies that f(v) = 2 for, at least, a vertex v which is adjacent to u. The weight of an RDF is the positive integer w(f) = ∑_{v∈V} f(v). The Roman domination number, γ_R(G), is the minimum weight among all Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we start the study of a generalization of RDFs in which we consider that any undefended place should be defended from a sudden attack by, at least, k legions. These legions can be deployed in the city or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) represents the set of active neighbours (i.e., those with a positive label) of vertex u, is called a [k]-multiple Roman dominating function and is denoted by [k]-MRDF. The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted by γ_[kR](G). First, we prove that the [k]-multiple Roman domination decision problem is NP-complete even when restricted to bipartite and chordal graphs, a problem that had been resolved for other variants and that we wanted to generalize. We know the difficulty of calculating the exact value of the [k]-MRD number, even for particular families of graphs. Here, we present several upper and lower bounds for the [k]-MRD number that permit us to estimate it with as much precision as possible. Finally, some graphs with the exact value of this parameter are characterized. Keywords: multiple roman domination function, decision problem np-complete, bounds, exact values
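The sketch below simply encodes the [k]-MRDF condition defined above and brute-forces the [k]-multiple Roman domination number of a very small graph. Exhaustive search is exponential and is shown only to make the definition concrete, not as a method from the paper.

```python
from itertools import product
import networkx as nx

def is_k_mrdf(G, f, k):
    """Check the condition f(N[u]) >= k + |AN(u)| for every vertex u with f(u) < k."""
    for u in G:
        if f[u] < k:
            closed_sum = f[u] + sum(f[v] for v in G[u])
            active_nbrs = sum(1 for v in G[u] if f[v] > 0)
            if closed_sum < k + active_nbrs:
                return False
    return True

def k_mrd_number(G, k):
    """Brute-force minimum weight over all labelings V -> {0, ..., k+1} (tiny graphs only)."""
    best = None
    for labels in product(range(k + 2), repeat=G.number_of_nodes()):
        f = dict(zip(G.nodes, labels))
        if is_k_mrdf(G, f, k) and (best is None or sum(labels) < best):
            best = sum(labels)
    return best

G = nx.cycle_graph(5)
print("gamma_R  (k=1) of C5:", k_mrd_number(G, 1))   # k = 1 recovers classic Roman domination
print("gamma_[2R]    of C5:", k_mrd_number(G, 2))
```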
Procedia PDF Downloads 1112497 Assessment of Groundwater Aquifer Impact from Artificial Lagoons and the Reuse of Wastewater in Qatar
Authors: H. Aljabiry, L. Bailey, S. Young
Abstract:
Qatar is a desert country with an average temperature of 37 °C, exceeding 40 °C during summer. Precipitation is uncommon and falls mostly in winter. Qatar depends on desalination for drinking water and on groundwater and recycled water for irrigation. Water consumption and network leakage per capita in Qatar are amongst the highest in the world; re-use of treated wastewater is extremely limited, with only 14% of treated wastewater being used for irrigation. This has led to the country disposing of unwanted water from various sources in lagoons situated around the country, causing concern over the possibility of environmental pollution. Accordingly, the hypothesis underpinning this research is that the quality and quantity of water in the lagoons are having an impact on the groundwater reservoirs in Qatar. Lagoons (n = 14) and wells (n = 55) were sampled in both summer and winter of 2018. Water, adjoining soil, and plant samples were analysed for multiple elements by Inductively Coupled Plasma Mass Spectrometry. Organic and inorganic carbon were measured (CN analyser), and the major anions were determined by ion chromatography. Salinization was seen in both the lagoons and the wells, with good correlations between Cl⁻, Na⁺, Li, SO₄, S, Sr, Ca, and Ti (p-value < 0.05). An association of the heavy metals Ni, Cu, Ag, V, Cr, Mo, and Cd was observed, attributable to contamination from anthropogenic activities such as wastewater disposal or the spread of contaminated dust. However, no individual element exceeded the Qatari regulatory limits. Moreover, gypsum saturation was observed in both the lagoon and well water samples. The lagoon and well waters were found to be of a saline, Ca²⁺-Cl⁻-SO₄²⁻ type, evidencing both gypsum dissolution and salinization in the system. Maps produced by inverse distance weighting showed an increasing level of nitrate in the groundwater in winter and decreasing chloride and sulphate levels, indicating a recharge effect after winter rain events. While E. coli and faecal bacteria were found in most of the lagoons, biological analysis of the wells still needs to be conducted to understand biological contamination from lagoon water infiltration. In conclusion, while the lagoons and wells showed similar results, more sampling is needed to understand the impact of the lagoons on the groundwater.Keywords: groundwater quality, lagoon, treated wastewater, water management, wastewater treatment, wetlands
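Inverse distance weighting, as used for the nitrate, chloride, and sulphate maps mentioned above, can be sketched in a few lines. The well coordinates and concentrations below are hypothetical placeholders, not the study's data; the sketch only illustrates the interpolation itself.

```python
# Minimal inverse-distance-weighting (IDW) interpolation sketch (hypothetical data).
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Interpolate values at xy_query from scattered xy_known points by IDW."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)  # pairwise distances
    w = 1.0 / (d + eps) ** power                                             # inverse-distance weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical well coordinates (km) and nitrate concentrations (mg/L).
wells = np.array([[0.0, 0.0], [1.5, 0.3], [0.8, 2.1], [2.2, 1.7]])
nitrate = np.array([12.0, 35.0, 20.0, 28.0])
grid = np.array([[1.0, 1.0], [2.0, 0.5]])  # locations where a map value is needed
print(idw(wells, nitrate, grid))
```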
Procedia PDF Downloads 1372496 Synthetic Bis(2-Pyridylmethyl)Amino-Chloroacetyl Chloride- Ethylenediamine-Grafted Graphene Oxide Sheets Combined with Magnetic Nanoparticles: Remove Metal Ions and Catalytic Application
Authors: Laroussi Chaabane, Amel El Ghali, Emmanuel Beyou, Mohamed Hassen V. Baouab
Abstract:
In this research, the functionalization of graphene oxide sheets with ethylenediamine (EDA) was accomplished, followed by the grafting of a bis(2-pyridylmethyl)amino group (BPED) onto the activated graphene oxide sheets in the presence of chloroacetyl chloride (CAC) and combination with magnetic nanoparticles (Fe₃O₄ NPs) to produce a magnetic graphene-based composite [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. The physicochemical properties of the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] composites were investigated by Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), X-ray diffraction (XRD), and thermogravimetric analysis (TGA). Additionally, the catalysts can be easily recovered within ten seconds by using an external magnetic field. Moreover, [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] was used for removing Cu(II) ions from aqueous solutions in a batch process. The effects of pH, contact time, and temperature on metal-ion adsorption were investigated; adsorption was only weakly dependent on ionic strength. The maximum adsorption capacity of Cu(II) on [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] at pH 6 is 3.46 mmol·g⁻¹. To examine the underlying mechanism of the adsorption process, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were fitted to the experimental kinetic data. The results showed that the pseudo-second-order equation was appropriate to describe the Cu(II) adsorption by [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. Adsorption data were further analyzed by the Langmuir, Freundlich, and Jossens adsorption approaches. Additionally, the adsorption properties of [(Go-EDA-CAC)@Fe₃O₄NPs-BPED], its reusability (more than 6 cycles), and its durability in aqueous solutions open the path to the removal of Cu(II) from water. Based on the results obtained, we report the activity of Cu(II) supported on [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] as a catalyst for the cross-coupling of symmetric alkynes.Keywords: graphene, magnetic nanoparticles, adsorption kinetics/isotherms, cross coupling
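For readers unfamiliar with the kinetic models mentioned above, the pseudo-second-order model q_t = k₂·q_e²·t / (1 + k₂·q_e·t) can be fitted to uptake data with a few lines of nonlinear regression. The time series below is invented purely to demonstrate the fitting procedure and does not reproduce the measured Cu(II) data.

```python
# Fitting the pseudo-second-order kinetic model to (hypothetical) adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q_t = k2 * qe^2 * t / (1 + k2 * qe * t); qe in mmol/g, k2 in g/(mmol*min)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# Hypothetical contact times (min) and Cu(II) uptakes (mmol/g).
t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)
qt = np.array([1.10, 1.80, 2.40, 2.90, 3.10, 3.35, 3.42])

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[3.5, 0.01])
print(f"q_e = {qe_fit:.2f} mmol/g, k2 = {k2_fit:.4f} g/(mmol*min)")
```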
Procedia PDF Downloads 1412495 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics
Authors: Varun Kumar, Chandra Shakher
Abstract:
Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), homogenizing beams for high-power lasers, serving as a critical component in Shack-Hartmann sensors, fiber-optic coupling, and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high-quality standards in the manufacturing of micro-optical components is a precondition for competitiveness in worldwide markets. Therefore, high demands are put on quality assurance. For quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. It should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of microlenses; however, these techniques require more experimental effort and are also time consuming. Digital holography (DH) overcomes the problems described above. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths through numerical reconstruction. Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a digital hologram is first recorded in the absence of the sample (microlens array) as a reference hologram. A second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitudes, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported. The sag of the microlenses agrees, within experimental limits, with the manufacturer's specification.Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy
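A minimal numerical sketch of the final step described above is shown below: converting an unwrapped phase-difference map into a thickness (surface) profile via t = Δφ·λ / (2π·(n − 1)). The wavelength, refractive index, and synthetic phase map are assumed example values, not the authors' measurement.

```python
# Converting a (hypothetical) unwrapped phase-difference map into a microlens surface profile.
import numpy as np

wavelength = 632.8e-9   # He-Ne laser wavelength in metres (assumed for illustration)
n_lens = 1.46           # assumed refractive index of the microlens material
n_air = 1.0

# Synthetic unwrapped phase-difference map (radians) standing in for the DHIM measurement.
x = np.linspace(-50e-6, 50e-6, 101)
xx, yy = np.meshgrid(x, x)
delta_phi = 40.0 * np.exp(-(xx**2 + yy**2) / (2 * (25e-6) ** 2))

# Optical path difference = (n_lens - n_air) * thickness = delta_phi * wavelength / (2*pi)
thickness = delta_phi * wavelength / (2 * np.pi * (n_lens - n_air))
sag = thickness.max() - thickness.min()
print(f"Estimated sag: {sag * 1e6:.2f} micrometres")
```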
Procedia PDF Downloads 5012494 A Furniture Industry Concept for a Sustainable Generative Design Platform Employing Robot Based Additive Manufacturing
Authors: Andrew Fox, Tao Zhang, Yuanhong Zhao, Qingping Yang
Abstract:
The furniture manufacturing industry has generally been slow to adopt the latest manufacturing technologies, historically relying heavily upon specialised conventional machinery. This approach not only requires high levels of specialist process knowledge, training, and capital investment but also suffers from significant subtractive-manufacturing waste and high logistics costs due to the requirement for centralised manufacturing, with much furniture product not recycled or reused. This paper aims to address these problems by introducing suitable digital manufacturing technologies to create step changes in furniture manufacturing design, as traditional design practices have been reported to build in 80% of a product's environmental impact. In this paper, a 3D printing robot for furniture manufacturing is reported. The 3D printing robot mainly comprises a KUKA industrial robot, an Arduino microcontroller, and a self-assembled screw-fed extruder. Compared to a traditional 3D printer, the 3D printing robot has a larger motion range and can be easily upgraded to enlarge the maximum size of the printed object. Generative design is also investigated in this paper, aiming to establish a combined design methodology that allows assessment of goals, constraints, materials, and manufacturing processes simultaneously; 'matrixing' for part amalgamation and product performance optimisation is enabled. The generative design goals of integrated waste reduction, increased manufacturing efficiency, optimised product performance, and reduced environmental impact constitute a truly lean and innovative future design methodology. In addition, there is massive future potential to leverage Single Minute Exchange of Die (SMED) theory through generative-design post-processing of geometry for robot manufacture, resulting in 'mass customised' furniture with virtually no setup requirements. These generatively designed products can be manufactured using the robot-based additive manufacturing. Essentially, the 3D printing robot is already functional; some initial goals have been achieved and are also presented in this paper.Keywords: additive manufacturing, generative design, robot, sustainability
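As a purely illustrative sketch of how such a robot-based extrusion process might be driven, the code below (not the authors' implementation) generates way-points for one circular deposition layer that could be streamed to a robot controller; the radius, layer height, and angular step are arbitrary example parameters.

```python
# Illustrative generation of way-points for circular extrusion layers (hypothetical parameters).
import math

def circular_layer(radius_mm, z_mm, step_deg=5.0):
    """Return (x, y, z) way-points tracing a circle at height z_mm."""
    points = []
    angle = 0.0
    while angle <= 360.0:
        x = radius_mm * math.cos(math.radians(angle))
        y = radius_mm * math.sin(math.radians(angle))
        points.append((x, y, z_mm))
        angle += step_deg
    return points

# Example: 10 layers of a 150 mm radius shell with a 2 mm layer height.
toolpath = []
for layer in range(10):
    toolpath.extend(circular_layer(radius_mm=150.0, z_mm=2.0 * (layer + 1)))
print(f"{len(toolpath)} way-points generated for the robot controller")
```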
Procedia PDF Downloads 1342493 Satellite Photogrammetry for DEM Generation Using Stereo Pair and Automatic Extraction of Terrain Parameters
Authors: Tridipa Biswas, Kamal Pandey
Abstract:
A Digital Elevation Model (DEM) is a simple representation of a surface in 3-dimensional space, with elevation as the third dimension alongside the planimetric X and Y rectangular coordinates. DEMs have wide applications in various fields such as disaster management, hydrology and watershed management, geomorphology, urban development, map creation, and resource management. Cartosat-1, or IRS P5 (Indian Remote Sensing Satellite), is a state-of-the-art remote sensing satellite built by ISRO and launched on May 5, 2005, which is mainly intended for cartographic applications. Cartosat-1 is equipped with two panchromatic cameras capable of simultaneously acquiring images of 2.5 m spatial resolution. One camera looks +26 degrees forward while the other looks -5 degrees backward to acquire stereoscopic imagery with a base-to-height ratio of 0.62. The time difference between acquisition of the stereo-pair images is approximately 52 seconds. These high-resolution stereo data have great potential to produce high-quality DEMs and are expected to have a significant impact on topographic mapping and watershed applications. The objective of the present study is to generate a high-resolution DEM, evaluate its quality in different elevation strata, generate an ortho-rectified image, and carry out the associated accuracy assessment from CARTOSAT-1 data using Ground Control Points (GCPs) for the Aglar watershed (Tehri-Garhwal and Dehradun districts, Uttarakhand, India). The present study reveals that the DEMs (10 m and 30 m) generated from the CARTOSAT-1 stereo pair are considerably better and more accurate when compared with existing DEMs (ASTER and CARTO DEM); likewise, terrain parameters such as slope, aspect, drainage, and watershed boundaries derived from the generated DEMs show better accuracy and results when compared with those derived from the other two (ASTER and CARTO) DEMs.Keywords: ASTER-DEM, CARTO-DEM, CARTOSAT-1, digital elevation model (DEM), ortho-rectified image, photogrammetry, RPC, stereo pair, terrain parameters
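Terrain parameters such as slope and aspect can be derived from a DEM grid by finite differences; the sketch below uses Horn's eight-neighbour method on a synthetic elevation array. The array values and the 10 m cell size are assumptions for illustration, not the Aglar-watershed data.

```python
# Slope and aspect from a DEM grid using Horn's method (synthetic data for illustration).
import numpy as np

def slope_aspect(dem, cell_size):
    """Return slope (degrees) and aspect (degrees clockwise from north) for inner cells."""
    z = dem.astype(float)
    # Horn's weighted finite differences over the 3x3 neighbourhood of each inner cell.
    dzdx = ((z[:-2, 2:] + 2 * z[1:-1, 2:] + z[2:, 2:]) -
            (z[:-2, :-2] + 2 * z[1:-1, :-2] + z[2:, :-2])) / (8 * cell_size)   # eastward gradient
    dzdy = ((z[2:, :-2] + 2 * z[2:, 1:-1] + z[2:, 2:]) -
            (z[:-2, :-2] + 2 * z[:-2, 1:-1] + z[:-2, 2:])) / (8 * cell_size)   # southward gradient
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0  # compass bearing of steepest descent
    return slope, aspect

# Synthetic 5x5 elevation grid (metres) with a 10 m cell size.
dem = np.array([[500, 502, 505, 507, 510],
                [498, 501, 504, 506, 509],
                [495, 499, 502, 505, 508],
                [493, 497, 500, 503, 506],
                [490, 495, 498, 501, 504]], dtype=float)
slope, aspect = slope_aspect(dem, cell_size=10.0)
print(slope.round(1))
print(aspect.round(1))
```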
Procedia PDF Downloads 3122492 Evaluation of a Staffing to Workload Tool in a Multispecialty Clinic Setting
Authors: Kristin Thooft
Abstract:
Increasing pressure to manage healthcare costs has resulted in shifting care towards ambulatory settings and is driving a focus on cost transparency. Few nurse staffing-to-workload models have been developed for ambulatory settings, and fewer still for multi-specialty clinics. Of the existing models, few have been evaluated against outcomes to understand any impact. This evaluation took place after the AWARD model for nurse staffing to workload was implemented in a multi-specialty clinic at a regional healthcare system in the Midwest. The multi-specialty clinic houses 26 medical and surgical specialty practices. The AWARD model was implemented in two specialty practices in October 2020. Donabedian's Structure-Process-Outcome (SPO) model was used to evaluate outcomes based on changes to the structure and processes of care provided. The AWARD model defined and quantified the processes and recommended changes in the structure of day-to-day nurse staffing. Cost of care per patient visit, total visits, and total nurse-performed visits were used as structural and process measures, influencing the outcomes of cost of care and access to care. Independent t-tests were used to compare the differences in variables pre- and post-implementation. The SPO model was useful as an evaluation tool, providing a simple framework that is understood by a diverse care team. No statistically significant changes in the cost of care, total visits, or nurse visits were observed, but there were differences: cost of care increased and access to care decreased. Two weeks into the post-implementation period, the multi-specialty clinic paused all non-critical patient visits due to a second surge of the COVID-19 pandemic, and clinic nursing staff were re-allocated to support the inpatient areas. This negatively impacted the ability of the Nurse Manager to fully utilize the AWARD model to plan daily staffing. The SPO framework could be used for the ongoing assessment of nurse staffing performance, and additional variables could be measured to give a complete picture of the impact of nurse staffing. Going forward, there must be a continued focus on the outcomes of care and the value of nursing.Keywords: ambulatory, clinic, evaluation, outcomes, staffing, staffing model, staffing to workload
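The pre/post comparison described above rests on independent-samples t-tests; a minimal sketch of that test is shown below. The per-visit cost figures are invented solely to illustrate the procedure and are not the clinic's data; the Welch variant is used here as a common default when equal variances are not assumed.

```python
# Independent-samples t-test comparing a variable before and after implementation
# (the cost figures are invented solely to illustrate the procedure).
from scipy import stats

cost_pre = [112.0, 98.5, 105.2, 120.1, 99.8, 110.3, 108.7]     # cost of care per visit, pre-implementation
cost_post = [118.4, 104.9, 109.5, 125.0, 103.2, 115.8, 112.6]  # cost of care per visit, post-implementation

t_stat, p_value = stats.ttest_ind(cost_pre, cost_post, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```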
Procedia PDF Downloads 176