Search results for: exact analytical calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4065

555 Comparative Study of Outcome of Patients with Wilms Tumor Treated with Upfront Chemotherapy and Upfront Surgery in Alexandria University Hospitals

Authors: Golson Mohamed, Yasmine Gamasy, Khaled EL-Khatib, Anas Al-Natour, Shady Fadel, Haytham Rashwan, Haytham Badawy, Nadia Farghaly

Abstract:

Introduction: Wilms tumor is the most common malignant renal tumor in children. Much progress has been made in the management of patients with this malignancy over the last three decades. Today, treatments are based on several trials and studies conducted by the International Society of Pediatric Oncology (SIOP) in Europe and the National Wilms Tumor Study Group (NWTS) in the USA. It is necessary to understand why either of the protocols is followed: NWTS, which follows the upfront surgery principle, or SIOP, which follows the upfront chemotherapy principle in all stages of the disease. Objective: The aim is to assess the outcome in patients treated with preoperative chemotherapy and patients treated with upfront surgery and to compare their effect on overall survival. Study design: To decide which protocol to follow, a retrospective survey was carried out on the records of patients aged 1 day to 18 years suffering from Wilms tumor who were admitted to the Alexandria University Hospital pediatric oncology, pediatric urology, and pediatric surgery departments from 2010 to 2015. The transfer sheet was designed and edited following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow. Data were fed to the computer and analyzed using the IBM SPSS software package version 20.0. Qualitative data were described using number and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation, and median. Comparison between groups regarding categorical variables was tested using the chi-square test; when more than 20% of the cells had an expected count less than 5, correction was made using Fisher's exact test or the Monte Carlo correction. The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov, Shapiro-Wilk, and D'Agostino tests; if these revealed a normal distribution, parametric tests were applied, otherwise non-parametric tests were used. For normally distributed data, comparison between two independent populations was done using the independent t-test; for non-normally distributed data, the Mann-Whitney test was used. Significance of the obtained results was judged at the 5% level. Results: A statistically significant difference in survival was observed between the two studied groups, favoring upfront chemotherapy (86.4%) over upfront surgery (59.3%), P = 0.009. As regards complications, 20 cases (74.1%) out of 27 were complicated in the upfront surgery group, while 30 cases (68.2%) out of 44 had complications in the upfront chemotherapy group. The incidence of intraoperative complication (rupture) was also lower in the upfront chemotherapy group than in the upfront surgery group. Conclusion: Upfront chemotherapy has superiority over upfront surgery, as patients who started with upfront chemotherapy showed a higher survival rate, a lower complication rate, less need for radiotherapy, and a lower recurrence rate.
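As a minimal illustration of the test-selection logic described in the abstract (not the authors' actual SPSS workflow), the following Python sketch applies the same decision rules with SciPy; the group arrays and ages are hypothetical stand-ins built to match the reported percentages.

```python
import numpy as np
from scipy import stats

# Hypothetical survival outcomes (1 = survived, 0 = died) for the two groups,
# sized to reproduce the reported 86.4% (n=44) and 59.3% (n=27) survival rates.
upfront_chemo = np.array([1] * 38 + [0] * 6)
upfront_surgery = np.array([1] * 16 + [0] * 11)

# Categorical comparison: chi-square, with Fisher's exact test when more than
# 20% of the expected cell counts fall below 5 (the rule quoted in the abstract).
table = np.array([
    [upfront_chemo.sum(), len(upfront_chemo) - upfront_chemo.sum()],
    [upfront_surgery.sum(), len(upfront_surgery) - upfront_surgery.sum()],
])
chi2, p, dof, expected = stats.chi2_contingency(table)
if (expected < 5).mean() > 0.20:
    _, p = stats.fisher_exact(table)
print(f"categorical comparison: p = {p:.3f}")

# Quantitative comparison: test normality first, then choose t-test or Mann-Whitney.
def compare_groups(a, b, alpha=0.05):
    normal = all(stats.shapiro(x).pvalue > alpha for x in (a, b))
    if normal:
        return stats.ttest_ind(a, b).pvalue      # parametric
    return stats.mannwhitneyu(a, b).pvalue       # non-parametric

age_chemo = np.random.default_rng(0).normal(4.0, 1.5, 44)    # hypothetical ages (years)
age_surgery = np.random.default_rng(1).normal(4.3, 1.6, 27)
print(f"quantitative comparison: p = {compare_groups(age_chemo, age_surgery):.3f}")
```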

Keywords: Wilms tumor, renal tumor, chemotherapy, surgery

Procedia PDF Downloads 303
554 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite difference time domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of temporal distributions of protein translocation dwell-times and the impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 52
553 Documenting the 15th Century Prints with RTI

Authors: Peter Fornaro, Lothar Schmitt

Abstract:

The Digital Humanities Lab and the Institute of Art History at the University of Basel are collaborating in the SNSF research project ‘Digital Materiality’. Its goal is to develop and enhance existing methods for the digital reproduction of cultural heritage objects in order to support art historical research. One part of the project focuses on the visualization of a small eye-catching group of early prints that are noteworthy for their subtle reliefs and glossy surfaces. Additionally, this group of objects – known as ‘paste prints’ – is characterized by its fragile state of preservation. Because of the brittle substances that were used for their production, most paste prints are heavily damaged and thus very hard to examine. These specific material properties make a photographic reproduction extremely difficult. To obtain better results we are working with Reflectance Transformation Imaging (RTI), a computational photographic method that is already used in archaeological and cultural heritage research. This technique allows documenting how three-dimensional surfaces respond to changing lighting situations. Our first results show that RTI can capture the material properties of paste prints and their current state of preservation more accurately than conventional photographs, although there are limitations with glossy surfaces because the mathematical models that are included in RTI are kept simple in order to keep the software robust and easy to use. To improve the method, we are currently developing tools for a more detailed analysis and simulation of the reflectance behavior. An enhanced analytical model for the representation and visualization of gloss will increase the significance of digital representations of cultural heritage objects. For collaborative efforts, we are working on a web-based viewer application for RTI images based on WebGL in order to make acquired data accessible to a broader international research community. At the ICDH Conference, we would like to present unpublished results of our work and discuss the implications of our concept for art history, computational photography and heritage science.

Keywords: art history, computational photography, paste prints, reflectance transformation imaging

Procedia PDF Downloads 263
552 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change the thinking, effort, and planning needed to produce a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that our evaluation and planning can be done properly and efficiently from day one. The challenging part in this field is that there are not enough data and no straightforward well testing that would make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, available well testing, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil followed by a gas injection period for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 81
551 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consisted of a simplified centrifugal blood pump model that contains fluid flow features as they are commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds Stress Model, RSM) and LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture applying the highly parallel flow solver Code_Saturne, typically using 32768 or more processors in parallel. Visualisations were performed by means of PARAVIEW. All turbulence models and all six flow situations could be successfully analysed and validated against analytical considerations and by comparison with other databases. The results showed that an RSM represents an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 369
550 Air Pollutants Exposure and Blood High Sensitivity C-Reactive Protein Concentrations in Healthy Pregnant Women

Authors: Gwo-Hwa Wan, Tai-Ho Hung, Fen-Fang Chung, Wan-Ying Lee, Hui-Ching Yang

Abstract:

Air pollutant exposure results in elevated concentrations of oxidative stress and inflammatory biomarkers in general populations. Increased concentrations of inflammatory biomarkers in pregnant women have been associated with preterm labor and low birth weight. To our best knowledge, the associations between air pollutant exposure and inflammation in pregnant women and fetuses are unknown, as are their effects on fetal growth. This study aimed to evaluate the influence of outdoor air pollutants in northern Taiwan on the concentration of an inflammatory biomarker (high sensitivity C-reactive protein, hs-CRP) in the blood of healthy pregnant women and how the biomarker impacts fetal growth. In this study, 38 healthy pregnant women who were in their first trimester and lived in the northern Taiwan area were recruited from the Taipei Chang Gung Memorial Hospital. Personal characteristics and prenatal examination data (e.g., blood pressure) were obtained from the recruited subjects. The concentrations of the inflammatory mediator hs-CRP in the blood of the healthy pregnant women were analyzed. Additionally, hourly air pollutant (PM10, SO2, NO2, O3, CO) concentrations were obtained from air quality monitoring stations in the Taipei area established by the Taiwan Environmental Protection Administration. Lag 0 is defined as the exposure to air pollutants on the day of blood withdrawal, and lag 01 as the average exposure one day before and on the day of blood withdrawal. The statistical analyses were conducted using SPSS software version 22.0 (SPSS, Inc., Chicago, IL, USA). The results indicate that the healthy pregnant women were aged between 28 and 42 years. The body mass index before pregnancy averaged 21.51 (sd = 2.51) kg/m2. Around 90% of the pregnant women had never smoked, and 28.95% of them had allergic diseases. Approximately 84% and 5.26% of the pregnant women worked in indoor and outdoor environments, respectively. The mean hematocrit level of the pregnant women was 37.10%, and the hemoglobin levels ranged between 10.1 and 14.7 g/dL with a mean of 12.47 g/dL. The blood hs-CRP concentrations of the healthy pregnant women in the first trimester ranged between 0.32 and 32.5 mg/L with a mean of 2.83 (sd = 5.69) mg/L. The blood hs-CRP concentrations were positively associated with ozone concentrations at lag 0-14 (r = 0.481, p = 0.017) in healthy pregnant women. Significant lag effects were identified for ozone at lag 0-14, with a positive excess concentration of blood hs-CRP.
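The lag-exposure averaging and the correlation reported above can be sketched as follows; this is an illustrative reconstruction, not the authors' SPSS script, and the daily ozone series and hs-CRP values are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# Hypothetical daily ozone means (ppb) for the 15 days up to each blood draw,
# one row per pregnant woman, and hypothetical hs-CRP values (mg/L).
n_subjects = 38
daily_o3 = rng.normal(30, 8, size=(n_subjects, 15))
hs_crp = rng.lognormal(mean=0.5, sigma=0.9, size=n_subjects)

# Lag 0    = exposure on the day of blood withdrawal (last column).
# Lag 0-14 = average exposure over the day of withdrawal and the 14 preceding days.
lag0 = daily_o3[:, -1]
lag0_14 = daily_o3.mean(axis=1)

for name, exposure in [("lag 0", lag0), ("lag 0-14", lag0_14)]:
    rho, p = spearmanr(exposure, hs_crp)
    print(f"ozone {name}: Spearman r = {rho:.3f}, p = {p:.3f}")
```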

Keywords: air pollutant, hs-CRP, pregnant woman, ozone, first trimester

Procedia PDF Downloads 238
549 Promoting Effective Institutional Governance in Cameroon Higher Education: A Governance Equalizer Perspective

Authors: Jean Patrick Mve

Abstract:

The increasing quest for efficiency, accountability, and transparency has led to the implementation of massive governance reforms among higher education systems worldwide. This is causing many changes in the governance of higher education institutions. Governments over the world are trying to adopt business-like organizational strategies to enhance the performance of higher education institutions. This study explores the changes that have taken place in the Cameroonian higher education sector. It also attempts to draw a picture of the likely future of higher education governance and the actions to be taken for the promotion of institutional effectiveness among higher education institutions. The “governance equalizer” is used as an analytical tool to this end. It covers the five dimensions of the New Public Management (NPM), namely: state regulation, stakeholder guidance, academic self-governance, managerial self-governance, and competition. Qualitative data are used, including semi-structured interviews with key informants at the organizational level and other academic stakeholders, documents and archival data from the university and from the ministry of higher education. It has been found that state regulation among higher education institutions in Cameroon is excessively high, causing the institutional autonomy to be very low, especially at the level of financial management, staffing and promotion, and other internal administrative affairs; at the level of stakeholder guidance there is a higher degree of stakeholders consideration in the academic and research activities among universities, though the government’s interest to keep its hands in most management activities is still high; academic self-governance is also very weak as the assignment of academics is done more on the basis of political considerations than competence; there is no real managerial self-governance among higher education institutions due to the lack of institutional capacity and insufficient autonomy at the level of decision making; there is a plan to promote competition among universities but a real competitive environment is not yet put into place. The study concludes that the government’s policy should make state control more relaxed and concentrate on steering and supervision. As well, real institutional autonomy, professional competence building for top management and stakeholder participation should be considered to guarantee competition and institutional effectiveness.

Keywords: Cameroon higher education, effective institutional governance, governance equalizer, institutional autonomy, institutional effectiveness

Procedia PDF Downloads 129
548 Use of Coconut Shell as a Replacement of Normal Aggregates in Rigid Pavements

Authors: Prakash Parasivamurthy, Vivek Rama Das, Ravikant Talluri, Veena Jawali

Abstract:

India ranks among the top three coconut producers, together with the Philippines and Indonesia. About 92% of the total production in the country comes from four southern states, especially Kerala (45.22%), Tamil Nadu (26.56%), Karnataka (10.85%), and Andhra Pradesh (8.93%). Other states, such as Goa, Maharashtra, Odisha, West Bengal, and those in the northeast (Tripura and Assam), account for the remaining 8.44%. The use of coconut shell as coarse aggregate in concrete has never been a usual practice in the industry, particularly in areas where lightweight concrete is required for non-load-bearing walls, non-structural floors, and strip footings. The high cost of conventional building materials is a major factor affecting construction delivery in India. In India, where abundant agricultural and industrial wastes are discharged, these wastes can be used as potential or replacement materials in the construction industry. This has a double advantage, viz., a reduction in the cost of construction materials and a means of disposing of the wastes. Therefore, an attempt has been made in this study to utilize coconut shell (CS) as coarse aggregate in rigid pavement. The study began with the characterization of materials through basic material testing. The cast moulds were cured, and tests were conducted on the hardened concrete. The procedure continued with the determination of fck (characteristic strength), E (modulus of elasticity), and µ (Poisson's ratio) from the test results obtained. For the analytical studies, the rigid pavement was modeled with the KENPAVE software, finite element software developed specifically for road pavements, and the design of the rigid pavement was simultaneously carried out to Indian standards. Results show that the physical properties of CSAC (Coconut Shell Aggregate Concrete) with 10% replacement give better results. The flexural strength of CSAC is found to increase by 4.25% as compared to the control concrete. About 13% reduction in pavement thickness is observed using the optimum coconut shell content.

Keywords: coconut shell, rigid pavement, modulus of elasticity, Poisson's ratio

Procedia PDF Downloads 222
547 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with the sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. The model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
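A minimal sketch of the weighted linear combination step is given below, assuming illustrative factor rasters and AHP weights rather than the study's actual data; simple quantile breaks are used as a stand-in for Jenks natural breaks, and the factor names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized factor rasters (values in [0, 1]), e.g. frequency-ratio
# class weights already mapped onto a grid for each influencing factor.
shape = (200, 200)
factors = {name: rng.random(shape) for name in
           ["slope", "depth_to_bedrock", "dist_to_fault", "dist_to_stream",
            "soil_thickness", "land_use", "geology", "dist_to_well", "curvature"]}

# Hypothetical AHP-derived weights (must sum to 1).
weights = {"slope": 0.20, "depth_to_bedrock": 0.18, "dist_to_fault": 0.14,
           "dist_to_stream": 0.12, "soil_thickness": 0.10, "land_use": 0.09,
           "geology": 0.08, "dist_to_well": 0.05, "curvature": 0.04}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Weighted linear combination -> sinkhole susceptibility index (SSI).
ssi = sum(weights[k] * factors[k] for k in weights)

# Classify into five susceptibility zones (quantile breaks as a simple proxy
# for the Jenks natural breaks classifier).
breaks = np.quantile(ssi, [0.2, 0.4, 0.6, 0.8])
zones = np.digitize(ssi, breaks)  # 0 = very low ... 4 = very high
print("pixels per zone:", np.bincount(zones.ravel()))
```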

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 54
546 Empirical Testing of Hofstede’s Measures of National Culture: A Study in Four Countries

Authors: Nebojša Janićijević

Abstract:

At the end of the 1970s, the Dutch researcher Geert Hofstede conducted an enormous empirical study of the differences between national cultures. In this research, he identified four dimensions along which national cultures differ and determined an index for every dimension of national culture for each country that took part in the research. The index showed a country's position on the continuum between the two extreme poles of each cultural dimension. Since more than 40 years have passed since Hofstede's research, there is doubt whether, due to the changes in national cultures during that period, his indices are still a good basis for research. The aim of this research is to check the validity of Hofstede's indices of national culture. The empirical study, conducted in the branches of a multinational company in Serbia, France, the Netherlands, and Denmark, aimed to determine whether Hofstede's measures of national culture dimensions are still valid. The sample consisted of 155 employees of one multinational company, with 40 employees from each of three countries and 35 employees from Serbia. The questionnaire that analyzed the positions of national cultures according to Hofstede's four dimensions was formulated on the basis of the initial Hofstede questionnaire, but it was much shorter and significantly simplified compared to the original. Such an instrument had already been used in earlier research. A statistical analysis of the obtained questionnaire results was done by a simple calculation of the frequency of the provided answers. Due to the limitations in methodology, sample size, instrument, and applied statistical methods, the aim of the study was not to explicitly test the accuracy of Hofstede's indices but to shed light on the general position of the four observed countries in the national culture dimensions and their mutual relations. The study results indicate that the position of the four observed national cultures (Serbia, France, the Netherlands, and Denmark) is precisely the same in three out of four dimensions as Hofstede described in his research. Furthermore, the differences between national cultures and the relative relations between their positions in three dimensions of national culture correspond to Hofstede's results. The only deviation from Hofstede's results concerns the masculinity-femininity dimension. In addition, the study revealed that the degree of power distance is a determinant when choosing leadership style. It was found that national cultures with high power distance, like Serbia and France, favor one of the two authoritative leadership styles. On the other hand, countries with low power distance, such as the Netherlands and Denmark, prefer one of the forms of democratic leadership styles. This confirms Hofstede's premises about the impact of power distance on leadership style. The key contribution of the study is that Hofstede's national culture indices are still a reliable tool for measuring the positions of countries in national culture dimensions, and they can be applied in cross-cultural research in management. That was at least the case with the four observed countries: Serbia, France, the Netherlands, and Denmark.

Keywords: national culture, leadership styles, power distance, collectivism, masculinity, uncertainty avoidance

Procedia PDF Downloads 51
545 Extraction and Quantification of Triclosan in Wastewater Samples Using Molecularly Imprinted Membrane Adsorbent

Authors: Siyabonga Aubrey Mhlongo, Linda Lunga Sibali, Phumlane Selby Mdluli, Peter Papoh Ndibewu, Kholofelo Clifford Malematja

Abstract:

This paper reports on the successful extraction and quantification of an antibacterial and antifungal agent present in some consumer products (triclosan: C₁₂H₇Cl₃O₂), generally found in wastewater or effluents, using a molecularly imprinted membrane adsorbent (MIMs), followed by quantification by high-performance liquid chromatography (HPLC). Triclosan is an antibacterial and antifungal agent present in some consumer products like toothpaste, soaps, detergents, toys, and surgical cleaning treatments. The MIMs was fabricated using polyvinylidene fluoride (PVDF) polymer with selective micro composite particles known as molecularly imprinted polymers (MIPs) via a phase inversion by immersion precipitation technique. This resulted in improved hydrophilicity and mechanical behaviour of the membranes. Wastewater samples were collected from the central effluent treatment plant of the Umbogintwini Industrial Complex (UIC) (south coast of Durban, KwaZulu-Natal, South Africa) and pre-treated before analysis. Experimental parameters such as sample size, contact time, and stirring speed were optimised. The resultant MIMs had an adsorption efficiency of 97% for TCS, compared with 92% for NIMs and 88% for the bare membrane. The analytical method utilized in this study had limits of detection (LoD) and quantification (LoQ) of 0.22 and 0.71 µg L-1 in wastewater effluent, respectively. The percentage recovery for the effluent samples was 68%. The detection of TCS was monitored for 10 consecutive days; the highest TCS concentration detected in the treated wastewater was 55.0 μg/L on day 9 of the monitored days, while the lowest detected was 6.0 μg/L. As the concentrations of analyte found in the effluent water samples were not so diverse, this study suggests that MIMs could be the best potential adsorbent for the development and continuous progress of membrane technology and environmental sciences, lending its capability to desalination.

Keywords: molecularly imprinted membrane, triclosan, phase inversion, wastewater

Procedia PDF Downloads 101
544 Development and Validation of a Liquid Chromatographic Method for the Quantification of Related Substance in Gentamicin Drug Substances

Authors: Sofiqul Islam, V. Murugan, Prema Kumari, Hari

Abstract:

Gentamicin is a broad-spectrum, water-soluble aminoglycoside antibiotic produced by fermentation of the microorganism Micromonospora purpurea. It is widely used for the treatment of infections caused by both gram-positive and gram-negative bacteria. Gentamicin consists of a mixture of aminoglycoside components, namely C1, C1a, C2a, and C2. The molecular structures of gentamicin and its related substances show a lack of chromophore groups in the molecule, due to which the detection of such components is quite critical and challenging. In this study, a simple reversed phase high performance liquid chromatographic (RP-HPLC) method using an ultraviolet (UV) detector was developed and validated for quantification of the related substances present in gentamicin drug substances. Separation was achieved using a Thermo Scientific Hypersil Gold analytical column (150 x 4.6 mm, 5 µm particle size) with isocratic elution composed of methanol: water: glacial acetic acid: sodium hexane sulfonate in the ratio 70:25:5:3 % v/v/v/w as the mobile phase, at a flow rate of 0.5 mL/min, a column temperature of 30 °C, and a detection wavelength of 330 nm. The four components of gentamicin, namely C1, C1a, C2a, and C2, were well separated along with the related substances present in gentamicin. The limit of quantification (LOQ) was found to be 0.0075 mg/mL. The accuracy of the method was satisfactory, with recoveries between 95-105% for the related substances. The correlation coefficient (≥ 0.995) shows a linear response against concentration over the range from the limit of quantification (LOQ). Precision studies showed percentage relative standard deviation (RSD) values of less than 5% for the related substances. The method was validated in accordance with the International Conference on Harmonisation (ICH) guidelines for parameters such as system suitability, specificity, precision, linearity, accuracy, limit of quantification, and robustness. The proposed method is simple and suitable for the quantification of related substances in the routine analysis of gentamicin formulations.
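For context, the ICH Q2 approach commonly used to estimate LOD and LOQ from a calibration curve (standard deviation of the response divided by the slope) can be written as below; the calibration points are placeholders, not values from this study, and the abstract does not state which estimation approach the authors actually used.

```python
import numpy as np

# Hypothetical calibration data: concentration (mg/mL) vs. peak area.
conc = np.array([0.0075, 0.015, 0.030, 0.060, 0.120])
area = np.array([1520, 3010, 6080, 12150, 24180])

# Linear least-squares fit: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)  # standard deviation of the response

# ICH Q2(R1) estimates based on the calibration curve.
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
r = np.corrcoef(conc, area)[0, 1]
print(f"LOD ~ {lod:.4f} mg/mL, LOQ ~ {loq:.4f} mg/mL, r = {r:.4f}")
```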

Keywords: reversed phase-high performance liquid chromatographic (RP-HPLC), high performance liquid chromatography, gentamicin, isocratic, ultraviolet

Procedia PDF Downloads 144
543 The Relationship between Functional Movement Screening Test and Prevalence of Musculoskeletal Disorders in Emergency Nurse and Emergency Medical Services Staff Shiraz, Iran, 2017

Authors: Akram Sadat Jafari Roodbandi, Alireza Choobineh, Nazanin Hosseini, Vafa Feyzi

Abstract:

Introduction: Physical fitness and optimal functional movement are essential for performing job tasks efficiently without fatigue and injury. Functional Movement Screening (FMS) tests are used in the screening of athletes and military forces. Nurses and emergency medical staff are obliged to perform many physical activities, such as transporting patients and CPR, due to the nature of their jobs. This study aimed to assess the relationship between the FMS test score and the prevalence of musculoskeletal disorders (MSDs) in emergency nurses and emergency medical services (EMS) staff. Methods: 134 male and female emergency nurses and EMS technicians participated in this cross-sectional, descriptive-analytical study. After a video tutorial and practical training on how to do the FMS test, the participants carried out the test while wearing comfortable clothes. The final score of the FMS test ranges from 0 to 21; a score of 14 is considered weak functional movement according to the FMS test protocol. In addition to a demographic data questionnaire, the Nordic musculoskeletal questionnaire was also completed for each participant. SPSS software was used for statistical analysis with a significance level of 0.05. Results: In total, 49.3% (n=66) of the subjects were female. The mean age and work experience of the subjects were 35.3 ± 8.7 and 11.4 ± 7.7 years, respectively. The highest prevalence of MSDs was observed at the knee and lower back, with 32.8% (n=44) and 23.1% (n=31), respectively. 26 (19.4%) health workers had an FMS test score of 14 or less. The results of the Spearman correlation test showed that the FMS test score was significantly associated with MSDs (r=-0.419, p < 0.0001), meaning that MSDs increased as the FMS test score decreased. Age, sex, and MSDs were the remaining significant factors in the linear regression model with the FMS test score as the dependent variable. Conclusion: The FMS test seems to be a usable screening tool in pre-employment and periodic medical tests for occupations that require physical fitness and optimal functional movement.

Keywords: functional movement, musculoskeletal disorders, health care worker, screening test

Procedia PDF Downloads 112
542 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Frequency ratio (FR) and analytical hierarchy process (AHP) methods have been developed based on past landslide failure points to map landslide susceptibility, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consists of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the top leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly among the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that different places should take different safeguards to reduce or prevent serious damage from landslide events.
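The model comparison described above can be sketched with scikit-learn as follows; the feature matrix of conditioning factors and the inventory labels are synthetic placeholders, and hyperparameters are left at library defaults rather than reproducing the study's setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(7)
# Synthetic data: 14 landslide conditioning factors per cell, 1 = landslide point.
X = rng.normal(size=(2000, 14))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(f"{name}: F1 = {f1_score(y_te, pred):.2f}, AUC = {roc_auc_score(y_te, proba):.2f}")
```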

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 50
541 Production of Pre-Reduction of Iron Ore Nuggets with Lesser Sulphur Intake by Devolatisation of Boiler Grade Coal

Authors: Chanchal Biswas, Anrin Bhattacharyya, Gopes Chandra Das, Mahua Ghosh Chaudhuri, Rajib Dey

Abstract:

Boiler coals with low fixed carbon and higher ash content have always challenged metallurgists to develop a suitable method for their utilization. In the present study, an attempt is made to establish an energy-effective method for the reduction of iron ore fines in the form of nuggets by using syngas. By devolatisation (expulsion of volatile matter by applying heat) of boiler coal, a gaseous product enriched with reducing agents like CO, CO2, H2, and CH4 is generated. The iron ore nuggets are reduced by this syngas. For that reason, there is no direct contact between the iron ore nuggets and the coal ash, which helps to minimize the sulphur intake of the reduced nuggets. A laboratory-scale devolatisation furnace designed with a reduction facility was evaluated after in-depth studies and exhaustive experimentation, including thermo-gravimetric (TG-DTA) analysis to find the volatile fraction present in boiler grade coal, gas chromatography (GC) to determine the syngas composition at different temperatures, and furnace temperature gradient measurements to minimize the furnace cost by applying one heating coil. The nuggets were reduced in the devolatisation furnace at three different temperatures and three different times. The pre-reduced nuggets were subjected to analytical weight loss calculations to evaluate the extent of reduction. The phase and surface morphology of the pre-reduced samples were characterized using X-ray diffractometry (XRD), energy dispersive X-ray spectrometry (EDX), scanning electron microscopy (SEM), a carbon-sulphur analyzer, and chemical analysis. The degree of metallization of the reduced nuggets is 78.9% using boiler grade coal. The pre-reduced nuggets with lower sulphur content could be used in the blast furnace as raw material or coolant, which would reduce the furnace's consumption of high-quality coke owing to their pre-reduced character. They can also be used as coolant in the Basic Oxygen Furnace (BOF).

Keywords: alternative ironmaking, coal gasification, extent of reduction, nugget making, syngas based DRI, solid state reduction

Procedia PDF Downloads 247
550 Improved Functions for Runoff Coefficients and Smart Design of Ditches and Biofilters for Effective Flow Detention

Authors: Thomas Larm, Anna Wahlsten

Abstract:

An international literature study has been carried out to compare commonly used methods for the dimensioning of transport systems and stormwater facilities for flow detention. The focus of the literature study regarding the calculation of design flow and detention has been the widely used Rational Method and its underlying parameters. The impacts of chosen design parameters such as return time, rain intensity, runoff coefficient, and climate factor have been studied. The parameters used in the calculations have been analyzed regarding how they can be calculated and within what limits they can be used. Data used within different countries have been specified, e.g., recommended rainfall return times, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the runoff coefficient is the most uncertain parameter and the one that most affects the calculated flow and required detention volume. Proposals have been developed for new runoff coefficients, including a new method with equations for calculating runoff coefficients as functions of return time (years) and rain intensity (l/s/ha), respectively. It is suggested that the use of the Rational Method need not be limited to a specific catchment size, contrary to what many design manuals recommend. The proposed relationships between return time or rain intensity and runoff coefficients need further investigation, including the quantification of uncertainties. Examples of parameters that have not yet been considered are the influence on the runoff coefficients of different dimensioning rain durations and of the degree of water saturation of green areas, which will be investigated further. The influence of climate effects and design rain on the dimensioning of the stormwater facilities grassed ditches and biofilters (bioretention systems) has been studied, focusing on flow detention capacity. We have investigated how the calculated runoff coefficients, accounting for the climate effect and the influence of an increased return time, affect the inflow to and dimensioning of the stormwater facilities. We have developed a smart design of ditches and biofilters that results in both high treatment and flow detention effects and compared these with the effects of dry and wet ponds. Studies of biofilters have generally focused on the treatment of pollutants; their effect on flow volume and how their flow detention capability can be improved is only rarely studied. For both the new type of stormwater ditches and the biofilters, it is necessary to be able to simulate their performance in a model under larger design rains and a future climate, as these conditions cannot be tested in the field. The StormTac Web stormwater model has been used in case studies. The results showed that the new smart design of ditches and biofilters had a flow detention capacity similar to that of dry and wet ponds for the same facility area.
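As context for the proposal above, the Rational Method computes the design flow as Q = C·i·A. A minimal sketch is shown below with a hypothetical runoff-coefficient function of return period and rain intensity; the actual equations proposed in the study are not given in the abstract, so the functional form and all numbers here are illustrative assumptions only.

```python
def runoff_coefficient(base_c, return_period_yr, rain_intensity_l_s_ha):
    """Hypothetical illustration: C grows mildly with return period and intensity,
    capped at 1.0. This is NOT the equation proposed in the study."""
    c = base_c * (1.0 + 0.05 * (return_period_yr / 10.0)) \
               * (1.0 + 0.0005 * rain_intensity_l_s_ha)
    return min(c, 1.0)

def rational_flow(c, intensity_l_s_ha, area_ha, climate_factor=1.0):
    """Rational Method: Q = C * i * A, with an optional climate factor on intensity."""
    return c * (intensity_l_s_ha * climate_factor) * area_ha  # flow in l/s

c = runoff_coefficient(base_c=0.45, return_period_yr=20, rain_intensity_l_s_ha=150)
q = rational_flow(c, intensity_l_s_ha=150, area_ha=12.0, climate_factor=1.25)
print(f"C = {c:.2f}, design flow Q ~ {q:.0f} l/s")
```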

Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch

Procedia PDF Downloads 69
539 The Role of Institutional Quality and Institutional Quality Distance on Trade: The Case of Agricultural Trade within the Southern African Development Community Region

Authors: Kgolagano Mpejane

Abstract:

The study applies a New Institutional Economics (NIE) analytical framework to trade in developing economies by assessing the impacts of institutional quality and institutional quality distance on agricultural trade, using panel data for 15 Southern African Development Community (SADC) countries over the years 1991-2010. The influence of institutions on agricultural trade has not been accorded the necessary attention in the literature, particularly for developing economies. Therefore, the paper empirically tests the gravity model of international trade by measuring the impact of political, economic, and legal institutions on intra-SADC agricultural trade. The gravity model is noted for its explanatory power and strong theoretical foundation. However, the model has statistical shortcomings in dealing with zero trade values and heteroscedastic residuals, leading to biased results. Therefore, this study employs a two-stage Heckman selection model with a Probit equation to estimate the influence of institutions on agricultural trade. The selection stage includes the inverse Mills ratio to account for the selection bias of the gravity model. The Heckman model accounts for zero trade values and is robust in the presence of heteroscedasticity. The empirical results of the study support the NIE premise that institutions matter in trade. The results demonstrate that institutions determine bilateral agricultural trade on different margins, with political institutions having a positive and significant influence on bilateral agricultural trade flows within the SADC region. Legal and economic institutions have significant and negative effects on SADC trade. Furthermore, the results of this study confirm that institutional quality distance influences agricultural trade. Legal and political institutional distance have a positive and significant influence on bilateral agricultural trade, while the influence of economic institutional quality distance is negative and insignificant. The results imply that non-trade barriers, in the form of institutional quality and institutional quality distance, are significant factors limiting intra-SADC agricultural trade. Therefore, gains from intra-SADC agricultural trade can be attained through the improvement of institutions within the region.
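A stripped-down sketch of the two-stage Heckman procedure described above (Probit selection equation, inverse Mills ratio, then the outcome equation) is given below using statsmodels; the gravity-style covariates and trade flows are synthetic stand-ins, not the SADC panel.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1000

# Synthetic gravity-style covariates: log GDP product, log distance, institution index.
X = np.column_stack([rng.normal(10, 1, n), rng.normal(7, 0.5, n), rng.normal(0, 1, n)])
Xc = sm.add_constant(X)

# Stage 1: Probit on whether a positive trade flow is observed at all.
trade_observed = (Xc @ np.array([1.0, 0.3, -0.5, 0.4]) + rng.normal(size=n) > 0).astype(int)
probit = sm.Probit(trade_observed, Xc).fit(disp=False)
xb = Xc @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)  # inverse Mills ratio

# Stage 2: outcome equation (log trade) on the selected sample, adding the IMR.
log_trade = 2 + Xc @ np.array([0.0, 0.8, -1.1, 0.6]) + rng.normal(size=n)
sel = trade_observed == 1
X2 = sm.add_constant(np.column_stack([X[sel], imr[sel]]))
ols = sm.OLS(log_trade[sel], X2).fit()
print(ols.params)  # last coefficient is on the inverse Mills ratio
```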

Keywords: agricultural trade, institutions, gravity model, SADC

Procedia PDF Downloads 135
538 Design Forms Urban Space

Authors: Amir Shouri, Fereshteh Tabe

Abstract:

Thoughtful and sequential design strategies will shape the future of human lifestyles. Design as a product, whether small furniture on a sidewalk or a multi-story structure at urban scale, is important in creating a sense of quality for the citizens of a city. Technology, besides economy, has played a major role in improving the design process and increasing clients' awareness of the character of their required design product. Architects, along with other design professionals, have benefited from improvements in aesthetics and technology in the building industry. Accordingly, people's expectations about the quality of habitable space have risen. However, the question is whether the quality of the architectural design product has increased at the same speed as technology and clients' expectations. Is it behind or ahead of technological and economic improvements? This study will work on developing a model of planning for New York City, from the past to the present to the future. Thoughtful thinking at the design stage, regardless of where or when it is for, may have positive or negative consequences; however, considering design objectives based on human needs may help in developing a successful design plan. Technology, economy, culture, and people's support may be other parameters in designing a good product. 'Design Forms Urban Space' will be carried out in an analytical, qualitative, and quantitative framework, studying cases from all over the world and their achievements compared to New York City's development. Technology, organic design, materiality, urban forms, city politics, and sustainability will be discussed across cases at international scale. From design professionals' interest in doing high-quality work for a particular answer to the importance of being a follower, from the 'Zero-Carbon City' in the Persian Gulf to the 'Polluted City' in China, and from 'urban scale furniture' in cities to the 'seasonal installations' of a megacity, cases will be studied with references and a detailed analysis of each, in order to propose the most resourceful, practical, and realistic solutions to questions on 'a good design in a city', 'new city planning and social activities', and 'new strategic architecture for better cities'.

Keywords: design quality, urban scale, active city, city installations, architecture for better cities

Procedia PDF Downloads 327
537 Investigation of the Technological Demonstrator 14x B in Different Angle of Attack in Hypersonic Velocity

Authors: Victor Alves Barros Galvão, Israel Da Silveira Rego, Antonio Carlos Oliveira, Paulo Gilberto De Paula Toro

Abstract:

The Brazilian hypersonic aerospace vehicle 14-X B (VHA 14-X B) is a vehicle integrated with a hypersonic airbreathing propulsion system based on supersonic combustion (scramjet), developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics, to conduct a demonstration in atmospheric flight at the speed corresponding to Mach number 7 at an altitude of 30 km. The experimental procedure used the hypersonic shock tunnel T3 installed in that laboratory. This device simulates the flow over a model fixed in the test section and can also simulate different atmospheric conditions. Scramjet technology offers substantial advantages to improve the performance of aerospace vehicles flying at hypersonic speed through the Earth's atmosphere, by reducing fuel consumption on board. Basically, the scramjet is a fully integrated airbreathing engine that uses the oblique/conical shock waves generated during hypersonic flight to promote the deceleration and compression of atmospheric air at the scramjet inlet. During hypersonic flight, the VHA 14-X will be subject to atmospheric influences that change the vehicle's angle of attack (the angle that the mean line of the vehicle makes with the direction of the flow). Based on this, a study is conducted to analyze the influence of changes in the vehicle's angle of attack during atmospheric flight. Analytical theoretical analysis, computational fluid dynamics simulation, and experimental investigation are the methodologies used to design a technological demonstrator prior to flight in the atmosphere. This paper considers the analysis of the thermodynamic properties (pressure, temperature, density, sound velocity) on the lower surface of the VHA 14-X B. It treats air as an ideal gas in chemical equilibrium, with and without a boundary layer, considering changes in the vehicle's angle of attack (positive and negative with respect to the flow) and two-dimensional expansion wave theory at the expansion section (Prandtl-Meyer theory).
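For the expansion-wave step mentioned above, the Prandtl-Meyer function used in such analyses is ν(M) = sqrt((γ+1)/(γ−1))·arctan(sqrt((γ−1)/(γ+1)·(M²−1))) − arctan(sqrt(M²−1)). A small sketch evaluating the turning angle between two Mach numbers is given below; the Mach numbers are illustrative values, not the VHA 14-X B design data.

```python
import math

def prandtl_meyer(mach, gamma=1.4):
    """Prandtl-Meyer function nu(M) in degrees for a calorically perfect gas."""
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    b = math.sqrt((gamma - 1.0) / (gamma + 1.0) * (mach**2 - 1.0))
    nu = a * math.atan(b) - math.atan(math.sqrt(mach**2 - 1.0))
    return math.degrees(nu)

# Illustrative expansion from M1 to M2: flow turning angle = nu(M2) - nu(M1).
m1, m2 = 3.0, 4.0
theta = prandtl_meyer(m2) - prandtl_meyer(m1)
print(f"nu(M1) = {prandtl_meyer(m1):.2f} deg, nu(M2) = {prandtl_meyer(m2):.2f} deg, "
      f"turning angle = {theta:.2f} deg")
```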

Keywords: angle of attack, experimental hypersonic, hypersonic airbreathing propulsion, Scramjet

Procedia PDF Downloads 387
536 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester

Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa

Abstract:

Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at the high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability, and specific power loss of conventional grain-oriented (CGO) and high permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips, comprising 20 CGO and 20 HGO, 305 mm x 30 mm x 0.27 mm, from one supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm, respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer on which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding covering the entire length of the plastic former was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card. The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of the magnetic field strength and flux density, respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, in order to obtain repeatable and comparable measurements. The low-noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz, and 92 kHz bandwidth, was chosen to minimize the influence of thermal noise on the measurements. In order to reduce environmental noise, the yokes, sample, and search coil carrier were placed in a noise-shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is attributed to the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is better than CGO for both low and high magnetic field applications.
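The conversion from the acquired voltages to H and B described above follows the standard single sheet tester relations H = N₁·i/l_m and B = (1/(N₂·A))·∫v₂ dt. A short sketch of that post-processing is given below; the waveforms are hypothetical, and the magnetic path length is assumed equal to the quoted yoke length, which is a simplification rather than the authors' stated choice.

```python
import numpy as np

fs = 204_800          # sampling rate of the DAQ card (Hz)
f = 50                # magnetising frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)

# Hypothetical acquired signals: shunt-resistor voltage and secondary voltage.
v_shunt = 0.05 * np.sin(2 * np.pi * f * t)   # V across the 4.7 ohm shunt
v_sec = 2.0 * np.cos(2 * np.pi * f * t)      # induced secondary voltage (V)

R_shunt = 4.7          # ohm
N1, N2 = 100, 500      # primary and secondary turns
l_m = 0.290            # magnetic path length (m), assumed equal to the yoke length
A = 30e-3 * 0.27e-3    # strip cross-section (m^2): 30 mm wide, 0.27 mm thick

# Field strength from the magnetising current, flux density from the integrated EMF.
i_mag = v_shunt / R_shunt
H = N1 * i_mag / l_m                          # A/m
B = -np.cumsum(v_sec) / fs / (N2 * A)         # T (numerical integration of v2)
B -= B.mean()                                 # remove integration offset

print(f"peak H = {H.max():.1f} A/m, peak B = {B.max():.3f} T")
```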

Keywords: flux density, electrical steel, LabVIEW, magnetization

Procedia PDF Downloads 276
535 Impact of Environmental Rule of Law towards Positive Environmental Outcomes in Nigeria

Authors: Kate N. Okeke

Abstract:

The ever-growing needs of man requiring satisfaction have pushed him strongly towards industrialization, which has left, and is still leaving, environmental degradation and its attendant negative impacts in its wake. It is, therefore, not surprising that the enjoyment of fundamental rights like food supply, security of lives and property, freedom of worship, health, and education has been drastically affected by such degradation. In recognition of the imperative need to protect the environment and human rights, many global instruments and constitutions have recognized the right to a healthy and sustainable environment. Some environmental advocates and quite a number of works on the subject call for the recognition of environmental rights via the rule of law as a vital means of achieving positive outcomes. However, although there are numerous countries with constitutional environmental provisions, most of them, such as Nigeria, have shown poor environmental performance. A notable problem is the fact that the constitution which recognizes environmental rights appears, in its other provisions, to contradict itself by making the enforceability of those environmental rights unattainable. While adopting a descriptive, analytical, comparative, and explanatory study design in reviewing a successful positive environmental outcome via the rule of law, this article argues that, on balance, the rule of law weighs more than mere environmental rights recognition and should therefore receive more attention from environmental lawyers and advocates. This is because, with the rule of law, members of a society are sure of getting the most out of the environmental rights existing in their legal system. Members of the Niger-Delta communities of Nigeria will benefit from the environmental rights existing in Nigeria. They are exposed to environmental degradation and pollution, with effects such as acidic rainfall and pollution of farmlands and clean water sources; these and many more are consequences of oil and gas exploration. It will also pave the way for resolving the violence between cattle herdsmen and farmers in the Middle Belt and other regions of Nigeria, whose clashes are over natural resource control. Having seen that the environmental rule of law is vital to sustainable development, this paper aims to contribute to discussions on how best the vehicle of the rule of law can be driven towards achieving positive environmental outcomes, in reliance on other enforceable provisions in the Nigerian Constitution. Other domesticated international instruments will also be considered in attaining a sustainable environment and development.

Keywords: environment, rule of law, constitution, sustainability

Procedia PDF Downloads 133
534 Predicting and Optimizing the Mechanical Behavior of a Flax Reinforced Composite

Authors: Georgios Koronis, Arlindo Silva

Abstract:

This study seeks to understand the mechanical behavior of a natural fiber reinforced composite (epoxy/flax) in more depth, utilizing both experimental and numerical methods. It attempts to identify relationships between the design parameters and the product performance, understand the effect of noise factors, and reduce process variations. Optimization of the mechanical performance of manufactured goods has recently been addressed by numerous studies of green composites. However, these studies are limited and have mainly explored mass production processes. Here, the aim is to discover knowledge about the composite's manufacture that can be used to design artifacts that are produced in low batches and tailored to niche markets. The goal is to reach greater consistency in the performance and to further understand which factors play significant roles in obtaining the best mechanical performance. A prediction of the process response function (under various operating conditions) is modeled by the DoE. Normally, a full factorial designed experiment is required, consisting of all possible combinations of levels for all factors; however, an analytical assessment is possible with just a fraction of the full factorial experiment. The outline of the research approach comprises evaluating the influence that these variables have and how they affect the composite's mechanical behavior. The coupons will be fabricated by the vacuum infusion process, defined by three process parameters: flow rate, injection point position, and fiber treatment. Each process parameter is studied at two levels, along with their interactions. Moreover, the tensile and flexural properties will be obtained through mechanical testing to discover the key process parameters. In this setting, an experimental phase will follow in which a number of fabricated coupons will be tested to allow validation of the design of experiments setup. Finally, the results are validated by performing a final set of experiments at the optimum settings indicated by the DoE. It is expected that, after good agreement between the predicted and the verification experimental values, the optimal processing parameters of the biocomposite lamina will be effectively determined.
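A minimal sketch of generating the 2-level full factorial design for the three process parameters named above (flow rate, injection point position, fibre treatment) is shown below; the level labels are placeholders, not the study's actual settings, and the response column would come from the tensile and flexural tests.

```python
from itertools import product

# Three process parameters at two levels each -> 2^3 = 8 runs (full factorial).
levels = {
    "flow_rate": ["low", "high"],                 # placeholder levels
    "injection_point": ["centre", "edge"],        # placeholder levels
    "fibre_treatment": ["untreated", "treated"],  # placeholder levels
}

runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for i, run in enumerate(runs, 1):
    print(f"run {i}: {run}")

# A half-fraction (2^(3-1) = 4 runs) can be built with the defining relation
# C = A*B, coding levels as -1/+1; this is one way to use only a 'fraction of
# the full factorial experiment' as mentioned in the abstract.
coded = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]
print("half-fraction (coded):", coded)
```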

Keywords: design of experiments, flax fabrics, mechanical performance, natural fiber reinforced composites

Procedia PDF Downloads 188
533 Facies Sedimentology and Astronomic Calibration of the Reineche Member (Lutetian)

Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich

Abstract:

The Upper Lutetian alternating marl–limestone succession of the Reineche Member was deposited over a warm shallow carbonate platform that permitted Nummulites proliferation. High-resolution studies of the 30-meter-thick Nummulites-bearing Reineche Member, cropping out in Central Tunisia (Jebel Siouf), have been undertaken on its pronounced cyclical sedimentary sequences in order to investigate the periodicity of the cycles and the related orbital-scale oceanic and climatic changes. Palaeoenvironmental and palaeoclimatic signals are preserved in several proxies obtained through high-resolution sampling and laboratory measurement and analysis, such as magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time-series analysis of these proxies establishes the orders of cyclicity present in the studied interval, which can be linked to orbital cycles. MS records provide high-resolution proxies for relative sea-level change in the Late Lutetian strata. Spectral analysis of the MS fluctuations confirmed orbital forcing through the presence of the complete suite of orbital frequencies: the 23 ka precession, the 41 ka obliquity and, notably, the two eccentricity modes at 100 and 405 ka. Given the two periodic sedimentary cycles detected by wavelet analysis of the proxy fluctuations, which coincide with the long-term 405 ka eccentricity cycle, the Reineche Member spans about 0.8 Myr. Wireline logging tools such as gamma ray and sonic were used as proxies to decipher cyclicity and trends in sedimentation and to help identify and correlate units. They constrain the highest-frequency cyclicity, which is modulated by a longer-wavelength cyclicity apparently controlled by clay content. Interpreted as the result of variations in carbonate productivity, the marl–limestone couplets are suggested to represent the sedimentary response to orbital forcing. The calculation of cycle durations through the Reineche Member serves as a geochronometer and permits astronomical calibration of the geologic time scale. Furthermore, MS coupled with carbonate content and fossil occurrences provides strong evidence for combined detrital-input and marine surface carbonate-productivity cycles. These two synchronous processes were driven by the precession index and 'fingerprinted' in the basic marl–limestone couplets, modulated by orbital eccentricity.
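As a sketch of the kind of spectral analysis described above, the snippet below computes an FFT periodogram of a magnetic susceptibility series sampled along depth and converts the dominant spatial frequencies into temporal periods using an assumed sedimentation rate, so they can be compared with the 23, 41, 100 and 405 ka orbital bands. The series, sample spacing and sedimentation rate are placeholders, not values from the study.

```python
# Minimal sketch: FFT periodogram of a magnetic susceptibility (MS) depth series.
import numpy as np

dx = 0.1                      # assumed sample spacing along the section (m)
sed_rate = 0.0375             # assumed sedimentation rate (m/ka), placeholder
depth = np.arange(0.0, 30.0, dx)

# Synthetic MS series: precession-, obliquity- and eccentricity-like cycles plus noise.
periods_ka = np.array([23.0, 41.0, 100.0, 405.0])
periods_m = periods_ka * sed_rate          # convert temporal periods to bed thickness
ms = sum(np.cos(2 * np.pi * depth / p) for p in periods_m)
ms += 0.5 * np.random.default_rng(0).normal(size=depth.size)

# Periodogram: spectral power vs spatial frequency (cycles per metre).
ms_detrended = ms - ms.mean()
power = np.abs(np.fft.rfft(ms_detrended)) ** 2
freq = np.fft.rfftfreq(depth.size, d=dx)

# Report the strongest peaks as equivalent periods in ka.
peaks = np.argsort(power[1:])[::-1][:4] + 1    # skip the zero-frequency bin
for k in peaks:
    thickness_period = 1.0 / freq[k]           # metres per cycle
    print(f"period ~ {thickness_period / sed_rate:6.0f} ka "
          f"(thickness {thickness_period:.2f} m)")
```

With the assumed rate of 0.0375 m/ka, the 30 m section corresponds to roughly 0.8 Myr, consistent with the duration quoted in the abstract; in practice, the sedimentation rate is what the astronomical calibration itself constrains.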

Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian

Procedia PDF Downloads 280
532 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons, respectively, during which high-temporal-resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From the estimates, it is inferred that internal tides of semi-diurnal frequency are dominant in both observations and model simulations for November-December and March-April. In August, however, the spectral estimate is found to be maximum at the near-inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitude are well captured by the model. Empirical Orthogonal Function (EOF) analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semi-diurnal internal tides, as they represent 90-95% of the total variance in all seasons. The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared with the other two seasons. The model simulations suggest that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The model-simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence the local mixing due to the internal tide is maximum at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results are analysed.
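As a sketch of the EOF (Empirical Orthogonal Function) decomposition mentioned above, the snippet below decomposes a depth-time matrix of baroclinic velocity into vertical modes with a singular value decomposition and reports the fraction of variance captured by each mode. The velocity field is synthetic and the grid dimensions are assumptions for illustration, not model output from the study.

```python
# Minimal sketch: EOF (vertical-mode) decomposition of baroclinic velocity via SVD.
import numpy as np

rng = np.random.default_rng(1)
n_depth, n_time = 40, 600                      # assumed grid: 40 depth levels, 600 time steps
z = np.linspace(0.0, 1.0, n_depth)             # normalised depth
t = np.arange(n_time)

# Synthetic baroclinic velocity: mode-1 and mode-2 vertical structures
# oscillating at the M2 semi-diurnal period (placeholder field, hourly steps).
m2_period = 12.42                              # hours
u = (np.outer(np.cos(np.pi * z), 0.30 * np.cos(2 * np.pi * t / m2_period))
     + np.outer(np.cos(2 * np.pi * z), 0.10 * np.sin(2 * np.pi * t / m2_period))
     + 0.02 * rng.normal(size=(n_depth, n_time)))

# Remove the time mean at each depth, then decompose into EOFs and principal components.
u_anom = u - u.mean(axis=1, keepdims=True)
eofs, sing_vals, pcs = np.linalg.svd(u_anom, full_matrices=False)

variance_fraction = sing_vals**2 / np.sum(sing_vals**2)
for mode in range(3):
    print(f"EOF mode {mode + 1}: {100 * variance_fraction[mode]:5.1f}% of variance")
```

In this kind of decomposition, the leading EOF plays the role of the Mode-1 semi-diurnal structure discussed in the abstract, and the cumulative variance of the first few modes indicates how many vertical modes are needed to describe the internal tide field.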

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

Procedia PDF Downloads 150
531 The Desire for Significance & Memorability in Popular Culture: A Cognitive Psychological Study of Contemporary Literature, Art, and Media

Authors: Israel B. Bitton

Abstract:

“Memory” is associated with various phenomena, from physical to mental, personal to collective and historical to cultural. As part of a broader exploration of memory studies in philosophy and science (slated for academic publication October 2021), this specific study employs analytical methods of cognitive psychology and philosophy of memory to theorize that A) the primary human will (drive) is to significance, in that every human action and expression can be rooted in a most primal desire to be cosmically significant (however that is individually perceived); and B) the will to significance manifests as the will to memorability, an innate desire to be remembered by others after death. In support of these broad claims, a review of various popular culture “touchpoints”—historic and contemporary records spanning literature, film and television, traditional news media, and social media—is presented to demonstrate how this theory is repeatedly and commonly expressed, and has long been, by many popular public figures as well as “everyday people.” Though the theory was developed before COVID, the crisis has only increased its relevance: so many people were forced to die alone, leaving them and their loved ones to face even greater existential angst than what ordinarily accompanies death, since the usual expectations for one’s “final moments” were shattered. To underscore this issue of, and response to, what can be considered a sociocultural “memory gap,” this study concludes with a summary of several projects launched by journalists at the height of the pandemic to document the memorable human stories behind COVID’s tragic warp-speed death toll. Analyzed through the lens of Viktor E. Frankl’s psychoanalytical perspective on “existential meaning,” these projects show how countless individuals were robbed of the last wills and testaments to their self-significance and memorability typically afforded to the dying and the aggrieved. The resulting insight ought to inform how government and public health officials determine what is truly “non-essential” to human health, physical and mental, in times of crisis.

Keywords: cognitive psychology, covid, neuroscience, philosophy of memory

Procedia PDF Downloads 167
530 Rheolaser: Light Scattering Characterization of Viscoelastic Properties of Hair Cosmetics That Are Related to Performance and Stability of the Respective Colloidal Soft Materials

Authors: Heitor Oliveira, Gabriele De-Waal, Juergen Schmenger, Lynsey Godfrey, Tibor Kovacs

Abstract:

Rheolaser MASTER™ makes use of the multiple scattering of light, caused by scattering objects in a continuous medium (such as droplets and particles in colloids), to characterize the viscoelasticity of soft materials. It offers an alternative to conventional rheometers for characterizing the viscoelasticity of products such as hair cosmetics. Up to six measurements at controlled temperature can be carried out simultaneously (10-15 min), and the method requires only minor sample preparation. In contrast to conventional rheometer-based methods, no mechanical stress is applied to the material during the measurements; the properties of the exact same sample can therefore be monitored over time, as in aging and stability studies. We determined the elastic index (EI) of water/emulsion mixtures (1 ≤ fat alcohols (FA) ≤ 5 wt%) and emulsion/gel-network mixtures (8 ≤ FA ≤ 17 wt%) and compared it with the elastic/storage modulus (G′) of the respective samples, obtained using a conventional TA rheometer with flat-plate geometry. As expected, log(EI) vs log(G′) shows a linear behavior. Moreover, log(EI) increased linearly with solids level over the entire range of compositions (1 ≤ FA ≤ 17 wt%), while the rheometer measurements were limited to samples down to a 4 wt% solids level; a concentric cylinder geometry would be required for more dilute samples (FA < 4 wt%), and rheometer results from different sample-holder geometries are not comparable. Plots of the Rheolaser output parameters solid-liquid balance (SLB) vs EI were suitable for monitoring product aging processes. These data could quantitatively describe observations such as the formation of lumps over aging time. Moreover, this method made it possible to establish that different specifications of a key raw material (RM < 0.4 wt%) in the respective gel-network (GN) product have only a minor impact on product viscoelastic properties, which is not consumer-perceivable after a short aging time. Broadening an RM spec range typically has a positive impact on cost savings. Last but not least, the photon path length (λ*), which is proportional to droplet size and inversely proportional to the volume fraction of scattering objects according to Mie theory, and the EI were suitable for characterizing product destabilization processes (e.g., coalescence and creaming) and for predicting product stability about eight times faster than our standard methods. Using these parameters, we successfully identified formulation and process parameters that resulted in unstable products. In conclusion, Rheolaser allows quick and reliable characterization of the viscoelastic properties of hair cosmetics that are related to their performance and stability. It operates over a broad range of product compositions and has applications spanning from the formulation of our hair cosmetics to fast release criteria in our production sites. This powerful tool has a positive impact on R&D development time, enabling faster delivery of new products to the market, and consequently on cost savings.
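As a sketch of the reported relationship between the Rheolaser elastic index and the rheometer storage modulus, the snippet below fits a straight line to log(EI) versus log(G′) for a small set of hypothetical sample pairs; the slope and intercept of such a fit would let EI readings be mapped onto G′ values. The numerical values are placeholders, not measurements from the study.

```python
# Minimal sketch: linear fit of log(EI) vs log(G') for paired samples (placeholder data).
import numpy as np

G_prime = np.array([5.0, 20.0, 80.0, 300.0, 1200.0])     # storage modulus, Pa (assumed)
EI = np.array([2.0e-4, 9.0e-4, 3.5e-3, 1.4e-2, 6.0e-2])  # elastic index (assumed units)

log_G, log_EI = np.log10(G_prime), np.log10(EI)
slope, intercept = np.polyfit(log_G, log_EI, deg=1)

# Coefficient of determination for the linear relationship.
pred = slope * log_G + intercept
r2 = 1.0 - np.sum((log_EI - pred) ** 2) / np.sum((log_EI - log_EI.mean()) ** 2)
print(f"log(EI) = {slope:.2f} * log(G') + {intercept:.2f}, R^2 = {r2:.3f}")

# Example: estimate G' for a new sample from its measured EI.
EI_new = 5.0e-3
G_est = 10 ** ((np.log10(EI_new) - intercept) / slope)
print(f"EI = {EI_new:.1e}  ->  estimated G' ~ {G_est:.0f} Pa")
```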

Keywords: colloids, hair cosmetics, light scattering, performance and stability, soft materials, viscoelastic properties

Procedia PDF Downloads 153
529 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights

Authors: Olga Kokoulina

Abstract:

Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise of increased economic efficiency and of solutions to pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. One of the often-cited avenues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, the right is derived from the provisions of the General Data Protection Regulation ('GDPR') ensuring the right of data subjects to access their data and mandating the obligation of data controllers to provide relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the GDPR's specific provision on automated decision-making, the debates mainly focus on the efficacy and the exact scope of the 'right to explanation'. The underlying logic of the argued remedy lies in a transparency imperative: allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of the technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims to push the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed on the transparency requirement and the right to access by intellectual property law, namely by copyright and trade secrets. It is asserted that trade secrets, in particular, present an often-insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions to such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets and the introduction of a strict liability regime in cases of non-transparent decision-making.

Keywords: algorithms, public interest, trade secrets, transparency

Procedia PDF Downloads 109
528 High Heating Value Bio-Chars from a Bio-Oil Upgrading Process

Authors: Julius K. Gane, Mohamad N. Nahil, Paul T. Williams

Abstract:

In today’s world of rapid population growth and a changing climate, one way to mitigate various negative effects is via renewable energy solutions. Energy and power, basic requirements in almost all human endeavours, are also at the root of the changing climate and its impacts. It is thus crucial to develop innovative and environmentally friendly energy options to ameliorate these negative repercussions. Upgrading of fast pyrolysis bio-oil via hydrotreatment offers such an opportunity, as quality renewable liquid transportation fuels can be produced. The process, however, is typically accompanied by bio-char formation as a by-product. The goal of this work was to study the yield and some properties of bio-chars formed in a hydrotreatment process, with the overall aim of promoting the valuable utilization of wastes and by-products from renewable energy technologies. It is assumed that bio-chars with energy contents comparable to those of coals will be more desirable as solid energy materials owing to their renewability and environmental friendliness. Therefore, the analytical work in this study focused mainly on determining the higher heating value (HHV) of the chars. The method involved the reaction of bio-oil in an autoclave supplied by the Parr Instrument Company, IL, USA. Two main parameters (different temperatures and residence times) were investigated. The chars were characterized using a Thermo EA2000 CHNS analyser, and the oxygen contents and HHVs were then computed based on the literature. From the results, these bio-chars can readily serve as feedstocks for the production of renewable solid fuels. Their HHVs ranged between 29.26 and 39.18 MJ/kg, affected by the different temperatures and residence times. There was an inverse relationship between the oxygen content and the HHV of the chars. It can therefore be concluded that it is possible to optimize the efficiency of the hydrotreatment process used through the production of renewable energy materials from the 'waste' char by-products. Future work should consider developing a suitable balance between the primary objective of bio-oil upgrading processes (which is to improve the quality of the liquid fuels) and the conversion of the solid wastes into value-added products such as smokeless briquettes.
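As an illustration of how an HHV can be estimated from CHNS data of the kind reported above, the sketch below applies the widely used Channiwala-Parikh correlation, computing oxygen by difference. The elemental composition shown is a placeholder, not one of the chars analysed in the study, and this correlation is only one of several published options for literature-based HHV estimation.

```python
# Minimal sketch: estimate higher heating value (HHV) from ultimate analysis
# using the Channiwala-Parikh correlation (placeholder composition, dry basis, wt%).
def hhv_channiwala_parikh(C, H, N, S, ash):
    O = 100.0 - (C + H + N + S + ash)      # oxygen estimated by difference
    hhv = (0.3491 * C + 1.1783 * H + 0.1005 * S
           - 0.1034 * O - 0.0151 * N - 0.0211 * ash)  # MJ/kg
    return hhv, O

# Hypothetical bio-char composition (wt%, dry basis) -- illustrative only.
hhv, oxygen = hhv_channiwala_parikh(C=75.0, H=5.0, N=1.0, S=0.1, ash=4.0)
print(f"Estimated O content: {oxygen:.1f} wt%")
print(f"Estimated HHV:       {hhv:.2f} MJ/kg")
```

The correlation's negative oxygen term also reflects the inverse relationship between oxygen content and HHV noted in the abstract: as hydrotreatment removes oxygen from the char, the estimated heating value rises.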

Keywords: bio-char, renewable solid biofuels, valorisation, waste-to-energy

Procedia PDF Downloads 109
527 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator

Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov

Abstract:

The paper is devoted to one of the engine types with external heating, the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted into acoustic energy. This acoustic energy of the oscillating gas flow must then be converted into mechanical energy, which in turn must be converted into electric energy. The most widely used ways of transforming acoustic energy into electric energy are the linear generator and the conventional generator with a crank mechanism. In both cases, a piston is used. The main disadvantages of using a piston are friction losses, lubrication problems and working-fluid pollution, which reduce the engine power and ecological efficiency. The use of a bi-directional impulse turbine as the energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bi-directional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them. A radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type was chosen: an axial impulse turbine, which has a simpler design than a radial turbine and a similar efficiency. The peculiarities of the method for calculating an impulse turbine are discussed. They include the changes in gas pressure and velocity as functions of time during the generation of shock waves of the oscillating gas flow in a thermoacoustic system. In a thermoacoustic system, the pressure changes continuously according to a certain law due to the generation of acoustic waves; the peak pressure values define the amplitude, which determines the acoustic power. The gas flowing in a thermoacoustic system periodically changes direction, so its mean velocity is zero, but its peak values can be used to drive bi-directional turbine rotation. In contrast with a feed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12000 r/min and a pressure amplitude of 1.7 kPa. Next, 3-D modeling and a numerical study of the impulse turbine were carried out. As a result of the numerical modeling, the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer. An experimental unit was designed to verify the numerical modeling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bi-directional impulse turbine is advisable. By its characteristics as a converter, it is comparable with linear electric generators, but its service life will be longer and the engine itself will be smaller thanks to the rotary motion of the turbine.
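As a rough illustration of how an acoustic pressure amplitude translates into available acoustic power, the sketch below evaluates the standard travelling-wave expression W = p²A / (2ρc) for an assumed duct cross-section and gas properties. The duct diameter, density and sound speed are placeholders chosen only to show the order of magnitude, not the design values of the turbine described above.

```python
# Minimal sketch: acoustic power of a plane travelling wave, W = p^2 * A / (2 * rho * c).
import math

p_amp = 1.7e3          # pressure amplitude, Pa (value quoted in the abstract)
duct_diameter = 0.10   # assumed duct diameter, m (placeholder)
rho = 1.2              # assumed gas density, kg/m^3 (air at ambient conditions)
c = 343.0              # assumed speed of sound, m/s

area = math.pi * duct_diameter**2 / 4.0
power = p_amp**2 * area / (2.0 * rho * c)
print(f"Duct area: {area * 1e4:.1f} cm^2")
print(f"Acoustic power of the travelling wave: {power:.0f} W")
```

In a real thermoacoustic system the wave is only partly travelling and the working gas is often pressurized helium or argon, so the impedance ρc and hence the available power differ substantially from this ambient-air estimate.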

Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator

Procedia PDF Downloads 354
526 Impact of Import Restriction on Rice Production in Nigeria

Authors: C. O. Igberi, M. U. Amadi

Abstract:

This research paper on the impact of import restriction on rice production in Nigeria aims at proffering valid solutions to the age-long problem of rice self-sufficiency through a better understanding of policy measures used in the past, in this case the effectiveness of the rice import restriction of the early 1990s. It tries to answer two questions: whether import restriction boosts domestic rice production, and what the macroeconomic determinants of the Gross Domestic Rice Product (GDRP) are. The research problem is investigated through literature and analytical frameworks: time series data on the GDRP, Gross Fixed Capital Formation (GFCF), average foreign rice producers' prices (PPF), domestic producers' prices (PPN) and the labour force (LABF) are collated for analysis, together with an import restriction dummy variable (POL1). The research objectives/hypotheses are analysed using cointegration, Vector Error Correction Model (VECM), Impulse Response Function (IRF) and Granger Causality Test (GCT) methodologies. Results show that, in the short-run error correction specification for GDRP, a one percent (1%) deviation away from the long-run equilibrium in a given quarter is corrected by only 0.14% in the subsequent quarter. Also, the rice import restriction policy had no significant effect on the GDRP over this period. Other findings show that the policy period did, in fact, have effects on the PPN and LABF. The chosen variables are valid macroeconomic factors that explain the GDRP of Nigeria, as adduced from the IRF and GCT, and in the long run. Policy recommendations suggest that import restriction is not disqualified as a veritable tool for improving domestic rice production; rather, better enforcement procedures and strict adherence to the policy dictates are needed. Furthermore, accompanying policies which drive public and private capital investment and accumulation must be introduced. Also, the employment rate and labour substitution in the agricultural sector should not be changed drastically; rather, the sector's welfare and efficiency should be improved.
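A minimal sketch of the kind of VECM and Granger-causality workflow named above, using statsmodels on synthetic quarterly series standing in for GDRP, GFCF, PPN and LABF. The data-generating process, lag order and cointegration rank are assumptions for illustration only, not the study's actual specification or estimates.

```python
# Minimal sketch: VECM estimation and a Granger causality test on synthetic
# quarterly series standing in for GDRP, GFCF, PPN and LABF (illustrative only).
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 120  # 30 years of quarterly observations (assumed)

# A common stochastic trend so the levels are cointegrated by construction.
trend = np.cumsum(rng.normal(0.0, 1.0, n))
data = pd.DataFrame({
    "GDRP": 1.0 * trend + rng.normal(0.0, 0.5, n),
    "GFCF": 0.8 * trend + rng.normal(0.0, 0.5, n),
    "PPN":  0.5 * trend + rng.normal(0.0, 0.5, n),
    "LABF": 0.3 * trend + rng.normal(0.0, 0.5, n),
})

# VECM with one lagged difference and cointegration rank 1 (assumed settings).
vecm_res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("Error-correction (adjustment) coefficients alpha:")
print(vecm_res.alpha)

# Does GFCF Granger-cause GDRP? Test on first differences, up to 4 lags.
diffed = data[["GDRP", "GFCF"]].diff().dropna().to_numpy()
gc_res = grangercausalitytests(diffed, maxlag=4)
print("p-value (ssr F-test, lag 1):", gc_res[1][0]["ssr_ftest"][1])
```

In this setup, the first column of alpha plays the role of the short-run adjustment coefficient reported in the abstract (the speed at which GDRP corrects deviations from the long-run equilibrium), while the Granger tests probe the direction of influence among the macroeconomic factors.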

Keywords: import restriction, gross domestic rice production, cointegration, VECM, Granger causality, impulse response function

Procedia PDF Downloads 183