Search results for: channel estimation
462 Dynamic Externalities and Regional Productivity Growth: Evidence from Manufacturing Industries of India and China
Authors: Veerpal Kaur
Abstract:
The present paper investigates the role of dynamic externalities of agglomeration in the regional productivity growth of the manufacturing sector in India and China. Taking 2-digit level manufacturing sector data for the states and provinces of India and China respectively for the period 1998-99 to 2011-12, this paper examines the effect of dynamic externalities – namely Marshall-Arrow-Romer (MAR) specialization externalities, Jacobs's diversity externalities, and Porter's competition externalities – on regional total factor productivity growth (TFPG) of the manufacturing sector in both economies. Regressions have been carried out on pooled data for all 2-digit manufacturing industries for India and China separately. The panel estimation is based on a fixed-effect-by-sector model. The results of the econometric exercise show that labour-intensive industries in Indian regional manufacturing benefit from diversity externalities, while capital-intensive industries gain more from specialization in terms of TFPG. In China, diversity externalities and competition externalities hold better prospects for regional TFPG in both labour-intensive and capital-intensive industries. But if we look at the results for coastal and non-coastal regions separately, specialization tends to assert a positive effect on TFPG in coastal regions whereas it has a negative effect on TFPG of non-coastal regions. Competition externalities exert a negative effect on TFPG of non-coastal regions whereas they have a positive effect on TFPG of coastal regions. Diversity externalities made a positive contribution to TFPG in both coastal and non-coastal regions. So the results of the study postulate that the importance of dynamic externalities should not be examined by pooling all industries and all regions together. This could hold differential implications for region-specific and industry-specific policy formulation. Other important variables explaining regional-level TFPG in both India and China have been the availability of infrastructure, level of competitiveness, foreign direct investment, exports and geographical location of the region (especially in China).
Keywords: China, dynamic externalities, India, manufacturing, productivity
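The fixed-effect-by-sector panel estimation described above can be illustrated with a short within-estimator sketch in Python. All variable names and data below are hypothetical stand-ins, not the paper's actual series:

```python
import numpy as np
import pandas as pd

# Hypothetical long-format panel: 10 sectors x 14 years.
df = pd.DataFrame({
    "industry": np.repeat(np.arange(10), 14),
    "tfpg": np.random.randn(140),     # total factor productivity growth
    "mar": np.random.randn(140),      # MAR specialization index
    "jacobs": np.random.randn(140),   # Jacobs diversity index
    "porter": np.random.randn(140),   # Porter competition index
})

x_cols = ["mar", "jacobs", "porter"]
# Within transformation: demean every variable by sector (fixed effect by sector).
demeaned = df.groupby("industry")[["tfpg"] + x_cols].transform(lambda s: s - s.mean())
beta, *_ = np.linalg.lstsq(demeaned[x_cols].to_numpy(),
                           demeaned["tfpg"].to_numpy(), rcond=None)
print(dict(zip(x_cols, beta)))  # fixed-effect slope estimates
```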
461 Predicting the Turbulence Intensity, Excess Energy Available and Potential Power Generated by Building Mounted Wind Turbines over Four Major UK Cities
Authors: Emejeamara Francis
Abstract:
The future of potential wind energy applications within suburban/urban areas currently faces various problems. These include insufficient assessment of the urban wind resource and the effectiveness of commercial gust control solutions, as well as the unavailability of effective and affordable tools for scoping the potential of urban wind applications within built-up environments. In order to achieve an effective assessment of the potential of urban wind installations, an estimation of the total energy that would be available to them were effective control systems to be used, and an evaluation of the potential power to be generated by the wind system, are required. This paper presents a methodology for predicting the power generated by a wind system operating within an urban wind resource. This method was developed by using high temporal resolution wind measurements from eight potential sites within the urban and suburban environment as inputs to a vertical axis wind turbine multiple stream tube model. A relationship between the unsteady performance coefficient obtained from the stream tube model results and turbulence intensity was demonstrated. Hence, an analytical methodology for estimating the unsteady power coefficient at a potential turbine site is proposed. This is combined with analytical models that were developed to predict the wind speed and the excess energy content (EEC) available in estimating the potential power generated by wind systems at different heights within a built environment. Estimates of turbulence intensities, wind speed, EEC and turbine performance based on the current methodology allow a more complete assessment of the available wind resource and potential urban wind projects. This methodology is applied to four major UK cities, namely Leeds, Manchester, London and Edinburgh, and the potential to map the turbine performance at different heights within a typical urban city is demonstrated.
Keywords: small-scale wind, turbine power, urban wind energy, turbulence intensity, excess energy content
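The two quantities at the core of this methodology, turbulence intensity and excess energy content, can be sketched from a high-frequency wind record. The sketch below assumes the standard wind-power relation P = ½ρACpU³ and treats EEC as the extra power carried by gusts (mean of U³) relative to the cube of the mean speed; the swept area, Cp value and synthetic record are illustrative only:

```python
import numpy as np

RHO = 1.225   # air density, kg/m^3

def wind_stats(u, area=3.0, cp=0.35):
    """Turbulence intensity, fractional excess energy content, and mean power
    from a high-temporal-resolution wind speed record u (m/s)."""
    ti = np.std(u) / np.mean(u)                   # turbulence intensity
    p_mean = 0.5 * RHO * area * cp * np.mean(u) ** 3
    p_gust = 0.5 * RHO * area * cp * np.mean(u ** 3)
    eec = (p_gust - p_mean) / p_mean              # excess energy content
    return ti, eec, p_gust

u = np.abs(np.random.normal(5.0, 1.2, 10_000))    # synthetic 1 Hz wind record
print(wind_stats(u))
```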
460 Prevalence of Elder Abuse and Effects of Social Factors on It
Authors: Ezat Vahidian, Babak Eshrati
Abstract:
Introduction: Elder abuse, a very complex issue with diverse definitions and names, has been very slow to capture the public eye and public policy since it is manifested at many levels. It requires the involvement of different types of professionals. While elder abuse is not a new phenomenon, the speed of population ageing worldwide is likely to lead to an increase in its incidence and prevalence. Elder abuse has devastating consequences for older persons, such as poor quality of life, psychological distress, and loss of property and security. It is also associated with increased mortality and morbidity. Elder abuse is a problem that manifests itself in both rich and poor countries and at all levels of society. Purpose: The purpose of this study is to determine the prevalence of elder abuse and the effects of social factors on it in Markazi Province. Materials and methods: The study population consisted of all elders in Markazi Province who were reachable by geographical address in the table of rural and urban household societies. The study was cross-sectional, with multi-phase sampling: the first phase was classification into rural and urban areas, and the second was cluster sampling with equal clusters. The sample size was estimated at 472 persons and increased by the design effect to 1110 persons. Data were collected by questionnaire and analyzed with SPSS using the chi-square test. Results: This study showed that 70 persons were abused (42.8% male and 57.2% female); the mean age was 74.7 years. 64% were married and 31% were widowed. There was no statistically significant association between elder abuse and area of living (p=0.299), occupation (p=0.104), education (p=0.358) or age (p=0.104). There were significant associations with physical impairment (p=0.08) and movement impairment (p=0.008). Conclusion: The results verify that maltreatment occurs among aged persons. Analysis of the data indicated that elder abuse exists in every socioeconomic group, at every level of education, in urban and rural areas, and in men and women. The prevalence of elder abuse was 6.3% (70 persons), which is consistent with data from developed countries based on limited samples.
Keywords: elder abuse, education, occupation, area of living
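The chi-square tests of association reported above can be reproduced in outline with scipy; the contingency table below is purely hypothetical, chosen only to show the shape of the analysis:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = abused / not abused, columns = impaired / not impaired.
table = [[40, 30],
         [310, 730]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```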
459 Cooperative Learning Promotes Successful Learning. A Qualitative Study to Analyze Factors that Promote Interaction and Cooperation among Students in Blended Learning Environments
Authors: Pia Kastl
Abstract:
The potentials of blended learning are the flexibility of learning and the possibility to get in touch with lecturers and fellow students on site. By combining face-to-face sessions with digital self-learning units, the learning process can be optimized and learning success increased. To examine whether blended learning outperforms online and face-to-face teaching, a theory-based questionnaire survey was conducted. The results show that interaction and cooperation among students are poorly provided for in blended learning, and face-to-face teaching performs better in this respect. The aim of this article is to identify concrete suggestions students have for improving cooperation and interaction in blended learning courses. For this purpose, interviews were conducted with students from various academic disciplines in face-to-face, online, or blended learning courses (N=60). The questions referred to opinions and suggestions for improvement regarding the course design of the respective learning environment. The analysis was carried out by qualitative content analysis. The results show that students perceive the interaction as beneficial to their learning. They verbalize their knowledge and are exposed to different perspectives. In addition, emotional support is particularly important in exam phases. Interaction and cooperation were primarily enabled in the face-to-face component of the courses studied, while there was very limited contact with fellow students in the asynchronous component. Forums offered were hardly used or not used at all because the barrier to asking a question publicly is too high, and students prefer private channels for communication. This is accompanied by the disadvantage that the interaction occurs only among people who already know each other. Creating contacts is not fostered in the blended learning courses. Students consider optimization to be a task for the lecturers in the face-to-face sessions: here, interaction and cooperation should be encouraged through get-to-know-you rounds or group work. It is important to group the participants randomly to establish contact with new people. In addition, sufficient time for interaction is desired in the lecture, e.g., in the context of discussions or partner work. In the digital component, students prefer synchronous exchange at a fixed time, for example, in breakout rooms or an MS Teams channel. The results provide an overview of how interaction and cooperation can be implemented in blended learning courses. Positive design possibilities are partly dependent on subject area and course. Future studies could tie in here with a course-specific analysis.
Keywords: blended learning, higher education, hybrid teaching, qualitative research, student learning
458 Estimation of Physico-Mechanical Properties of Tuffs (Turkey) from Indirect Methods
Authors: Mustafa Gok, Sair Kahraman, Mustafa Fener
Abstract:
In rock engineering applications, determining uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), and basic index properties such as density, porosity, and water absorption is crucial for the design of both underground and surface structures. However, obtaining reliable samples for direct testing, especially from rocks that weather quickly and have low strength, is often challenging. In such cases, indirect methods provide a practical alternative for estimating the physical and mechanical properties of these rocks. In this study, tuff samples collected from the Cappadocia region (Nevşehir) in Turkey were subjected to indirect testing methods. Over 100 tests were conducted, using the needle penetrometer index (NPI), point load strength index (PLI), and disc shear index (BPI) to estimate the uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), density, and water absorption index of the tuffs. The relationships between the results of these indirect tests and the target physical properties were evaluated using simple and multiple regression analyses. The findings of this research reveal strong correlations between the indirect methods and the mechanical properties of the tuffs. Both uniaxial compressive strength and Brazilian tensile strength could be accurately predicted using NPI, PLI, and BPI values. The regression models developed in this study allow for rapid, cost-effective assessments of tuff strength in cases where direct testing is impractical. These results are particularly valuable for geological engineering applications, where time and resource constraints exist. This study highlights the significance of using indirect methods as reliable predictors of the mechanical behavior of weak rocks like tuffs. Further research is recommended to explore the application of these methods to other rock types with similar characteristics and to compare the results with those of established direct test methods.
Keywords: brazilian tensile strength, disc shear strength, indirect methods, tuffs, uniaxial compressive strength
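The multiple regression models used here follow the usual least-squares form UCS = b0 + b1·NPI + b2·PLI + b3·BPI. A sketch with synthetic index readings (the paper's coefficients and data are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
npi, pli, bpi = rng.random((3, 30)) * 10                        # synthetic index-test readings
ucs = 2.5 * npi + 8.0 * pli + 4.0 * bpi + rng.normal(0, 1, 30)  # synthetic UCS, MPa

X = np.column_stack([np.ones(30), npi, pli, bpi])               # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, ucs, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((ucs - pred) ** 2) / np.sum((ucs - ucs.mean()) ** 2)
print(coef, r2)                                                 # fitted coefficients and R^2
```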
457 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques
Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang
Abstract:
Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy to overcome this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of the FIESTA image is related to T1/T2 and is different from that of SSFSE. The advantages and disadvantages of these two scanning sequences for fetal imaging have not been clearly demonstrated yet. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations. The image qualities and influencing factors among these three techniques were explored. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited to undergo fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA can provide good image quality among these three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to cause banding artifacts. FSPGR-T1WI renders a lower Signal-to-Noise Ratio (SNR) because it severely suffers from the impact of maternal and fetal movements. The scan times for these three scanning sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR). In conclusion, all three rapid MR scanning sequences can produce high-contrast and high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques so that the motion artifacts caused by fetal movements can be reduced. A good understanding of the characteristics of these three rapid MRI techniques is helpful for technologists to obtain reproducible fetal anatomy images with high quality for prenatal diagnosis.
Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE
456 Learning Gains and Constraints Resulting from Haptic Sensory Feedback among Preschoolers' Engagement during Science Experimentation
Authors: Marios Papaevripidou, Yvoni Pavlou, Zacharias Zacharia
Abstract:
Embodied cognition and additional (touch) sensory channel theories indicate that physical manipulation is crucial to learning since it provides, among others, touch sensory input, which is needed for constructing knowledge. Given these theories, the use of Physical Manipulatives (PM) becomes a prerequisite for learning. On the other hand, empirical research on learning with Virtual Manipulatives (VM) (e.g., simulations) has provided evidence showing that the use of PM, and thus haptic sensory input, is not always a prerequisite for learning. In order to investigate which means of experimentation, PM or VM, are required for enhancing student science learning at the kindergarten level, an empirical study was conducted that sought to investigate the impact of haptic feedback on the conceptual understanding of pre-school students (n=44, mean age=5.7) in three science domains: beam balance (D1), sinking/floating (D2) and springs (D3). The participants were equally divided into two groups according to the type of manipulatives used (PM: presence of haptic feedback; VM: absence of haptic feedback) during a semi-structured interview for each of the domains. All interviews followed the Predict-Observe-Explain (POE) strategy and consisted of three phases: initial evaluation, experimentation, final evaluation. The data collected through the interviews were analyzed qualitatively (open coding for identifying students' ideas in each domain) and quantitatively (use of non-parametric tests). Findings revealed that the haptic feedback enabled students to distinguish heavier from lighter objects when held in their hands during experimentation. In D1 the haptic feedback did not differentiate PM and VM students' conceptual understanding of the function of the beam as a means to compare the mass of objects. In D2 the haptic feedback appeared to have a negative impact on PM students' learning. Feeling the weight of an object strengthened PM students' misconception that heavier objects always sink, whereas the scientifically correct idea that the material of an object determines its sinking/floating behavior in the water was found to be significantly more prevalent among the VM students than the PM ones. In D3 the PM students significantly outperformed the VM students with regard to the idea that the heavier an object is, the more the spring will expand, indicating that the haptic input experienced by the PM students served as an advantage to their learning. These findings point to the fact that PMs, and thus touch sensory input, might not always be a requirement for science learning and that VMs could be considered, under certain circumstances, as a viable means for experimentation.
Keywords: haptic feedback, physical and virtual manipulatives, pre-school science learning, science experimentation
455 Evaluation of Genetic Fidelity and Phytochemical Profiling of Micropropagated Plants of Cephalantheropsis obcordata: An Endangered Medicinal Orchid
Authors: Gargi Prasad, Ashiho A. Mao, Deepu Vijayan, S. Mandal
Abstract:
The main objective of the present study was to optimize and develop an efficient protocol for in vitro propagation of a medicinally important orchid, Cephalantheropsis obcordata (Lindl.) Ormerod, along with genetic stability analysis of the regenerated plants. This plant has been traditionally used in Chinese folk medicine, and the decoction of the whole plant is known to possess anticancer activity. Nodal segments used as explants were inoculated on Murashige and Skoog (MS) medium supplemented with various concentrations of isopentenyl adenine (2iP). The rooted plants were successfully acclimatized in the greenhouse with a 100% survival rate. Inter-simple sequence repeat (ISSR) markers were used to assess the genetic fidelity of the in vitro raised plants and the mother plant. Monomorphic bands were revealed, showing the absence of polymorphism in all in vitro raised plantlets analyzed and confirming the genetic uniformity among the regenerants. Phytochemical analysis was done to compare the antioxidant activities and HPLC fingerprinting assay of 80% aqueous ethanol extract of the leaves and stem of in vitro and in vivo grown C. obcordata. The extracts of the plants were examined for their antioxidant activities by using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging method, the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical scavenging ability, reducing power capacity, and estimation of total phenolic content, flavonoid content and flavonol content. A simplified method for the detection of ascorbic acid, phenolic acid and flavonoid content was also developed by using reversed-phase high-performance liquid chromatography (HPLC). This is the first report on the micropropagation, genetic integrity study and quantitative phytochemical analysis of in vitro regenerated plants of C. obcordata.
Keywords: Cephalantheropsis obcordata, genetic fidelity, ISSR markers, HPLC
454 Comparison of Receiver Operating Characteristic Curve Smoothing Methods
Authors: D. Sigirli
Abstract:
The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aims to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, which is a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and as it is a step function, there can be different false positive rates for a single true positive rate value and vice versa. Besides, since the true ROC curve is a smooth curve, the estimated ROC curve, being in a jagged form, underestimates it. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored to smooth a ROC curve. These include using kernel estimates, using log-concave densities, fitting parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we aimed to propose a smooth ROC curve estimation based on a boundary-corrected kernel function and to compare the performances of ROC curve smoothing methods for diagnostic test results coming from different distributions in different sample sizes. We performed a simulation study to compare the performances of the different methods for different scenarios with 1000 repetitions. It is seen that the performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse compared to the binormal model when in fact the underlying samples were generated from the normal distribution.
Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve
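A plain Gaussian-kernel smoothed ROC estimate (without the boundary correction proposed in the paper) can be sketched as kernel-smoothed survival functions of the two score samples evaluated at shared thresholds:

```python
import numpy as np
from scipy.stats import norm

def smooth_roc(neg, pos, grid=np.linspace(0, 1, 101)):
    """Kernel-smoothed (FPR, TPR) pairs using Silverman-type bandwidths."""
    h_neg = 1.06 * np.std(neg) * len(neg) ** -0.2
    h_pos = 1.06 * np.std(pos) * len(pos) ** -0.2
    t = np.quantile(np.concatenate([neg, pos]), grid)    # shared thresholds
    sf = lambda x, h: norm.sf((t[:, None] - x[None, :]) / h).mean(axis=1)
    return sf(neg, h_neg), sf(pos, h_pos)                # (FPR, TPR)

neg = np.random.normal(0, 1, 200)    # non-diseased test scores (synthetic)
pos = np.random.normal(1, 1, 150)    # diseased test scores (synthetic)
fpr, tpr = smooth_roc(neg, pos)
print(f"smoothed AUC ~ {-np.trapz(tpr, fpr):.3f}")       # FPR decreases along the grid
```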
453 Tri/Tetra-Block Copolymeric Nanocarriers as a Potential Ocular Delivery System of Lornoxicam: Experimental Design-Based Preparation, in-vitro Characterization and in-vivo Estimation of Transcorneal Permeation
Authors: Alaa Hamed Salama, Rehab Nabil Shamma
Abstract:
Introduction: Polymeric micelles that can deliver drugs to intended sites of the eye have attracted much scientific attention recently. The aim of this study was to review the aqueous-based formulation of drug-loaded polymeric micelles that hold significant promise for ophthalmic drug delivery. This study investigated the synergistic performance of mixed polymeric micelles made of linear and branched poly(ethylene oxide)-poly(propylene oxide) for the more effective encapsulation of Lornoxicam (LX) as a hydrophobic model drug. Methods: The co-micellization process of 10% binary systems combining different weight ratios of the highly hydrophilic poloxamers Synperonic® PE/P84 and Synperonic® PE/F127 and the hydrophobic poloxamine counterpart (Tetronic® T701) was investigated by means of photon correlation spectroscopy and cloud point. The drug-loaded micelles were tested for their solubilizing capacity towards LX. Results: The results showed a sharp solubility increase from 0.46 mg/ml up to more than 4.34 mg/ml, representing an approximately 9.4-fold increase. The optimized formulation was selected to achieve maximum drug solubilizing power and clarity with the lowest possible particle size. The optimized formulation was characterized by 1H-NMR analysis, which revealed complete encapsulation of the drug within the micelles. Further investigations by histopathological and confocal laser studies revealed the non-irritant nature and good corneal penetrating power of the proposed nano-formulation. Conclusion: An LX-loaded polymeric nanomicellar formulation was fabricated, allowing easy application of the drug in the form of clear eye drops that do not cause blurred vision or discomfort, thus achieving high patient compliance.
Keywords: confocal laser scanning microscopy, histopathological studies, Lornoxicam, micellar solubilization
452 Re-Evaluating the Hegemony of English Language in West Africa: A Meta-Analysis Review of the Research, 2003-2018
Authors: Oris Tom-Lawyer, Michael Thomas
Abstract:
This paper seeks to analyse the hegemony of the English language in Western Africa through the lens of educational policies and the socio-economic functions of the language. It is based on the premise that there is a positive link between the English language and development contexts. The study aims to fill a gap in the research literature by examining the usefulness of hegemony as a concept to explain the role of the English language in the region, thus countering the negative connotations that often accompany it. The study identified four main research questions: i. What are the socio-economic functions of English in francophone/lusophone countries? ii. What factors promote the hegemony of English in anglophone countries? iii. To what extent is English hegemonic in West Africa? iv. What are the implications of the non-hegemony of English in Western Africa? Based on a meta-analysis of the research literature between 2003 and 2018, the findings of the study revealed that in francophone/lusophone countries, English functions in the following socio-economic domains: peacekeeping missions, regional organisations, the commercial and industrial sectors, as an unofficial international language, and as a foreign language. The factors that promote the linguistic hegemony of English in anglophone countries are English as an official language, a medium of instruction, a lingua franca, a cultural language, the language of politics, the language of commerce, a channel of development, and English for media and entertainment. In addition, the extent of the hegemony of English in West Africa can be viewed from the factors that contribute to the non-hegemony of English in the region: the French language, the Portuguese language, the French culture, neo-colonialism, the level of poverty, and the economic ties of France to its former colonies. Finally, the implications of the non-hegemony of the English language in West Africa are industrial backwardness, the poverty rate, lack of social mobility, school dropout rates, growing interest in English, access to limited internet information, and lack of extensive career opportunities. The paper concludes that the hegemony of English has resulted in the development of anglophone countries in Western Africa, while in the francophone/lusophone regions of the continent, industrial backwardness and low literacy rates have been consequences of English language marginalisation. In conclusion, the paper makes several recommendations, including the need for the early introduction of English into French curricula as part of a potential solution.
Keywords: developmental tool, English language, linguistic hegemony, West Africa
451 The Impact of Demographic Profile on Strategic HRM Practices and its Challenges Faced by HR Managers in IT Firm, India: An Empirical Study
Authors: P. Saravanan, A. Vasumathi
Abstract:
Strategic Human Resource Management (SHRM) plays a vital role in formulating the policies and strategies of the company, in order to fulfil employees' requirements and to perform the job efficiently within the organisation. Human Resource Management (HRM) functions help in attracting and motivating a talented workforce for the organisation; by increasing the performance of individuals, they result in achieving the defined goals and objectives of the company. The HRM function plays an important role in managing the workers within the organisation through a formal communication channel. HR functions play a mediatory role between the employees and the employers within the organisation, helping to improve the efficacy and skills of the individuals employed within the company. The HR manager acts as a change agent, enabling and driving the change management program with respect to business HR functions and the future requirements of the company. Due to change in the business environment, the focus of the HR manager is shifting from administrative/personnel functions to a strategic business HR function. HR managers play a strategic role in managing various HR functions such as recruitment and selection, human resource information systems, manpower planning, performance management, conflict management, employee engagement, compensation management, policy formation and retention strategies followed within the industry. Major challenges faced by HR managers at the workplace are managing the level of engagement of the talented resources within the organisation, reducing conflicts at the workplace, mapping the talented resources through the succession planning process, building an effective appraisal process and performance management system, and mapping compensation based on the skills and experience possessed by the employee within the company. The authors conducted a study with a sample of 75 HR managers from an Indian IT company, selected through systematic sampling. This study identifies that female employees face less conflict with their managers than male employees do within the organisation, and it also determines the impact of demographic profile on strategic HRM practices and the challenges faced by HR managers in an IT firm in India.
Keywords: strategic human resource management, change agent, employee engagement, performance management, succession planning and conflict management
450 Disaster Management Supported by Unmanned Aerial Systems
Authors: Agoston Restas
Abstract:
Introduction: This paper describes many initiatives and practical examples in which Unmanned Aerial Systems (UAS) have recently been used to support disaster management. Since operating manned aircraft at disasters is usually expensive and often impossible, managers in many cases have to forgo aerial activity. UAS can be an alternative and cost-effective solution for supporting disaster management. Methods: This article uses a thematic division of UAS applications based on two key elements: the time flow of managing disasters and their tactical requirements. Logically, UAS can be used for pre-disaster activity, activity immediately after the occurrence of a disaster, and activity after the primary disaster elimination. The paper addresses different disasters, such as dangerous material releases, floods, earthquakes, forest fires and human-induced disasters. The research used function analysis, practical experiments, mathematical formulas, economic analysis and also expert estimation. The author gathered international examples and also drew on his own experience in this field. Results and discussion: An earthquake is a rapidly escalating disaster, where, many times, there is no other way to achieve a rapid damage assessment than aerial reconnaissance. For special rescue teams, UAS application can help greatly in rapid location selection, identifying where enough space remained for victims to survive. Floods are a typical slow-onset disaster. Nevertheless, managing floods is a very complex and difficult task. It requires continuous monitoring of dykes and of flooded and threatened areas. UAS can help managers greatly in keeping an area under observation. Forest fires are disasters where the tactical application of UAS is already well developed. It can be used for fire detection, intervention monitoring and also for post-fire monitoring. In case of a nuclear accident or hazardous material leakage, UAS is also a very effective, or may be the only, tool for supporting disaster management. The paper shows some efforts using UAS to avoid human-induced disasters in low-income countries as part of health cooperation.
Keywords: disaster management, floods, forest fires, Unmanned Aerial Systems
449 Total Plaque Area in Chronic Renal Failure
Authors: Hernán A. Perez, Luis J. Armando, Néstor H. García
Abstract:
Background and aims: Cardiovascular disease rates are very high in patients with chronic renal failure (CRF), but the underlying mechanisms are incompletely understood. Traditional cardiovascular risk factors do not explain the increased risk, and observational studies have observed paradoxical or absent associations between classical risk factors and mortality in dialysis patients. Large randomized controlled trials – the 4D Study, AURORA and the ALERT study – found that statin therapy in CRF does not reduce cardiovascular events. These results may be a consequence of the 'accelerated atherosclerosis' observed in these patients. The objective of this study was to investigate whether carotid total plaque area (TPA), a measure of carotid plaque burden, increases at progressively lower creatinine clearance in patients with CRF. We studied a cohort of patients with CRF not on dialysis, reasoning that risk factor associations might be more easily discerned before end-stage renal disease. Methods: The Blossom DMO Argentina ethics committee approved the study, and informed consent was obtained from each participant. We performed a cohort study in 412 patients with Stage 1, 2 and 3 CRF. Clinical and laboratory data were obtained. TPA was determined using bilateral carotid ultrasonography. The Modification of Diet in Renal Disease estimation formula was used to determine renal function. ANOVA was used when appropriate. Results: The Stage 1 CRF group (n=16, 43±2 yo) had a blood pressure of 123±2/78±2 mmHg, BMI 30±1, LDL cholesterol 145±10 mg/dl, HbA1c 5.8±0.4% and the lowest TPA, 25.8±6.9 mm². Stage 2 CRF (n=231, 50±1 yo) had a blood pressure of 132±1/81±1 mmHg, LDL cholesterol 125±2 mg/dl, HbA1c 6±0.1% and TPA 48±10 mm² (p<0.05 vs CRF stage 1), while Stage 3 CRF (n=165, 59±1 yo) had a blood pressure of 134±1/81±1, LDL cholesterol 125±3 mg/dl, HbA1c 6±0.1% and TPA 71±6 mm² (p<0.05 vs CRF stages 1 and 2). Conclusion: Our data indicate that TPA increases as renal function deteriorates and that this increase is not related to LDL cholesterol and triglyceride levels. We suggest that mechanisms other than the classic ones are responsible for the observed excess of cardiovascular disease in CRF patients; finally, determination of total plaque area should be used to measure the effects of anti-atherosclerotic therapy.
Keywords: hypertension, chronic renal failure, atherosclerosis, cholesterol
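The stage-wise ANOVA comparison reported above has the following shape in scipy; the TPA values below are hypothetical samples scattered around the reported group means, not the study data:

```python
from scipy.stats import f_oneway

stage1 = [25.8, 20.1, 31.4, 28.0]   # TPA, mm^2 (hypothetical)
stage2 = [48.0, 52.3, 41.7, 55.9]
stage3 = [71.0, 66.5, 78.2, 69.4]
print(f_oneway(stage1, stage2, stage3))   # F statistic and p-value
```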
448 Antihypertensive Effect of Formulated Apium graveolens: A Randomized, Double-Blind, Placebo-Controlled Clinical Trial
Authors: Maryam Shayani Rad, Seyed Ahmad Mohajeri, Mohsen Mouhebati, Seyed Danial Mousavi
Abstract:
High blood pressure is one of the most important and serious health threats because most affected people have no symptoms, and it can lead to sudden heart attack, heart failure, and stroke. Nowadays, herbal medicine is one of the best and safest treatment strategies, having no adverse effects. Apium graveolens (celery) can be used as an alternative treatment for many health conditions, such as hypertension. Natural compounds reduce blood pressure via different mechanisms; the Apium graveolens extract provides potent calcium channel blocking properties. A randomized, double-blind, placebo-controlled, cross-over clinical trial was conducted to evaluate the efficacy of a formulated Apium graveolens extract with a maximum yield of 3-n-butylphthalide in reducing systolic and diastolic blood pressure in patients with hypertension. 54 hypertensive patients in the range of 20-68 years old were randomly assigned to the treatment group (26 cases) and the placebo control group (26 cases) and were crossed over after the washout duration. The treatment group received at least 2 grams of formulated powder in hard capsules orally, before each meal, 2 times daily. The control group received 2 grams of placebo in hard capsules orally, identical in shape, timing, and dose to those of the treatment group. Treatment was administered over 12 weeks with a 4-week washout period in the middle of the study, meaning 4 weeks of drug consumption for the treatment group, 4 weeks of washout, and 4 weeks of placebo consumption, and vice versa for the placebo control group. The clinical assessment was done 4 times, at the beginning and end of both the drug and placebo consumption periods, by a 24-hour ambulatory blood pressure monitoring (ABPM) holter, which measured blood pressure every 15 minutes continuously. There was a statistically significant decrease in both systolic blood pressure (SBP) and diastolic blood pressure (DBP) at the end of the drug period compared to baseline. The change after 4 weeks averaged about 12.34 mmHg for the SBP (p < 0.005) and 7.83 mmHg for the DBP (p < 0.005). The results from this clinical trial showed that this Apium graveolens extract formulation, at the mentioned dosage, had a significant blood-pressure-lowering effect in hypertensive patients.
Keywords: Apium graveolens extract, clinical trial, cross-over, hypertension
447 Applications of Hyperspectral Remote Sensing: A Commercial Perspective
Authors: Tuba Zahra, Aakash Parekh
Abstract:
Hyperspectral remote sensing refers to imaging of objects or materials in narrow, conspicuous spectral bands. Hyperspectral images (HSI) enable the extraction of spectral signatures for the objects or materials observed. These images contain information about the reflectance of each pixel across the electromagnetic spectrum. The technique enables the acquisition of data simultaneously in hundreds of spectral bands with narrow bandwidths and can provide detailed contiguous spectral curves that traditional multispectral sensors cannot offer. The contiguous, narrow bandwidth of hyperspectral data facilitates the detailed surveying of Earth's surface features. This would otherwise not be possible with the relatively coarse bandwidths acquired by other types of imaging sensors. Hyperspectral imaging provides significantly higher spectral and spatial resolution. There are several use cases that represent the commercial applications of hyperspectral remote sensing. Each use case represents just one of the ways that hyperspectral satellite imagery can support operational efficiency in the respective vertical. Some use cases are specific to the VNIR bands, while others are specific to the SWIR bands. This paper discusses the different commercially viable use cases that are significant for HSI application areas such as agriculture, mining, oil and gas, defense, environment, and climate, to name a few. Theoretically, there are any number of use cases for each of the application areas, but an attempt has been made to streamline the use cases depending upon economic feasibility and commercial viability, and to present a review of the literature from this perspective. Some of the specific use cases with respect to agriculture are crop species (sub-variety) detection, soil health mapping, pre-symptomatic crop disease detection, invasive species detection, crop condition optimization, yield estimation, and supply chain monitoring at scale. Similarly, each of the industry verticals has a specific commercially viable use case, which is discussed in the paper in detail.
Keywords: agriculture, mining, oil and gas, defense, environment and climate, hyperspectral, VNIR, SWIR
446 The People's Tribunal: Empowerment by Survivors for Survivors of Child Abuse
Authors: Alan Collins
Abstract:
This study explains how The People's Tribunal empowered survivors of child abuse. It examines how people's tribunals can be an effective means of empowerment; the challenges of empowerment – expectation v. reality; the findings and how they reflect other inquiry findings; and the importance of listening and learning from survivors. UKCSAPT, 'The People's Tribunal', was established by survivors of child sex abuse and members of civil society to investigate historic cases of institutional sex abuse. The independent inquiry, led by a panel of four judges, listened to evidence spanning four decades from survivors and experts. A common theme throughout these accounts showed that a series of institutional failures prevented abuse from being reported, and that there are clear links between children being rendered vulnerable by these failures and predatory abuse on an organised scale. It made a series of recommendations, including the establishment of a permanent and open forum for victims to share experiences and give evidence, better links between mental health services and police investigations, and training for police and judiciary professionals on the effects of undisclosed sexual abuse. The main findings of the UKCSAPT report were: there are clear links between children rendered vulnerable by institutional failures and predatory abuse on an organised scale, even if these links often remain obscure; UK governmental institutions have failed to provide survivors with meaningful opportunities for either healing or justice; the vital mental health needs of survivors are not being met, and this undermines both their psychological recovery and access to justice; police and other authorities often lack the training to understand the complex reasons for the inability of survivors to immediately disclose a history of abuse; and without far-reaching changes in institutional culture and practices, the sexual abuse of children will continue to be a significant scourge in the UK. The report also outlined a series of recommendations for improving reporting, mental health provision, and access to justice for victims, including: a permanent, government-funded popular tribunal should be established to enable survivors to come forward and tell their stories; survivors giving evidence should be assigned an advocate to assist their access to justice; mental health services should be linked to police investigations to help victims disclose abuse; and victims who fear reprisals should be provided with a channel through which to give evidence anonymously.
Keywords: empowerment, survivors, sexual, abuse
445 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of an electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by flow conditions, geometry, electric field and the particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain particle trajectories in the device and therefore to calculate the signal reported by each electrometer. According to the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information content about the size distribution of the injected particles, we proposed a benchmark for the assessment of the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, according to the Shannon definition, is the 'average amount of information contained in an event, sample or character extracted from a data stream'. Evaluating the responses (signals) obtained via various configurations of detecting rings, the best configuration, which gave the best predictions about the size distributions of injected particles, was the modified configuration. It was also the one that had the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced to the simulations and the predicted size distributions were compared to the exact size distributions.
Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von Neumann entropy
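A minimal sketch of the entropy benchmark, assuming one plausible construction: normalize the instrument's transfer matrix into a density-like matrix and take S = -Σ λ ln λ over its eigenvalues (the paper's exact construction may differ):

```python
import numpy as np

def von_neumann_entropy(T):
    """Entropy of a transfer matrix T (electrometer channels x size bins)."""
    rho = T @ T.conj().T
    rho /= np.trace(rho)               # density-like normalization (assumed)
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]             # drop numerical zeros
    return float(-np.sum(lam * np.log(lam)))

T = np.random.rand(8, 16)              # hypothetical 8-channel, 16-bin transfer matrix
print(von_neumann_entropy(T))
```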
444 Demarcating Wetting States in Pressure-Driven Flows by Poiseuille Number
Authors: Anvesh Gaddam, Amit Agrawal, Suhas Joshi, Mark Thompson
Abstract:
An increase in the surface area to volume ratio with a decrease in characteristic length scale leads to a rapid increase in pressure drop across the microchannel. Texturing the microchannel surfaces reduces the effective surface area, thereby decreasing the pressure drop. Surface texturing introduces two wetting states: a metastable Cassie-Baxter state and a stable Wenzel state. Predicting the wetting transition in textured microchannels is essential for identifying optimal parameters leading to maximum drag reduction. Optical methods allow visualization only in confined areas; therefore, obtaining whole-field information on the wetting transition is challenging. In this work, we propose a non-invasive method to capture wetting transitions in textured microchannels under flow conditions. To this end, we tracked the behavior of the Poiseuille number Po = f·Re (with f the friction factor and Re the Reynolds number) for a range of flow rates (5 < Re < 50), and different wetting states were qualitatively demarcated by observing the inflection points in the f·Re curve. Microchannels with both longitudinal and transverse ribs with a fixed gas fraction (δ, the ratio of shear-free area to total area) and at different confinement ratios (ε, the ratio of rib height to channel height) were fabricated. The measured pressure drop values for all flow rates across the textured microchannels were converted into the Poiseuille number. The transient behavior of the pressure drop across the textured microchannels revealed the collapse of the liquid-gas interface into the gas cavities. Three wetting states were observed at ε = 0.65 for both longitudinal and transverse ribs, whereas an early transition occurred at Re ~ 35 for longitudinal ribs at ε = 0.5, due to spontaneous flooding of the gas cavities as the liquid-gas interface ruptured at the inlet. In addition, the pressure drop in the Wenzel state was found to be less than in the Cassie-Baxter state. Three-dimensional numerical simulations confirmed the initiation of the completely wetted Wenzel state in the textured microchannels. Furthermore, laser confocal microscopy was employed to identify the location of the liquid-gas interface in the Cassie-Baxter state. In conclusion, the present method can overcome the limitations posed by existing techniques and conveniently capture the wetting transition in textured microchannels.
Keywords: drag reduction, Poiseuille number, textured surfaces, wetting transition
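Converting a measured pressure drop into the Poiseuille number Po = f·Re uses the Darcy friction factor; a sketch with a made-up microchannel reading (water properties assumed):

```python
def poiseuille_number(dp, length, d_h, u, rho=998.0, mu=1.0e-3):
    """Po = f * Re from a pressure drop dp (Pa) over channel length (m)."""
    f = dp * d_h / (length * 0.5 * rho * u ** 2)   # Darcy friction factor
    re = rho * u * d_h / mu                        # Reynolds number
    return f * re

# Hypothetical reading: 4 kPa over 30 mm, hydraulic diameter 200 um, U = 0.1 m/s
print(poiseuille_number(4e3, 0.03, 200e-6, 0.1))   # ~107
```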
443 A Review of Benefit-Risk Assessment over the Product Lifecycle
Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris
Abstract:
Benefit-risk assessment (BRA) is a valuable tool that takes place at multiple stages during a medicine's lifecycle, and this assessment can be conducted in a variety of ways. The aim was to summarize current BRA methods used during approval decisions and in post-approval settings and to identify possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and through regulatory agencies over the past five years have been examined. BRA implies the review of two dimensions: the dimension of benefits (determined mainly by the therapeutic efficacy) and the dimension of risks (which comprises the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines during the product lifecycle (from Phase I trials, to the authorization procedure, post-marketing surveillance and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); measures for benefits and risks that are usually endpoint specific (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders' preferences (utilities). All these approaches share the following two common goals: to assist this analysis and to improve the communication of decisions, but each is subject to its own specific strengths and limitations. Before using any method, its utility, complexity, the extent to which it is established, and the ease of interpretation of results should be considered. Despite widespread and long-time use, BRA is subject to debate, suffers from a number of limitations, and is currently still under development. The use of formal, systematic structured approaches to BRA for regulatory decision-making and quantitative methods to support BRA during the product lifecycle is a standard practice in medicine that is subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.
Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches
442 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures
Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk
Abstract:
In the recent past, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concern about the volatility transmission among them. The problem is further exacerbated by commodity volatility caused by other commodity price fluctuations; hence, deciding on a hedging strategy has become both costly and difficult. Thus, this paper analyzes the volatility spillover effects among major agricultural commodities – corn, soybeans, wheat and rice – to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching regime approach to analyzing the issue of volatility spillovers in different economic conditions, namely economic upturns and downturns. In particular, we investigate relationships and volatility transmissions between these commodities in different economic conditions. We propose a copula-based multivariate Markov Switching GARCH model with two regimes that depend on economic conditions and perform a simulation study to check the accuracy of our proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families – two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation techniques to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application study of agricultural commodities, the weekly data used run from 4 January 2005 to 1 September 2016, covering 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to different economic conditions. In addition, the results on hedge effectiveness also suggest the optimal cross-hedge strategies in different economic conditions, especially economic upturns and downturns.
Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach
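The cross-hedge ratio mentioned above reduces, in its minimum-variance form, to h* = ρ·σs/σf, where the correlation ρ would come from the fitted regime-dependent copula; the sketch below substitutes a plain sample estimate and synthetic returns:

```python
import numpy as np

def min_variance_hedge_ratio(spot_ret, fut_ret):
    rho = np.corrcoef(spot_ret, fut_ret)[0, 1]      # stands in for the copula correlation
    return rho * np.std(spot_ret) / np.std(fut_ret)

rng = np.random.default_rng(7)
spot = rng.normal(0, 0.02, 612)                     # e.g., weekly corn spot returns (synthetic)
fut = 0.8 * spot + rng.normal(0, 0.01, 612)         # correlated futures returns (synthetic)
print(min_variance_hedge_ratio(spot, fut))
```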
441 Multi-Size Continuous Particle Separation on a Dielectrophoresis-Based Microfluidics Chip
Authors: Arash Dalili, Hamed Tahmouressi, Mina Hoorfar
Abstract:
Advances in lab-on-a-chip (LOC) devices have enabled significant progress in the manipulation, separation, and isolation of particles and cells. Among the different active and passive particle manipulation methods, dielectrophoresis (DEP) has proven to be a versatile mechanism, as it is label-free, cost-effective, simple to operate, and has high manipulation efficiency. DEP has been applied to a wide range of biological and environmental applications. A popular form of DEP device is the continuous manipulation of particles by using co-planar slanted electrodes, which utilizes a sheath flow to focus the particles onto one side of the microchannel. When particles enter the DEP manipulation zone, the negative DEP (nDEP) force generated by the slanted electrodes deflects the particles laterally towards the opposite side of the microchannel. The lateral displacement of the particles depends on multiple parameters, including the geometry of the electrodes; the width, length and height of the microchannel; the size of the particles; and the throughput. In this study, COMSOL Multiphysics® modeling along with experimental studies is used to investigate the effect of the aforementioned parameters. The electric field between the electrodes and the induced DEP force on the particles are modelled by COMSOL Multiphysics®. The simulation model is used to show the effect of the DEP force on the particles and how the geometry of the electrodes (the width of the electrodes and the gap between them) plays a role in the manipulation of polystyrene microparticles. The simulation results show that increasing the electrode width up to a certain limit, which depends on the height of the channel, increases the induced DEP force. Also, decreasing the gap between the electrodes leads to a stronger DEP force. Based on these results, criteria for the fabrication of the electrodes were found, and soft lithography was used to fabricate interdigitated slanted electrodes and microchannels. Experimental studies were run to find the effect of the flow rate and the geometrical parameters of the microchannel, such as length, width, and height, as well as the electrodes' angle, on the displacement of 5 µm, 10 µm and 15 µm polystyrene particles. An empirical equation is developed to predict the displacement of the particles under different conditions. It is shown that the displacement of the particles is greater for longer and lower-height channels, lower flow rates, and bigger particles. On the other hand, the effect of the angle of the electrodes on the displacement of the particles was negligible. Based on the results, we have developed an optimum design (in terms of efficiency and throughput) for three-size separation of particles.
Keywords: COMSOL Multiphysics, dielectrophoresis, microfluidics, particle separation
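The nDEP force driving the lateral deflection follows the standard time-averaged expression F = 2πεm·r³·Re[K(ω)]·∇|E|², with K the Clausius-Mossotti factor. A sketch with assumed material parameters (for polystyrene in aqueous media, Re[K] is typically negative, i.e., nDEP):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(eps_p, eps_m, sig_p, sig_m, omega):
    """Real part of the Clausius-Mossotti factor for a sphere in a medium."""
    ep = eps_p - 1j * sig_p / omega
    em = eps_m - 1j * sig_m / omega
    return ((ep - em) / (ep + 2 * em)).real

def dep_force(radius, eps_m, re_k, grad_e2):
    """Time-averaged DEP force magnitude, N."""
    return 2 * np.pi * eps_m * radius ** 3 * re_k * grad_e2

# Assumed values: polystyrene bead in water at 1 MHz, illustrative field gradient.
re_k = clausius_mossotti(2.5 * EPS0, 78 * EPS0, 1e-3, 1e-2, 2 * np.pi * 1e6)
print(re_k, dep_force(5e-6, 78 * EPS0, re_k, 1e13))  # 10 um bead
```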
440 Simulation of Antimicrobial Resistance Gene Fate in Narrow Grass Hedges
Authors: Marzieh Khedmati, Shannon L. Bartelt-Hunt
Abstract:
Vegetative Filter Strips (VFS) are used for controlling the volume of runoff and decreasing contaminant concentrations in runoff before it enters water bodies. Many studies have investigated the role of VFS in sediment and nutrient removal, but little is known about their efficiency in removing emerging contaminants such as antimicrobial resistance genes (ARGs). The Vegetative Filter Strip Modeling System (VFSMOD) was used to simulate the efficiency of VFS in this regard. Several studies have demonstrated the ability of VFSMOD to predict reductions in runoff volume and in the sediment concentration moving through the filters. The objectives of this study were to calibrate VFSMOD with experimental data and to assess the efficiency of the model in simulating the filter behavior in removing ARGs (ermB) and tylosin. The experimental data were obtained from a prior study conducted at the University of Nebraska (UNL) Rogers Memorial Farm. Three treatment factors were tested in the experiments: manure amendment, narrow grass hedges, and rainfall events. The Sediment Delivery Ratio (SDR) was defined as the filter efficiency, and the related experimental and model values were compared to each other. The model generally agreed with the experimental results, and as a result, it was used for predicting filter efficiencies when runoff data are not available. Narrow grass hedges (NGH) were shown to be effective in reducing tylosin and ARG concentrations. The simulation showed that the filter efficiency in removing ARGs differs for different soil types and filter lengths. There is an optimum length for the filter strip that produces minimum runoff volume. Based on the model results, increasing the length of the filter by 1 meter leads to higher efficiency, but lengthening beyond that decreases the efficiency. VFSMOD, which proved to work well in estimating VFS trapping efficiency, showed confirming results for ARG removal.
Keywords: antimicrobial resistance genes, emerging contaminants, narrow grass hedges, vegetative filter strips, vegetative filter strip modeling system
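The efficiency measure used above is a simple ratio; a sketch of SDR and the corresponding trapping efficiency, applicable to sediment, tylosin, or ARG (ermB) loads alike (numbers are arbitrary):

```python
def sediment_delivery_ratio(load_in, load_out):
    """SDR = outflow load / inflow load; trapping efficiency = 1 - SDR."""
    sdr = load_out / load_in
    return sdr, 1.0 - sdr

# Hypothetical runoff loads entering/leaving a narrow grass hedge (arbitrary units)
print(sediment_delivery_ratio(100.0, 35.0))   # -> (0.35, 0.65)
```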
Procedia PDF Downloads 132
439 Using Digital Innovations to Increase Awareness and Intent to Use Depo-Medroxyprogesterone Acetate-Subcutaneous Contraception among Women of Reproductive Age in Nigeria, Uganda, and Malawi
Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu
Abstract:
Introduction: Digital innovations have been useful in supporting a client's contraceptive user journey from awareness to method initiation. The concept of contraceptive self-care is being promoted globally as a means of achieving universal access to quality contraceptive care; however, information about this approach is limited. Important determinants of the scale of awareness are the message construct, the choice of information channel, and an understanding of the socio-epidemiological dynamics within the target audience. Significant gains have been made recently in expanding the awareness base of DMPA-SC, a relatively new entrant into the family planning method mix. The cornerstone of this success is a multichannel promotion campaign themed Discover Your Power (DYP). The DYP campaign combines content marketing across select social media platforms, chatbots, cyber-IPC, Interactive Voice Response (IVR), and radio campaigns. Methodology: During implementation, the project monitored predefined metrics of awareness and intent, such as the number of persons reached with the messages, the number of impressions, and meaningful engagement (link clicks). Metrics/indicators are extracted through native insight/analytics tools across the various platforms. The project also enlists community mobilizers (CMs), who go door-to-door, engage women of reproductive age (WRA) to advertise DISC's online presence, and support them in engaging with the IVR, the digital companion (chatbot), the Facebook page, and the DiscoverYourPower website. Results: The digital platforms recorded 242 million impressions and reached 82 million users with key DMPA-SC self-injection messaging in the three countries. As many as 3.4 million persons engaged with (liked, clicked, shared, or reposted) digital posts, an indication of intent. Conclusion: Digital solutions and innovations are gradually becoming the archetype for the advancement of the self-care agenda. Digital innovations can also be used to increase awareness and normalize contraceptive self-care behavior among women of reproductive age if they are made an integral part of reproductive health programming.
Keywords: digital transformation, health systems, DMPA-SC, family planning, self-care
Procedia PDF Downloads 81
438 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris
Abstract:
Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the fruits influences the quality of the palm oil. The conventional procedure involves physically grading fresh fruit bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly and time-consuming, and the results are subject to human error. Hence, many researchers have tried to develop methods for ascertaining the maturity of oil palm fruits, and thereby, indirectly, the oil content of individual palm fruits, without the need for exhaustive oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common cultivar, Nigrescens, were collected and assigned to three maturity categories: underripe, ripe, and overripe. Each sample was scanned with the FLIR E60 and FLIR T440 thermal imaging cameras. The average temperature of each bunch was calculated by image processing in the FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software environments. The results show that the average temperature decreased from immature to overmature oil palm FFBs. An overall analysis of variance (ANOVA) showed that this predictor gave significant differences between the underripe, ripe, and overripe maturity categories, indicating that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as the predictor with linear discriminant analysis (LDA), Mahalanobis discriminant analysis (MDA), artificial neural network (ANN), and k-nearest neighbor (KNN) methods. The highest overall classification accuracy, 88.2%, was obtained with the artificial neural network. This research shows that thermal imaging combined with a neural network method can be used to predict oil palm maturity classes.
Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging
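As an illustration of the classification step, the sketch below trains LDA and KNN classifiers on a single temperature feature. The class means and spreads are assumed for demonstration (chosen only to mimic the reported trend of decreasing temperature with ripeness) and are not the study's measurements, so the printed accuracies are not comparable to the reported 88.2%.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Assumed class means (deg C): temperature decreases from underripe to overripe
temps = np.concatenate([rng.normal(m, 0.4, 90) for m in (29.5, 28.8, 28.1)])
X = temps.reshape(-1, 1)                       # single predictor: mean bunch temperature
y = np.repeat(["underripe", "ripe", "overripe"], 90)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```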
Procedia PDF Downloads 360
437 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions
Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez
Abstract:
In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor proposed by Korzeniowski and Widmer reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises the Beatles and Queen datasets. These datasets were sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 (anechoic), 1, 2, and 3 s) and four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC performed as well as the state-of-the-art Mel-frequency cepstral coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected relative to classical triangular filters, compensating for the degradation of the music signal and improving the accuracy of the chord recognition system.
Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval
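To make the quarter-tone geometry concrete, the sketch below builds a triangular filterbank whose center frequencies are spaced 24 per octave, the grid underlying the proposed LNQT filters. The FFT size, sampling rate, and starting frequency are assumed, and the local normalization stage that distinguishes LNQT from plain triangular filters is deliberately omitted.

```python
import numpy as np

def quarter_tone_filterbank(sr=44100, n_fft=8192, fmin=65.4, n_bins=96):
    """Triangular filters centered on quarter-tone (24 per octave) frequencies."""
    centers = fmin * 2.0 ** (np.arange(n_bins + 2) / 24.0)   # includes edge bins
    fft_freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    fb = np.zeros((n_bins, fft_freqs.size))
    for k in range(n_bins):
        lo, c, hi = centers[k], centers[k + 1], centers[k + 2]
        rising = (fft_freqs - lo) / (c - lo)
        falling = (hi - fft_freqs) / (hi - c)
        fb[k] = np.clip(np.minimum(rising, falling), 0, None)
    return fb, centers[1:-1]

fb, centers = quarter_tone_filterbank()
print(fb.shape, centers[:4])  # 96 filters from ~C2 (65.4 Hz) upward
```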
Procedia PDF Downloads 232
436 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes
Authors: Chih Ming Tseng
Abstract:
Landslides during typhoons generate substantial amounts of sediment, and subsequent rainfall can trigger various types of sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study investigates the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradient and catchment area, which was then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within them. The results indicate that the catchment areas of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on the slope-area relationship curves: frequently collapsing headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone is lower in the Aiyuzi Creek catchment than in Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source, making it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within the different regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among the regimes. Sediment control rates at notches are, in general, positively correlated with the notch constriction ratio. During Typhoon Morakot in 2009, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel. Consequently, sediment control rates were more pronounced during the medium and small sediment transport events between 2010 and 2015. However, there were no significant differences in sediment control rates at notches among the different sediment transport regimes. Overall, this research provides valuable insight into the sediment control characteristics of notches under various sediment transport conditions, which can aid the development of improved sediment management strategies in watersheds.
Keywords: landslide, debris flow, notch, sediment control, DTM, slope-area relation
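The regime delineation rests on the slope-area relation, conventionally modeled as the power law S = k·A^θ with θ < 0; breaks in the fitted exponent mark transitions between process domains. A minimal sketch of the log-log fit follows, with illustrative numbers rather than the study's DTM-derived values.

```python
import numpy as np

area = np.array([1e3, 5e3, 2e4, 1e5, 5e5, 2e6])        # catchment area, m^2 (assumed)
slope = np.array([0.9, 0.62, 0.41, 0.27, 0.18, 0.12])  # local gradient (assumed)

# Fit log(S) = theta * log(A) + log(k); theta comes out negative
theta, log_k = np.polyfit(np.log(area), np.log(slope), 1)
print(f"S = {np.exp(log_k):.2f} * A^({theta:.2f})")
# In practice, the curve is fit piecewise: changes in theta mark transitions,
# e.g. from collapse-dominated headwaters to debris-flow and
# sediment-laden-flow zones.
```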
Procedia PDF Downloads 28
435 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow
Authors: Alex Fedoseyev
Abstract:
This study investigates a simplified model based on the generalized hydrodynamic equations (GHE) for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re = 132000. The GHE were derived from the generalized Boltzmann equation (GBE), which was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. Compared to the Navier-Stokes equations (NSE), the GHE contain additional terms, the temporal and spatial fluctuations. These terms carry a timescale multiplier τ, and the GHE reduce to the NSE when τ is zero. The nondimensional τ is the product of the Reynolds number and the squared length-scale ratio, τ = Re(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. BFS flow modeling results obtained from 2D NSE calculations cannot match the experimental data for Re > 450; one or two additional equations are required for a turbulence model to be added to the NSE, typically with two to five parameters to be tuned for specific problems. It is shown that the GHE do not require an additional turbulence model, and the computed turbulent velocities are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided; most of these studies used turbulence models when Re > 1000. In this study, 2D turbulent flow over a BFS with step height H = L/3 (where L is the channel height) at Re = 132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to solutions of the Navier-Stokes equations, the k-ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L = 5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field. The mean NSE velocity was obtained by averaging the solution over the number of time steps. The solution with a standard k-ε model shows a velocity profile at X/L = 5.33 that has no backward flow, and the model underpredicts the experimental recirculation zone length X/L = 7.0 ± 0.5 by a substantial 20-25%; a more sophisticated turbulence model is needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS, with no turbulence model required. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The present approach was limited to 2D and a single Reynolds number; further work will extend it to 3D flow and higher Re.
Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow
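To illustrate the magnitude of the extra GHE terms, the worked example below evaluates the stated definition of τ at the study's Reynolds number; the length-scale ratio l/L = 10⁻² is an assumed, purely illustrative value.

```latex
% Definition from the abstract:
\tau = \mathrm{Re}\left(\frac{l}{L}\right)^{2}
% Worked example with Re = 132000 and an assumed ratio l/L = 10^{-2}:
\tau = 1.32\times10^{5}\times\left(10^{-2}\right)^{2} = 13.2
% Setting \tau = 0 recovers the Navier-Stokes equations.
```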
Procedia PDF Downloads 61
434 Bioavailability Enhancement of Ficus religiosa Extract by Solid Lipid Nanoparticles
Authors: Sanjay Singh, Karunanithi Priyanka, Ramoji Kosuru, Raju Prasad Sharma
Abstract:
Herbal drugs are well known for their mixed pharmacological activities and comparatively mild side effects. However, their use is limited by high dose requirements, frequent administration, the poor bioavailability of phytochemicals, and a delayed onset of action. Ficus religiosa, a potent antioxidant plant useful in the treatment of diabetes and cancer, was selected for the study. Solid lipid nanoparticles (SLN) loaded with Ficus religiosa extract were developed to enhance the oral bioavailability of stigmasterol and β-sitosterol-d-glucoside, the principal components of the extract. Hot homogenization followed by ultrasonication was used to prepare the extract-loaded SLN. The developed SLN were characterized for particle size, PDI, zeta potential, entrapment efficiency, in vitro drug release and release kinetics, Fourier-transform infrared spectroscopy, differential scanning calorimetry, powder X-ray diffractometry, and stability. The entrapment efficiency of the optimized extract-loaded SLN was found to be 68.46% (56.13% for stigmasterol and 12.33% for β-sitosterol-d-glucoside, respectively). An RP-HPLC method was developed for the simultaneous estimation of stigmasterol and β-sitosterol-d-glucoside from Ficus religiosa extract in rat plasma. Bioavailability studies were carried out on the extract in suspension form and on the optimized extract-loaded SLN. The AUCs of stigmasterol and β-sitosterol-d-glucoside were increased 6.7-fold and 9.2-fold, respectively, in rats treated with extract-loaded SLN compared to the extract suspension. Likewise, the Cmax of stigmasterol and β-sitosterol-d-glucoside increased 4.3-fold and 3.9-fold, respectively. The mean residence times (MRT) for stigmasterol were 12.3 ± 0.67 hours for the extract and 7.4 ± 2.1 hours for the SLN; for β-sitosterol-d-glucoside, they were 10.49 ± 2.9 hours for the extract and 6.4 ± 0.3 hours for the SLN. Hence, it was concluded that SLN enhanced the bioavailability and reduced the MRT of stigmasterol and β-sitosterol-d-glucoside from Ficus religiosa extract, which in turn may lead to a reduction in the dose of the extract, a prolonged duration of action, and enhanced therapeutic efficacy.
Keywords: Ficus religiosa, phytosterolins, bioavailability, solid lipid nanoparticles, stigmasterol, β-sitosterol-d-glucoside
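The reported metrics (AUC, Cmax, MRT, and fold-change in bioavailability) come from standard noncompartmental analysis; the sketch below shows that arithmetic on hypothetical concentration-time profiles, not the study's rat plasma data.

```python
import numpy as np

t = np.array([0, 0.5, 1, 2, 4, 8, 12, 24.0])             # sampling times, h
c_extract = np.array([0, 12, 20, 18, 11, 5, 2.5, 0.6])   # ng/mL (assumed)
c_sln = np.array([0, 45, 80, 70, 42, 18, 8, 1.8])        # ng/mL (assumed)

def trapezoid(y, x):
    """Trapezoidal-rule integral of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def nca(t, c):
    """Noncompartmental metrics: AUC, Cmax, MRT = AUMC/AUC."""
    auc = trapezoid(c, t)
    aumc = trapezoid(c * t, t)    # first-moment curve
    return auc, float(c.max()), aumc / auc

auc_e, cmax_e, mrt_e = nca(t, c_extract)
auc_s, cmax_s, mrt_s = nca(t, c_sln)
print(f"relative bioavailability = {auc_s / auc_e:.1f}-fold")
print(f"Cmax ratio = {cmax_s / cmax_e:.1f}, MRT: {mrt_e:.1f} h -> {mrt_s:.1f} h")
```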
Procedia PDF Downloads 473
433 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil
Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins
Abstract:
Current exposure models used in contaminated-land risk assessment are highly conservative. Their use may lead to over-estimation of actual exposures, possibly resulting in negative financial implications due to unnecessary remediation. We are therefore carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil, namely arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb), resulting from allotment land use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults over 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). Records of the amount of produce consumed by the participants, together with participants' biological samples (urine and blood), were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated 'doses' from produce consumption records), the predictive models suggested that detection of these elements in urine and blood was possible within a given period following exposure. This information was used in planning the biomonitoring and is currently being used in the interpretation of the test results from biological samples. Evaluation of the models is being carried out with the biomonitoring data by comparing model-predicted concentrations against measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated into contaminated-land exposure models. The findings from this study will thus promote a more sustainable approach to contaminated-land management.
Keywords: biomonitoring, exposure, PBPK modelling, toxic elements
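The study's PBPK models were coded in MATLAB and are compartmentally far richer; as a structural illustration only, the sketch below solves a one-compartment model with first-order oral absorption and elimination, with all parameters assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka, ke = 0.8, 0.1      # absorption / elimination rate constants, 1/h (assumed)
V = 42.0               # volume of distribution, L (assumed)
dose = 50.0            # ingested element dose, ug (assumed)

def rhs(t, y):
    """Gut depot feeding a central compartment: d(gut)/dt, d(conc)/dt."""
    gut, c = y                          # amount in gut (ug), plasma conc (ug/L)
    return [-ka * gut, ka * gut / V - ke * c]

sol = solve_ivp(rhs, (0, 72), [dose, 0.0], dense_output=True)
tt = np.linspace(0, 72, 7)
print(np.round(sol.sol(tt)[1], 3))      # plasma concentration over 72 h
```

This kind of forward simulation is what links an estimated dietary dose to an expected biomarker concentration, the comparison on which the model evaluation above rests.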
Procedia PDF Downloads 319