Search results for: reverse time migration (RTM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19014


17484 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal structure, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data.
The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
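The exact form of the WIGR measure is not given in this abstract. The sketch below shows one plausible shape for such a measure: Shannon entropy scaled by the average ordinal distance between class pairs, so that splits mixing distant grade/time classes are penalised more than splits mixing adjacent ones. The weighting scheme and all names are illustrative, not the authors' formula.

```python
import math
from collections import Counter

def ordinal_impurity(labels, n_classes):
    """Shannon entropy scaled by the average ordinal distance between
    class pairs, so mixing distant classes is penalised more than mixing
    adjacent ones (illustrative stand-in for the paper's WIGR measure)."""
    n = len(labels)
    counts = Counter(labels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    sev = 0.0
    if n_classes > 1:
        sev = sum(counts[i] * counts[j] * abs(i - j)
                  for i in counts for j in counts) / (n * n * (n_classes - 1))
    return entropy * (1.0 + sev)

def weighted_gain(parent, splits, n_classes):
    """Impurity reduction achieved by splitting `parent` into `splits`."""
    n = len(parent)
    children = sum(len(s) / n * ordinal_impurity(s, n_classes) for s in splits)
    return ordinal_impurity(parent, n_classes) - children
```

Under this scheme, a split that cleanly separates classes 0 and 2 scores higher than one separating the adjacent classes 0 and 1, which is the behaviour an ordinal target calls for.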

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 93
17483 Investigating Associations Between Genes Linked to Social Behavior and Early Covid-19 Spread Using Multivariate Linear Regression Analysis

Authors: Gwenyth C. Eichfeld

Abstract:

Variation in global COVID-19 spread is partly explained by social and behavioral factors, many of which are linked to genetics. The short polymorphism of the 5-HTTLPR promoter region of the SLC6A4 gene is linked to collectivism. The seven-repeat polymorphism of the DRD4 gene is linked to risk-taking, migration, sensation-seeking, and impulsivity. Fewer CAG repeats in the androgen receptor gene are linked to impulsivity. This study investigates the association between the country-level frequency of these variants and early COVID-19 spread. Results of regression analysis indicate a significant association between increased country-wide prevalence of the short allele of the SLC6A4 gene and decreased COVID-19 spread when other factors that have been linked to COVID-19 are controlled for. Additionally, results show that the short allele of the SLC6A4 gene is associated with COVID-19 spread through GDP and percent urbanization rather than collectivism. Results showed no significant association between the frequency of either the DRD4 polymorphism or the androgen receptor polymorphism and early COVID-19 spread.
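The country-level analysis behind these findings is standard multivariate linear regression. As a minimal, dependency-free sketch (the study's actual covariates and data are not reproduced here), the coefficients can be recovered from the normal equations:

```python
def ols(X, y):
    """Multivariate OLS via the normal equations (X'X) beta = X'y,
    solved by Gaussian elimination with partial pivoting. Pure-Python
    sketch; X includes an intercept column of ones."""
    rows, cols = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(rows)) for j in range(cols)]
           for i in range(cols)]
    Xty = [sum(X[r][i] * y[r] for r in range(rows)) for i in range(cols)]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]   # augmented system
    for c in range(cols):
        p = max(range(c, cols), key=lambda r: abs(A[r][c]))  # pivot row
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, cols):
            f = A[r][c] / A[c][c]
            for k in range(c, cols + 1):
                A[r][k] -= f * A[c][k]
    beta = [0.0] * cols
    for c in reversed(range(cols)):
        beta[c] = (A[c][cols] - sum(A[c][k] * beta[k]
                                    for k in range(c + 1, cols))) / A[c][c]
    return beta
```

Fitting, say, y = 1 + 2·x1 + 3·x2 on exact observations recovers the coefficients [1, 2, 3]; in the study the response would be a spread measure and the columns allele frequencies and control variables.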

Keywords: neuroscience, genetics, population sciences, Covid-19

Procedia PDF Downloads 11
17482 Seismic Analysis of Adjacent Buildings Connected with Dampers

Authors: Devyani D. Samarth, Sachin V. Bakre, Ratnesh Kumar

Abstract:

This work deals with two buildings adjacent to each other connected with dampers. The "Imperial Valley Earthquake - El Centro" (May 18, 1940) time history is used for dynamic analysis of the system in the time domain. The effectiveness of fluid joint dampers is then investigated in terms of the reduction of displacement, acceleration, and base shear responses of the adjacent buildings. Finally, an extensive parametric study is carried out to find optimum damper properties, namely stiffness (Kd) and damping coefficient (Cd), for the adjacent buildings. Results show that using fluid dampers to connect adjacent buildings of different fundamental frequencies can effectively reduce earthquake-induced responses of either building if optimum damper properties are selected.

Keywords: energy dissipation devices, time history analysis, viscous damper, optimum parameters

Procedia PDF Downloads 479
17481 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano

Authors: Guo Wenyu, Qu Youli

Abstract:

A practical and simple self-indexing data structure, the Partitioned Elias-Fano (PEF) Compressed Suffix Array (PEF-CSA), is built in linear time from the CSA using PEF indexes. Moreover, the PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the tested data in terms of compression ratio and count and locate times, except for evenly distributed data such as protein data. The experiments show that the distribution of φ matters more than the alphabet size for the compression ratio: unevenly distributed φ compresses better, and the larger the number of hits, the longer the count and locate times.
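For background, the sketch below implements the plain (non-partitioned) Elias-Fano encoding that PEF builds on: each value of a monotone sequence is split into low bits, stored verbatim, and high bits, stored in unary. Partitioning, which splits the sequence into chunks each encoded with locally chosen parameters, is omitted here; function names are illustrative.

```python
import math

def elias_fano_encode(values, universe):
    """Encode a non-decreasing sequence of integers < universe: low bits
    stored verbatim, high bits in unary (a '1' per element, a '0' per
    bucket advance)."""
    n = len(values)
    low_bits = max(0, int(math.floor(math.log2(universe / n)))) if n else 0
    lows = [v & ((1 << low_bits) - 1) for v in values]
    highs, prev = [], 0
    for v in values:
        h = v >> low_bits
        highs.extend([0] * (h - prev) + [1])
        prev = h
    return lows, highs, low_bits

def elias_fano_decode(lows, highs, low_bits):
    """Reconstruct the sequence by walking the unary high-bit stream."""
    out, h, it = [], 0, iter(lows)
    for bit in highs:
        if bit:
            out.append((h << low_bits) | next(it))
        else:
            h += 1
    return out
```

For example, [3, 4, 7, 13] over a universe of 16 uses 2 low bits per value, and the round trip reproduces the input exactly.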

Keywords: compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA

Procedia PDF Downloads 238
17480 PVMODREL© Development Based on Reliability Evaluation of a PV Module Using Accelerated Degradation Testing

Authors: Abderafi Charki, David Bigaud

Abstract:

The aim of this oral presentation is to present PVMODREL© (PhotoVoltaic MODule RELiability), new software developed at the University of Angers. This new tool permits us to evaluate the lifetime and reliability of a PV module whatever its geographical location and environmental conditions. The electrical power output of a PV module decreases with time mainly as a result of the effects of corrosion, encapsulation discoloration, and solder bond failure. The failure of a PV module is defined as the point where the electrical power degradation reaches a given threshold value. Accelerated life tests (ALTs) are commonly used to assess the reliability of a PV module. However, ALTs provide limited data on the failure of a module, and these tests are expensive to carry out. One possible solution is to conduct accelerated degradation tests. The Wiener process in conjunction with the accelerated failure time model makes it possible to carry out numerous simulations and thus to determine the failure time distribution based on the aforementioned threshold value. By this means, the failure time distribution and the lifetime (mean and uncertainty) can be evaluated. An example using the damp heat test is shown to demonstrate the usefulness of PVMODREL©.

Keywords: lifetime, reliability, PV Module, accelerated life testing, accelerated degradation testing

Procedia PDF Downloads 561
17479 Analyzing the Empirical Link between Islamic Finance and Growth of Real Output: A Time Series Application to Pakistan

Authors: Nazima Ellahi, Danish Ramzan

Abstract:

There is a growing trend among development economists regarding the importance of the financial sector for economic development and growth activities. The development thus introduced helps to promote welfare effects and poverty alleviation. This study is an attempt to find the nature of the link between Islamic banking financing and output growth in Pakistan. A time series data set is utilized for the period ranging from 1990 to 2010. Following the Phillips-Perron (PP) and Augmented Dickey-Fuller (ADF) unit root tests, this study applied the Ordinary Least Squares (OLS) method of estimation and found encouraging results in favor of promoting Islamic banking practices in Pakistan.

Keywords: Islamic finance, poverty alleviation, economic growth, finance, commerce

Procedia PDF Downloads 330
17478 Clustering Color Space, Time Interest Points for Moving Objects

Authors: Insaf Bellamine, Hamid Tairi

Abstract:

Detecting moving objects in image sequences is an essential step for video analysis. This paper mainly contributes to the extraction and detection of Color Space-Time Interest Points (CSTIP). We propose a new method for the detection of moving objects, composed of two main steps. First, we apply the CSTIP detection algorithm to both components of a Color Structure-Texture Image Decomposition, which is based on a Partial Differential Equation (PDE): a color geometric structure component and a color texture component. A descriptor is associated with each of these points. In the second stage, we address the problem of grouping the CSTIP points into clusters. Experiments and comparison to other motion detection methods on challenging sequences show the performance of the proposed method and its utility for video analysis. Experimental results are obtained from very different types of videos, namely sport videos and animation movies.
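The abstract does not specify which clustering algorithm groups the CSTIP points; plain k-means, shown below on 2-D points, is a common stand-in for this kind of grouping step (names and data are illustrative).

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over tuples of coordinates; a generic stand-in for
    the step that groups interest-point descriptors into object clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # recompute centers as group means; keep old center if group empty
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

On two well-separated blobs of interest points, the two recovered groups correspond to the two moving-object candidates.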

Keywords: Color Space-Time Interest Points (CSTIP), Color Structure-Texture Image Decomposition, Motion Detection, clustering

Procedia PDF Downloads 366
17477 Injunctions, Disjunctions, Remnants: The Reverse of Unity

Authors: Igor Guatelli

Abstract:

The universe of aesthetic perception entails impasses about sensitive divergences that each text or visual object may be subjected to. If approached through intertextuality that is not based on the misleading notion of kinships or similarities a priori admissible, the possibility of anachronistic, heterogeneous - and non-diachronic - assemblies can enhance the emergence of interval movements, intermediate, and conflicting, conducive to a method of reading, interpreting, and assigning meaning that escapes the rigid antinomies of the mere being and non-being of things. In negative, they operate in a relationship built by the lack of an adjusted meaning set by their positive existences, with no remainders; the generated interval becomes the remnant of each of them; it is the opening that obscures the stable positions of each one. Without the negative of absence, of that which is always missing or must be missing in a text, concept, or image made positive by history, nothing is perceived beyond what has been already given. Pairings or binary oppositions cannot lead only to functional syntheses; on the contrary, methodological disturbances accumulated by the approximation of signs and entities can initiate a process of becoming as an opening to an unforeseen other, transformation until a moment when the difficulties of [re]conciliation become the mainstay of a future of that sign/entity, not envisioned a priori. A counter-history can emerge from these unprecedented, misadjusted approaches, beginnings of unassigned injunctions and disjunctions, in short, difficult alliances that open cracks in a supposedly cohesive history, chained in its apparent linearity with no remains, understood as a categorical historical imperative. Interstices are minority fields that, because of their opening, are capable of causing opacity in that which, apparently, presents itself with irreducible clarity. 
Resulting from an incomplete and maladjusted [at the least dual] marriage between the signs/entities that originate them, this interval may destabilize and cause disorder in these entities and their own meanings. The interstitials offer a hyphenated relationship: a simultaneous union and separation, a spacing between the entity's identity and its otherness, or alterity. One and the other may no longer be seen without the crack or fissure that now separates them while uniting them across a space-time lapse. Ontological and semantic shifts are caused by this fissure, an absence between one and the other, one with and against the other. Based on an improbable approximation between some conceptual and semantic shifts within the design production of architect Rem Koolhaas and the textual production of the philosopher Jacques Derrida, this article questions the notions of unity, coherence, affinity, and complementarity in the process of construction of thought from these ontological, epistemological, and semiological fissures that rattle the signs/entities and their stable meanings. Fissures in a thought that is considered coherent, cohesive, formatted are the negativity that constitutes the interstices that allow us to move towards what still remains as non-identity, which allows us to begin another story.

Keywords: clearing, interstice, negative, remnant, spectrum

Procedia PDF Downloads 125
17476 Application of Global Predictive Real Time Control Strategy to Improve Flooding Prevention Performance of Urban Stormwater Basins

Authors: Shadab Shishegar, Sophie Duchesne, Genevieve Pelletier

Abstract:

Sustainability, as one of the key elements of smart cities, can be realized by employing real-time control strategies for a city's infrastructures. Nowadays, stormwater management systems play an important role in mitigating the impacts of urbanization on the natural hydrological cycle. These systems can be managed in such a way that they meet smart-city standards. In fact, there is a huge potential for sustainable management of urban stormwater and also for its adaptability to global challenges like climate change. Hence, a dynamically managed system that can adapt itself to the instability of environmental conditions is desirable. A global predictive real-time control approach is proposed in this paper to optimize the performance of stormwater management basins in terms of flooding prevention. To do so, a mathematical optimization model is developed and then solved using a Genetic Algorithm (GA). Results show an improved performance at the system level for the stormwater basins in comparison to a static strategy.
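The GA component can be sketched generically. The study's actual objective (some flood-volume penalty over the basin network) is not published in this abstract, so a caller-supplied objective function stands in, and all hyper-parameters below are illustrative.

```python
import random

def genetic_minimize(objective, bounds, pop=40, gens=60, seed=3):
    """Real-coded GA: size-3 tournament selection, blend crossover,
    clamped Gaussian mutation, and two-member elitism. The flood-related
    objective of the study is not published, so the caller supplies one."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    P = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=objective)
        nxt = scored[:2]                                   # keep the elite
        while len(nxt) < pop:
            a, b = (min(rng.sample(scored, 3), key=objective)
                    for _ in range(2))                     # two tournaments
            child = [x + rng.random() * (y - x) for x, y in zip(a, b)]
            if rng.random() < 0.2:                         # mutate one gene
                i = rng.randrange(len(child))
                child[i] = min(hi[i], max(lo[i],
                               child[i] + rng.gauss(0.0, 0.1 * (hi[i] - lo[i]))))
            nxt.append(child)
        P = nxt
    return min(P, key=objective)
```

In the study's setting, the decision vector would encode basin outlet settings over the control horizon and the objective the predicted flooding; here any smooth test function demonstrates the solver.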

Keywords: environmental sustainability, optimization, real time control, storm water management

Procedia PDF Downloads 165
17475 Queueing Modeling of M/G/1 Fault Tolerant System with Threshold Recovery and Imperfect Coverage

Authors: Madhu Jain, Rakesh Kumar Meena

Abstract:

This paper investigates a finite M/G/1 fault tolerant multi-component machining system. The system incorporates features such as standby support, threshold recovery, and imperfect coverage, which make the study closer to real-time systems. The performance prediction of the M/G/1 fault tolerant system is carried out using a recursive approach by treating the remaining service time as a supplementary variable. Numerical results are presented to illustrate the computational tractability of the analytical results by taking three different service time distributions, viz. exponential, 3-stage Erlang, and deterministic. Moreover, a cost function is constructed to determine the optimal choice of system descriptors for upgrading the system.
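For background, the mean waiting time of a plain M/G/1 queue follows the Pollaczek-Khinchine formula, Wq = λE[S²]/(2(1 − ρ)) with ρ = λE[S]. The snippet evaluates it for the three service distributions the paper compares; note the paper's model adds standbys, threshold recovery, and imperfect coverage, which this baseline omits, and the rates below are illustrative.

```python
def mg1_mean_wait(lam, mean_s, second_moment_s):
    """Mean waiting time in a stable M/G/1 queue via the
    Pollaczek-Khinchine formula: Wq = lam * E[S^2] / (2 * (1 - rho))."""
    rho = lam * mean_s
    assert rho < 1.0, "queue must be stable (rho < 1)"
    return lam * second_moment_s / (2.0 * (1.0 - rho))

# Illustrative rates (not from the paper): arrival rate 0.5, mean service 1.
lam, mu = 0.5, 1.0
wq_exp = mg1_mean_wait(lam, 1 / mu, 2 / mu ** 2)        # exponential: E[S^2] = 2/mu^2
wq_erl = mg1_mean_wait(lam, 1 / mu, 4 / (3 * mu ** 2))  # 3-stage Erlang: (k+1)/(k*mu^2)
wq_det = mg1_mean_wait(lam, 1 / mu, 1 / mu ** 2)        # deterministic: E[S^2] = 1/mu^2
print(wq_det, wq_erl, wq_exp)  # lower service variability -> shorter waits
```

This ordering (deterministic < Erlang < exponential) is exactly why comparing the three distributions is informative in such models.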

Keywords: fault tolerant, machine repair, threshold recovery policy, imperfect coverage, supplementary variable technique

Procedia PDF Downloads 280
17474 Of an 80 Gbps Passive Optical Network Using Time and Wavelength Division Multiplexing

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Faizan Khan, Xiaodong Yang

Abstract:

Internet Service Providers face endless demands for higher bandwidth and data throughput as new services and applications require higher bandwidth. Users want immediate and accurate data delivery. This article focuses on converting old conventional networks into passive optical networks based on time division and wavelength division multiplexing. The main focus of this research is to use a hybrid of time-division multiplexing and wavelength-division multiplexing to improve network efficiency and performance. In this paper, we design an 80 Gbps Passive Optical Network (PON) that meets the requirements of Next Generation PON Stage 2 (NGPON2). A hybrid of Time and Wavelength Division Multiplexing (TWDM) is considered the best solution for the implementation of NGPON2, according to the Full Service Access Network (FSAN) group. To co-exist with or replace the current PON technologies, many wavelengths of the TWDM can be implemented simultaneously. Eight pairs of wavelengths are multiplexed and then transmitted over optical fiber for 40 km, and on the receiving side they are distributed among 256 users, which shows that the solution is reliable for implementation with an acceptable data rate. From the results, it can be concluded that the overall performance, quality factor, and bandwidth of the network are increased, and the bit error rate is minimized by this approach.
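The headline numbers imply simple capacity arithmetic: 80 Gbps over 8 wavelength pairs suggests 10 Gbps per pair (an assumption; the abstract gives only the aggregate), and ideal time-sharing among 256 users sets the per-user ceiling.

```python
# Capacity sanity check for the proposed 80 Gbps TWDM-PON.
wavelength_pairs = 8        # pairs multiplexed onto the fiber (from the abstract)
total_gbps = 80.0           # aggregate NGPON2 target (from the abstract)
users = 256                 # subscribers behind the splitter (from the abstract)

per_pair_gbps = total_gbps / wavelength_pairs   # implied 10 Gbps per pair (assumed even split)
per_user_mbps = total_gbps * 1000 / users       # ideal time-shared per-user ceiling
print(per_pair_gbps, per_user_mbps)             # 10.0 312.5
```

Real deployments deliver less per user once framing overhead and dynamic bandwidth allocation are accounted for; the figure above is an upper bound.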

Keywords: bit error rate, fiber to the home, passive optical network, time and wavelength division multiplexing

Procedia PDF Downloads 58
17473 Housing and Urban Refugee: An Introspective Study on Bihari Camp of Mirpur, Dhaka

Authors: Fahmida Nusrat, Sumaia Nasrin, Pinak Sarker

Abstract:

Biharis, as urban refugees, have been significant urban dwellers in Dhaka since their forced migration at the partition of 1947. There are many such refugee settlements in Bangladesh, particularly in Dhaka, where residents often live in dire conditions, facing discrimination from mainstream society. Their camps have become slums. Housing for urban refugees is still not a strategic concern in the overall housing policy of Dhaka. The study has been conducted in a significant refugee settlement located in Mirpur-11, Dhaka, to observe the way of living in these camps and to understand the socio-cultural aspects that are shaping their settlement morphology, hence to identify the key issues of their built environment and to suggest an inclusive and sustainable housing solution for improving their life in the urban environment. The methods included first-hand data collection on household spaces and community spaces along with the overall spatial organization of the settlement pattern, followed by semi-structured interviews with randomly selected samples of the camp dwellers to get users' feedback on the research aspects. The outcome of the study will help initiate housing strategies as well as formulate design issues for these case-specific inhabitants of urban Dhaka.

Keywords: Bihari camp, Dhaka, housing strategy, the way of living, urban refugee

Procedia PDF Downloads 156
17472 Real Time Lidar and Radar High-Level Fusion for Obstacle Detection and Tracking with Evaluation on a Ground Truth

Authors: Hatem Hajri, Mohamed-Cherif Rahal

Abstract:

Both Lidars and Radars are sensors for obstacle detection. While Lidars are very accurate on obstacle positions and less accurate on their velocities, Radars are more precise on obstacle velocities and less precise on their positions. Sensor fusion between Lidar and Radar aims at improving obstacle detection using the advantages of the two sensors. The present paper proposes a real-time Lidar/Radar data fusion algorithm for obstacle detection and tracking based on the global nearest neighbour standard filter (GNN). This algorithm is implemented and embedded in an automotive vehicle as a component generated by real-time multisensor software. The benefits of data fusion compared with the use of a single sensor are illustrated through several tracking scenarios (on a highway and on a bend), using real-time kinematic sensors mounted on the ego and tracked vehicles as ground truth.
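The GNN step picks the one-to-one track-to-detection pairing that minimizes total distance, rejecting pairs beyond a validation gate. The paper solves this with the Hungarian algorithm; for the handful of obstacles in a frame, a brute-force search over permutations (below, with illustrative 2-D positions) reaches the same optimum.

```python
from itertools import permutations

def gnn_associate(tracks, detections, gate=2.0):
    """Global nearest neighbour association: the one-to-one pairing of
    tracks and detections with minimal total Euclidean distance,
    discarding any pairing with a gated-out pair. Brute force; the
    Hungarian algorithm finds the same optimum in polynomial time."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(detections)), len(tracks)):
        if any(dist(tracks[i], detections[j]) > gate
               for i, j in enumerate(perm)):
            continue
        cost = sum(dist(tracks[i], detections[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best, best_cost
```

In a fused pipeline, the "distance" would typically be a Mahalanobis distance combining Lidar position and Radar velocity residuals rather than raw Euclidean distance.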

Keywords: ground truth, Hungarian algorithm, lidar Radar data fusion, global nearest neighbor filter

Procedia PDF Downloads 153
17471 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments

Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor

Abstract:

Resource limitations shape the outcome of competitions between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is in the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak – an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentations of >20 different organs from >900 PET/CT scans were performed with MOOSE. We observed a strong correlation between organ-wide population-average estimates of Oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In-vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences Glucose- and Oxygen-dependent proliferation, death, and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, Oxygen, phosphate, glucose, vasculature, dead cells, migrating cells, proliferating cells of various DNA content, and treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient.
We performed whole genome sequencing (WGS) of four surgical specimens collected during the 1ˢᵗ and 2ⁿᵈ surgeries of the GBM and used HATCHET to quantify its clonal composition and how it changes between the two surgeries. HATCHET identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and registered to MNI space. The S3MB domain was initiated from 4mm³ voxels of the MNI space. T1 post and T2 flair scan acquired after the 1ˢᵗ surgery informed tumor cell densities per voxel. Magnetic Resonance Elastography scans and PET/CT scans informed stiffness and Glucose access per voxel. We performed a parameter search to recapitulate the GBM’s tumor cell density and ploidy composition before the 2ⁿᵈ surgery. Results suggest that the high-ploidy subpopulation had a higher Glucose-dependent proliferation rate (0.70 vs. 0.49), but a lower Glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments.

Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling

Procedia PDF Downloads 60
17470 Effect of Feed Supplement Optipartum C+ 200 (Alfa- Amylase and Beta-Glucanase) in In-Line Rumination Parameters

Authors: Ramūnas Antanaitis, Lina Anskienė, Robertas Stoškus

Abstract:

This study was conducted from 2021-05-01 to 2021-08-31 at the Lithuanian University of Health Sciences and one Lithuanian dairy farm with 500 dairy cows (55.911381565736, 21.881321760608195), averaging 50 calvings per month. Cows (n=20) in the treatment group (TG) were fed the feed supplement Optipartum C+ 200 (enzymes: alfa-amylase 57 units; beta-glucanase 107 units) from 21 days before calving till 30 days after calving at a feeding rate of 200 g/cow/day. Cows in the control group (CG) were fed a feed ration without the supplement. Measurements started 6 days before calving and continued till 21 days after calving. The following indicators were registered with the RumiWatch system: rumination time, eating time, drinking time, rumination chews, eating chews, drinking gulps, boluses, chews per minute, and chews per bolus; and with the SmaXtec system: the temperature and pH of the cows' reticulorumen contents and the cows' activity. According to our results, feeding cows, from 21 days before calving to 30 days after calving, a feed supplement with alfa-amylase and beta-glucanase (Optipartum C+ 200) at a dose of 200 g/cow/day can produce an increase of 9% in rumination time and eating time, 19% in drinking time, 11% in rumination chews, 16% in eating chews, 13% in the number of boluses per rumination, 5% in chews per minute, and 16% in chews per bolus. We found a 1.28% lower reticulorumen pH and a 0.64% lower reticulorumen temperature in cows fed the supplement compared with control group cows. Also, cows fed the enzymes were 8.80% more active.

Keywords: Alfa-Amylase, Beta-Glucanase, cows, in-line, sensors

Procedia PDF Downloads 308
17469 Thermal Behavior of Green Roof: Case Study at Seoul National University Retentive Green Roof

Authors: Theresia Gita Hapsari

Abstract:

There has been major concern about urban heating as urban clusters emerge and population migration from rural to urban areas continues. The green roof has been one of the main practices for urban heat island mitigation over the past decades; thus, this study was conducted to predict the cooling potential of a retentive green roof in mitigating the urban heat island. The retentive green roof was developed by Han in 2010. It has a 320 mm high retention wall surrounding the vegetation and a 65 mm deep retention board underneath the soil, while most conventional green roofs have no retention wall and only a drainage board of at most 25 mm depth. The Seoul National University retentive green roof significantly reduced sensible heat movement towards the air by 0.5 kWh/m2 and highly enhanced the evaporation process, as much as 0.5-5.4 kg/m2, which equals 0.3-3.6 kWh/m2 of latent heat flux. These results indicate that, with design enhancement, the retentive green roof serves as a viable alternative to the conventional green roof and overcomes limitations of the conventional green roof as a main solution for mitigating the urban heat island.

Keywords: green roof, low impact development, retention board, thermal behavior, urban heat island

Procedia PDF Downloads 269
17468 Theoretical Appraisal of Satisfactory Decision: Uncertainty, Evolutionary Ideas and Beliefs, Satisfactory Time Use

Authors: Okay Gunes

Abstract:

Unsatisfactory experiences due to an information shortage regarding the future pay-offs of actual choices yield satisficing decision-making. This research will examine, for the first time in the literature, the motivation behind suboptimal decisions under uncertainty by examining Adam Smith's and Jeremy Bentham's assumptions about the nature of the actions that lead to satisficing behavior, in order to clarify the theoretical background of a "consumption-based satisfactory time" concept. The contribution of this paper with respect to the existing literature is threefold. Firstly, it is shown in this paper that Adam Smith's uncertainty is related to the problem of the constancy of ideas and not related directly to beliefs. Secondly, possessions, as in Jeremy Bentham's oeuvre, are assumed to be just as pleasing, as protecting and improving the actual or expected quality of life, so long as they reduce any displeasure due to the undesired outcomes of uncertainty. Finally, each consumption decision incurs its own satisfactory time period, owed to not feeling hungry, being healthy, not having transportation…etc. This reveals that the level of satisfaction is indeed a behavioral phenomenon whose value depends on the simultaneous satisfaction derived from all activities.

Keywords: decision-making, idea and belief, satisficing, uncertainty

Procedia PDF Downloads 272
17467 Households’ Willingness to Pay for Watershed Management Practices in Lake Hawassa Watershed, Southern Ethiopia

Authors: Mulugeta Fola, Mengistu Ketema, Kumilachew Alamerie

Abstract:

Watersheds provide vast economic benefits within and beyond the management area of interest. But most watersheds in Ethiopia are increasingly facing the threats of degradation due to both natural and man-made causes. To reverse these problems, communities' participation in sustainable management programs is among the necessary measures. Hence, this study assessed households' willingness to pay for watershed management practices through a contingent valuation approach. A double bounded dichotomous choice format with open-ended follow-up was used to elicit the households' willingness to pay. Based on data collected from 275 randomly selected households, descriptive statistics indicated that most households (79.64%) were willing to pay for watershed management practices. A bivariate probit model was employed to identify determinants of households' willingness to pay and to estimate the mean willingness to pay. Its results show that age, gender, income, livestock size, perception of watershed degradation, social position, and offered bids were important variables affecting willingness to pay for watershed management practices. The mean willingness to pay for watershed management practices was calculated to be 58.41 Birr and 47.27 Birr per year from the double bounded and open-ended formats, respectively. The aggregate welfare gains from watershed management practices were calculated to be 931581.09 Birr and 753909.23 Birr per year from the double bounded dichotomous choice and open-ended formats, respectively. Therefore, policymakers should have households pay for the services of watershed management practices in the study area.

Keywords: bivariate probit model, contingent valuation, watershed management practices, willingness to pay

Procedia PDF Downloads 207
17466 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities are at the core of mathematical analysis and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya were among the first significant syntheses of the subject; their work presented fundamental ideas, results, and techniques and has had great influence on research in various branches of analysis. Since 1934, numerous inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation, and these were improved in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Several inequalities have since been demonstrated and improved using the Hardy–Steklov operator. More recently, many integral inequalities have been improved via differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations, and dynamic inequalities of Hardy and Copson type have been extended and improved by various integral operators; such inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities combining Copson and Hardy inequalities on time scales have appeared, yielding new special versions of both. A time scale is an arbitrary nonempty closed subset of the real numbers, and dynamic inequalities on time scales have received a great deal of attention in the literature, becoming a major field in pure and applied mathematics.
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain higher-dimensional special cases of the time-scale inequalities of Hardy and Copson. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs proceed by introducing restrictions on the operator in several cases, using the concepts of time-scale calculus, which unifies and extends many problems from the theories of differential and difference equations, together with the chain rule, some properties of multiple integrals on time scales, Fubini's theorem, and Hölder's inequality.
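For reference, the one-dimensional classical Hardy inequality that the study builds on reads, for p > 1 and a nonnegative measurable f on (0, ∞):

```latex
\int_0^{\infty} \left( \frac{1}{x}\int_0^{x} f(t)\,\mathrm{d}t \right)^{p} \mathrm{d}x
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f^{p}(x)\,\mathrm{d}x,
\qquad p > 1,
```

where the constant (p/(p-1))^p is sharp. The time-scale versions discussed above reduce to this inequality when the time scale is the real line.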

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 79
17465 Part Performance Improvement through Design Optimisation of Cooling Channels in the Injection Moulding Process

Authors: M. A. Alhubail, A. I. Alateyah, D. Alenezi, B. Aldousiri

Abstract:

In this study, conformal cooling channels (CCC) were employed to dissipate the heat of polypropylene (PP) parts injected into a stereolithography (SLA) insert to form tensile and flexural test specimens. The direct metal laser sintering (DMLS) process was used to fabricate a mould with optimised CCCs, while the optimum injection moulding parameters were obtained using Optimal-D. The results show that optimising the cooling channel layout using a DMLS mould significantly shortened the cycle time without sacrificing the parts’ mechanical properties. By applying conformal cooling channels, the cooling phase was reduced by 20 seconds and defective parts were eliminated.

Keywords: optimum parameters, injection moulding, conformal cooling channels, cycle time

Procedia PDF Downloads 214
17464 Analysis and Simulation of TM Fields in Waveguides with Arbitrary Cross-Section Shapes by Means of Evolutionary Equations of Time-Domain Electromagnetic Theory

Authors: Ömer Aktaş, Olga A. Suvorova, Oleg Tretyakov

Abstract:

The boundary value problem on a non-canonical, arbitrarily shaped contour is solved with a numerically effective method, the Analytical Regularization Method (ARM), to calculate propagation parameters. As a result of the regularization, an equation of the first kind is reduced to an infinite system of linear algebraic equations of the second kind in the space L2. This system can be solved numerically to any desired accuracy using the truncation method. The resulting parameters, such as the cut-off wavenumber and cut-off frequency, are used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media.
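As a hedged illustration of the cut-off parameters mentioned above, the canonical rectangular cross-section (where a closed form exists, unlike the arbitrary contours the ARM targets) gives the TM_mn cut-off frequency directly from the cut-off wavenumber:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tm_cutoff_hz(m: int, n: int, a: float, b: float) -> float:
    """Cut-off frequency (Hz) of the TM_mn mode in an air-filled rectangular
    waveguide of width a and height b (metres); TM modes require m, n >= 1.
    Illustrative only -- the paper's ARM handles arbitrary contours."""
    # cut-off wavenumber k_c for the rectangular cross-section
    k_c = math.sqrt((m * math.pi / a) ** 2 + (n * math.pi / b) ** 2)
    return C * k_c / (2 * math.pi)

# WR-90 guide (a = 22.86 mm, b = 10.16 mm): TM11 cuts off near 16.1 GHz
fc = tm_cutoff_hz(1, 1, 0.02286, 0.01016)
```

For non-canonical contours no such closed form exists, which is exactly why the cut-off parameters must be computed numerically via the regularized system.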

Keywords: analytical regularization method, electromagnetic theory evolutionary equations of time-domain, TM Field

Procedia PDF Downloads 485
17463 Communication of Expected Survival Time to Cancer Patients: How It Is Done and How It Should Be Done

Authors: Geir Kirkebøen

Abstract:

Most patients with serious diagnoses want to know their prognosis, in particular their expected survival time. As part of the informed consent process, physicians are legally obligated to communicate such information to patients. However, there is no established (evidence based) ‘best practice’ for how to do this. The two questions explored in this study are: How do physicians communicate expected survival time to patients, and how should it be done? We explored the first, descriptive question in a study with Norwegian oncologists as participants. The study had a scenario and a survey part. In the scenario part, the doctors should imagine that a patient, recently diagnosed with a serious cancer diagnosis, has asked them: ‘How long can I expect to live with such a diagnosis? I want an honest answer from you!’ The doctors should assume that the diagnosis is certain, and that from an extensive recent study they had optimal statistical knowledge, described in detail as a right-skewed survival curve, about how long such patients with this kind of diagnosis could be expected to live. The main finding was that very few of the oncologists would explain to the patient the variation in survival time as described by the survival curve. The majority would not give the patient an answer at all. Of those who gave an answer, the typical answer was that survival time varies a lot, that it is hard to say in a specific case, that we will come back to it later etc. The survey part of the study clearly indicates that the main reason why the oncologists would not deliver the mortality prognosis was discomfort with its uncertainty. The scenario part of the study confirmed this finding. The majority of the oncologists explicitly used the uncertainty, the variation in survival time, as a reason to not give the patient an answer. Many studies show that patients want realistic information about their mortality prognosis, and that they should be given hope. 
The question then is how to communicate the uncertainty of the prognosis in a way that is both realistic and hopeful. Based on psychological research, our hypothesis is that the best way to do this is to describe explicitly the variation in survival time, i.e., the (usually) right-skewed survival curve of the prognosis, and to emphasize to the patient the (small) possibility of being a 'lucky outlier'. We tested this hypothesis in two scenario studies with laypeople as participants. The data clearly show that people prefer to receive expected survival time as a median value together with explicit information about the survival curve's right-skewness (e.g., concrete examples of 'positive outliers'), and that communicating expected survival time this way not only provides hope but also gives a more realistic understanding than the typical way expected survival time is communicated. Our data indicate that it is not the existence of uncertainty in the mortality prognosis that is the problem for patients, but how this uncertainty is, or is not, communicated and explained.
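A minimal sketch, with hypothetical numbers not taken from the study, of why the median and the 'lucky outlier' framing matter for a right-skewed survival curve (here modelled as lognormal):

```python
from math import exp, log
from statistics import NormalDist

# Hypothetical illustration: survival time modelled as lognormal (a
# right-skewed curve) with a median of 12 months and log-scale spread 1.0.
MEDIAN_MONTHS = 12.0
SIGMA = 1.0
mu = log(MEDIAN_MONTHS)

# Under right skew the mean exceeds the median -- quoting only a single
# "expected" value hides this asymmetry.
mean_months = exp(mu + SIGMA ** 2 / 2)

def p_survive_beyond(t_months: float) -> float:
    """P(survival > t) under the lognormal model above."""
    return 1.0 - NormalDist(mu, SIGMA).cdf(log(t_months))

# A non-trivial chance of living five times the median: the 'lucky outlier'.
p_lucky = p_survive_beyond(60.0)
```

With these illustrative parameters, the mean is well above the 12-month median and about one patient in twenty outlives five times the median, which is the kind of concrete 'positive outlier' information the participants preferred.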

Keywords: cancer patients, decision psychology, doctor-patient communication, mortality prognosis

Procedia PDF Downloads 310
17462 A Simple Recursive Framework to Generate Gray Codes for Weak Orders in Constant Amortized Time

Authors: Marsden Jacques, Dennis Wong

Abstract:

A weak order is a way to rank n objects where ties are allowed. In this talk, we present a recursive framework to generate Gray codes for weak orders. We then describe a simple algorithm based on the framework that generates 2-Gray codes for weak orders in constant amortized time per string. The framework can easily be modified to generate other Gray codes for weak orders. As an example, we use the framework to generate the first shift Gray code for weak orders, also in constant amortized time, where consecutive strings differ by a shift or a symbol change.
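A brute-force enumeration (not the constant-amortized-time framework of the talk) can illustrate the Cayley-permutation encoding of weak orders:

```python
from itertools import product

def weak_orders(n: int):
    """Enumerate all weak orders (rankings with ties) of n objects, encoded
    as Cayley permutations: strings s of length n over {1..n} in which every
    value from 1 to max(s) occurs at least once. Brute force, for
    illustration only -- not the paper's CAT generation framework."""
    for s in product(range(1, n + 1), repeat=n):
        if set(s) == set(range(1, max(s) + 1)):
            yield s

# The number of weak orders follows the Fubini (ordered Bell) numbers.
counts = [sum(1 for _ in weak_orders(n)) for n in range(1, 5)]
```

For n = 3 this yields 13 strings, e.g. (1, 1, 1) for a three-way tie and (2, 1, 3) for a strict ranking; a Gray code orders such strings so that consecutive ones differ by a small change.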

Keywords: weak order, Cayley permutation, Gray code, shift Gray code

Procedia PDF Downloads 158
17461 Mitigation of Indoor Human Exposure to Traffic-Related Fine Particulate Matter (PM₂.₅)

Authors: Ruchi Sharma, Rajasekhar Balasubramanian

Abstract:

Motor vehicles emit a number of air pollutants, among which fine particulate matter (PM₂.₅) is of major concern in cities with high population density due to its negative impacts on air quality and human health. Typically, people spend more than 80% of their time indoors. Consequently, human exposure to traffic-related PM₂.₅ in indoor environments has received considerable attention. Most of the public residential buildings in tropical countries are designed for natural ventilation where indoor air quality tends to be strongly affected by the migration of air pollutants of outdoor origin. However, most of the previously reported traffic-related PM₂.₅ exposure assessment studies relied on ambient PM₂.₅ concentrations and thus, the health impact of traffic-related PM₂.₅ on occupants in naturally ventilated buildings remains largely unknown. Therefore, a systematic field study was conducted to assess indoor human exposure to traffic-related PM₂.₅ with and without mitigation measures in a typical naturally ventilated residential apartment situated near a road carrying a large volume of traffic. Three PM₂.₅ exposure scenarios were simulated in this study, i.e., Case 1: keeping all windows open with a ceiling fan on as per the usual practice, Case 2: keeping all windows fully closed as a mitigation measure, and Case 3: keeping all windows fully closed with the operation of a portable indoor air cleaner as an additional mitigation measure. The indoor to outdoor (I/O) ratios for PM₂.₅ mass concentrations were assessed and the effectiveness of using the indoor air cleaner was quantified. Additionally, potential human health risk based on the bioavailable fraction of toxic trace elements was also estimated for the three cases in order to identify a suitable mitigation measure for reducing PM₂.₅ exposure indoors. 
Indoor traffic-related PM₂.₅ levels exceeded the air quality guideline (12 µg/m³) in Case 1, i.e., under natural ventilation conditions, due to the advective flow of outdoor air into the indoor environment. With the indoor air cleaner in use, however, a significant reduction (p < 0.05) in indoor PM₂.₅ exposure levels was observed; specifically, the effectiveness of the air cleaner in reducing indoor PM₂.₅ exposure was estimated at about 74%. The health risk assessment likewise indicated a substantial reduction in potential health risk while using the air cleaner. This is the first study of its kind to evaluate indoor human exposure to traffic-related PM₂.₅ and identify a suitable exposure mitigation measure that can be implemented in densely populated cities to realize health benefits.
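The ~74% effectiveness figure corresponds to a simple fractional-reduction calculation; the concentrations below are hypothetical, chosen only to reproduce a value near the reported one:

```python
def cleaner_effectiveness(c_without: float, c_with: float) -> float:
    """Fractional reduction in indoor PM2.5 attributable to the air cleaner:
    (concentration without cleaner - with cleaner) / without cleaner."""
    return (c_without - c_with) / c_without

# Hypothetical concentrations in ug/m^3 (the abstract reports ~74%
# effectiveness but not the underlying measured values):
effectiveness = cleaner_effectiveness(c_without=23.0, c_with=6.0)
```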

Keywords: fine particulate matter, indoor air cleaner, potential human health risk, vehicular emissions

Procedia PDF Downloads 113
17460 Time-Domain Expressions for Bridge Self-Excited Aerodynamic Forces by Modified Particle Swarm Optimizer

Authors: Hao-Su Liu, Jun-Qing Lei

Abstract:

This study introduces the theory of the modified particle swarm optimizer and its application to time-domain expressions for bridge self-excited aerodynamic forces. Based on the indicial function expression and the rational function expression of the time-domain self-excited aerodynamic forces, the two fitting approaches, the modified particle swarm optimizer and the conventional search method, are compared in the flutter-derivative fitting process. Theoretical analysis and numerical results indicate that, whether the indicial function expression or the rational function expression is adopted, the flutter derivatives fitted by the modified particle swarm optimizer agree better with the experimental values. For flutter derivatives with higher nonlinearity, the self-excited aerodynamic forces computed from derivatives fitted by the modified particle swarm optimizer are much closer to the experimentally measured ones. The modified particle swarm optimizer was used to identify the parameters of the time-domain expressions for the flutter derivatives of an actual long-span double-deck highway-railway truss bridge at wind attack angles of 0°, -3°, and +3°. It was found that this method effectively overcomes the bounded-attenuation-coefficient problem of the conventional search method and is able to search an unbounded area. Accordingly, this study provides a practical way for the engineering community to obtain time-domain expressions for bridge self-excited aerodynamic forces frequently and efficiently.
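As a sketch of the optimizer family involved, a minimal global-best PSO looks like the following; this is the standard textbook variant, not the authors' modified version, whose changes are not detailed in the abstract:

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer. A generic sketch of the
    PSO family the paper modifies -- NOT the authors' modified variant."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sanity check on a convex test function (minimum 0 at the origin):
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting, f would be the misfit between the flutter derivatives implied by the candidate indicial/rational-function parameters and the experimental derivatives; note that the particle positions here are not clamped to [lo, hi], consistent with searching an unbounded area.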

Keywords: time-domain expressions, bridge self-excited aerodynamic forces, modified particle swarm optimizer, long-span highway-railway truss bridge

Procedia PDF Downloads 304
17459 Congestion Mitigation on an Urban Arterial through Infrastructure Intervention

Authors: Attiq Ur Rahman Dogar, Sohaib Ishaq

Abstract:

Pakistan has experienced rapid motorization in the last decade. Due to soft bank leasing schemes and an increase in average household income, even the middle class can now afford cars, while the public transit system remains inadequate and sparse. For these reasons, traffic demand on urban arterials has increased manifold, and poor urban transit planning and aging transportation systems have resulted in traffic congestion. The focus of this study is to improve traffic flow on a section of the N-5 passing through the Rawalpindi downtown. The present effort analyzes traffic conditions on this section and investigates the impact of traffic signal coordination on travel time. In addition to signal coordination, we also examined the effect of different infrastructure improvements on travel time. After an economic analysis of the alternatives and subsequent discussion, an improvement plan for the Rawalpindi downtown urban arterial section is proposed for implementation.
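One standard tool for the cycle-length analysis mentioned in the keywords is Webster's optimum cycle formula; the sketch below assumes the textbook form with illustrative inputs, not values or methods from the study:

```python
def webster_cycle_length(lost_time_s: float, critical_flow_ratio_sum: float) -> float:
    """Webster's optimum cycle length C0 = (1.5L + 5) / (1 - Y), where L is
    the total lost time per cycle (s) and Y the sum of the critical flow
    ratios over all phases. Textbook formula, offered only as a sketch."""
    if not 0 < critical_flow_ratio_sum < 1:
        raise ValueError("Y must lie strictly between 0 and 1")
    return (1.5 * lost_time_s + 5) / (1 - critical_flow_ratio_sum)

# Illustrative inputs: 12 s lost time, Y = 0.7 -> C0 ~ 76.7 s
c0 = webster_cycle_length(lost_time_s=12.0, critical_flow_ratio_sum=0.7)
```

Coordinating adjacent signals then amounts to choosing a common cycle length near the largest such C0 along the arterial and offsetting the green starts by the travel time between intersections.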

Keywords: signal coordination, infrastructure intervention, infrastructure improvement, cycle length, fuel consumption cost, travel time cost, economic analysis, travel time, Rawalpindi, Pakistan, traffic signals

Procedia PDF Downloads 305
17458 The Role of Estradiol-17β and Type IV Collagen on the Regulation and Expression Level Of C-Erbb2 RNA and Protein in SKOV-3 Ovarian Cancer Cell Line

Authors: Merry Meryam Martgrita, Marselina Irasonia Tan

Abstract:

Among the most aggressive cancers are those that overexpress the c-erbB2 receptor along with the estrogen receptor. Components of the extracellular matrix play an important role in increasing cancer cell proliferation, migration, and invasion. Both components can affect cancer development by regulating signal transduction pathways in cancer cells. In this research, the SKOV-3 ovarian cancer cell line, which overexpresses the c-erbB2 receptor, was cultured on type IV collagen and treated with estradiol-17β to reveal the role of both components in the RNA and protein levels of the c-erbB2 receptor. We observed a modulation phenomenon of increasing and decreasing c-erbB2 RNA levels and a stabilization of c-erbB2 protein expression due to estradiol-17β and type IV collagen. Estradiol-17β appears to play an important role in increasing c-erbB2 transcription and stabilizing c-erbB2 protein expression, whereas type IV collagen has the opposite role: it blocked c-erbB2 transcription when bound to the integrin receptor in SKOV-3 cells.

Keywords: c-erbB2, estradiol-17β, SKOV-3, type IV collagen

Procedia PDF Downloads 272
17457 The Use of Energy Efficiency and Renewable Energy in Building for Sustainable Development

Authors: Zakariya B. H., Idris M. I., Jungudo M. A.

Abstract:

High energy consumption in Nigerian urban settlements is escalating due to strong population growth and crisis-driven migration. The demand for lighting, heating, ventilation, and air conditioning (LHVAC) is rising, yet the electricity supply to both rural and urban settlements in Nigeria is poor, so generators are widely used as an energy source for LHVAC. Energy efficiency can be defined as any measure taken to reduce the amount of energy consumed for heating, ventilation, and air conditioning (HVAC) and for household appliances such as computers, stoves, refrigerators, and televisions. The aim of the study was to minimize energy consumption in buildings through the integration of energy efficiency and renewable energy in the building sector. Some of the energy-efficient buildings within the study area were identified; the study covers three major Nigerian cities, namely Abuja, Kaduna, and Lagos. The cost of investment in energy efficiency and renewable energy was determined and compared with that of fossil energy sources for conventional buildings. Findings revealed that low-energy and energy-efficient buildings in Nigeria are cheaper than conventional ones. Based on these findings, construction stakeholders are strongly encouraged to move away from conventional buildings and adopt energy efficiency and renewable energy in buildings.

Keywords: energy, efficiency, LHVAC, sustainable development

Procedia PDF Downloads 564
17456 Adaptive Data Approximations Codec (ADAC) for AI/ML-based Cyber-Physical Systems

Authors: Yong-Kyu Jung

Abstract:

The fast growth in information technology has led to growing demands to access and process data. Cyber-physical systems (CPSs) depend heavily on the timing of hardware/software operations and of communication over the network, i.e., on real-time and parallel operation (e.g., in autonomous vehicles). Data processing is an important means of addressing this data-management challenge by reducing the gap between technological growth on one side and data complexity and channel bandwidth on the other. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. An ADAC, implemented as an accelerator and/or as apps for servers and smart connected devices, adaptively rescales digital content (by 62.8% on average), reducing data processing/access time and energy as well as encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition).

Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity

Procedia PDF Downloads 68
17455 Impact of Global Warming on the Total Flood Duration and Flood Recession Time in the Meghna Basin Using Hydrodynamic Modelling

Authors: Karan Gupta

Abstract:

Floods cause huge losses each year, and their impact is magnified as the total flood duration and recession time increase. Moreover, floods have increased in recent years due to climate change in floodplains. In the context of global climate change, the Paris Agreement (2015) aims to keep the increase in the global average temperature well below 2°C, and preferably within 1.5°C. This study therefore investigates the impact of increasing temperature on the stage, discharge, total flood duration, and recession time in the Meghna River basin in Bangladesh. It considers 100-year return period flood flows in the Meghna River under specific warming levels (SWLs) of 1.5°C, 2°C, and 4°C. The results showed that the rate of increase of flood duration is nearly 50% lower at ∆T = 1.5°C than at ∆T = 2°C, whereas the rate of increase of recession duration is 75% lower at ∆T = 1.5°C than at ∆T = 2°C. Understanding the changes in total flood duration and recession time gives better insight for effectively planning flood mitigation measures.

Keywords: flood, climate change, Paris Agreement, Bangladesh, inundation duration, recession duration

Procedia PDF Downloads 129